Canonical Voices

Posts tagged with 'linux'

Prakash Advani

If you are still on XP, what are your plans?

11% of the (admittedly small) pool of 641 companies queried stated that they intend to switch to Linux. Low cost, robust security and a growing reputation in enterprise use are likely the key factors informing such plans.

Perhaps more shocking is that 37% of those asked intend to stick with Windows XP past the expiry date. Of those, 40% reason that since ‘it works’ there’s little need to change, while 39% claim software they rely on depends on XP.

Read More: http://www.omgubuntu.co.uk/2014/02/windows-xp-users-may-switch-linux


Read more
Prakash Advani

Many schools in Romania today are using proprietary software like Microsoft Windows and Microsoft Office — most of it either unlicensed copies or old unsupported versions, for which the schools may face legal issues, according to the Education Ministry of Romania. To tackle this problem, the Ministry recommends that schools either purchase newer, licensed copies of this software or switch to open source solutions like GNU/Linux, particularly Ubuntu and Edubuntu.

Read More: http://www.muktware.com/2014/02/romanian-edu-ministry-recommends-ubuntu-schools/21844

Read more
Prakash Advani

Demand for people with Linux skills is increasing, a trend that appears to follow a shift in server sales.

Cloud infrastructure, including Amazon Web Services, is largely Linux based, and cloud services’ overall growth is increasing Linux server deployments. As many as 30% of all servers shipped this year will go to cloud services providers, according to research firm IDC.

This shift may be contributing to Linux hiring trends reported by the Linux Foundation and IT careers website Dice, in a report released Wednesday. The report states that 77% of hiring managers have put hiring Linux talent on their list of priorities, up from 70% a year ago.

Read More: http://www.computerworld.in/news/demand-for-linux-skills-rises

Read more
Prakash

Intel’s answer for next-generation wearable technology is the Edison, an SD-card-sized computer launched at CES.

Features: 

  • Dual-core low-power 22nm 400MHz Intel Quark processor
  • Integrated Wi-Fi
  • Bluetooth
  • Runs Linux


Read more
Alberto Milone

In this Ubuntu release cycle I worked, among other things, on improving the user experience on hybrid systems with proprietary graphics drivers. The aim was to make it easier to enable the discrete card when in need of better performance, i.e. when the integrated card wouldn’t be enough.

In 13.10 I focused mainly on enablement, making sure that by installing one extra package together with the driver, users would end up with a fully working system with no additional configuration required on their end.
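
On a 13.10 system with NVIDIA-based hybrid graphics, that boils down to something like the following (a sketch; nvidia-319 is one of the driver packages in the 13.10 archive, and nvidia-prime is assumed here to be the extra package in question):

$ sudo apt-get install nvidia-319 nvidia-prime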

As for 12.04.3, I backported my work from 13.10 and I also made sure that Jockey (the restricted drivers manager in Precise) detects systems with hybrid graphics, recommends the correct driver – hiding any drivers which may support the card but not in a hybrid graphics context – and installs the extra package when users decide to enable the discrete card. The installation process is very straightforward; however, if you’re still using the old kernel/X stack, Jockey won’t show any drivers. The backported stack from Raring (which comes by default with 12.04.3) is required.

There are some known issues, which will be fixed in the near future.

If you would like to try this work on your system, you can find the instructions here.


Read more
Prakash

This machine isn’t your standard corporate-issue device, but a machine that is open in its design from top to bottom.

Every component in Huang’s laptop, known as the Novena, is open. Datasheets describing the design and workings of each component – from the motherboard through to the ports and the various processors – are documented and freely available online. Anyone with the expertise can build complete firmware for each component from source.

The question is why did Huang, former hardware lead on the open source Chumby internet appliance, decide to do it?

Read More: http://www.zdnet.com/building-the-open-source-laptop-how-one-engineer-turned-the-geek-fantasy-to-reality-7000018987/

Read more
Prakash

Intel has shipped its first “open source PC,” a bare-bones computer aimed at software developers building x86 applications and hobbyists looking to construct their own computer.
The PC, called the MinnowBoard, is basically a motherboard with no casing around it. It was codeveloped by Intel and CircuitCo Electronics, a company that specializes in open-source motherboards, and went on sale this month for US$199 from a handful of retailers.
It’s the first open-source PC to be offered with an Intel x86 processor, and the board’s schematics and design files are published and can be replicated under a Creative Commons license.

MinnowBoard includes 1GB of DDR2 memory, an HDMI port, Gigabit Ethernet, USB ports, and a micro-SD slot for expandable storage. The board’s open-source UEFI firmware allows for the development of custom secure boot environments.

The board comes pre-loaded with the Angstrom Linux distribution and is compatible with the Yocto Project, which enables the creation of hardware-agnostic Linux-based systems.

Read More http://www.computerworld.in/news/intels-first-open-source-pc-sale-199-122852013

Read more
Prakash

Rockchip’s RK3188 processor is one of the fastest ARM Cortex-A9 chips around. The 28nm quad-core processor outperforms the chips found in the Samsung Galaxy S III and Google Nexus 7, for instance. And it’s a relatively inexpensive chip, which explains why it’s proven popular with Chinese tablet and TV box makers.

Most devices featuring the RK3188 processor ship with Android 4.1 or Android 4.2. But soon you may be able to run Ubuntu, Fedora, or other desktop Linux operating systems on an RK3188 device.

Read More.

Read more
John Pugh

Oh boy. June stormed in and the May installment is late! Not much changed at the top: the Northern Hemisphere spring storms keep Stormcloud in first place, with Fluendo DVD staying put at the number two spot. Steam continues its top-of-the-chart spree on the Free Top 10.

Want to develop for the new Phone and Tablet OS, Ubuntu Touch? Be sure to check out the “Go Mobile” site for details.

Top 10 paid apps

  1. Stormcloud
  2. Fluendo DVD Player
  3. Filebot
  4. Quick ‘n Easy Web Builder
  5. MC-Launcher
  6. Mini Minecraft Launcher
  7. Braid
  8. UberWriter
  9. Drawers
  10. Bastion

Top 10 free apps

  1. Steam
  2. Motorbike
  3. Master PDF Editor
  4. Youtube to MP3
  5. Screencloud
  6. Nitro
  7. Splashtop Remote Desktop App for Linux
  8. CrossOver (Trial)
  9. Plex Media Server
  10. IntelliJ IDEA 12 Community Edition

Would you like to see your app featured in this list and on millions of users’ computers? It’s a lot easier than you think:

Notes:

  • The lists of top 10 app downloads include only those applications submitted through My Apps on the Ubuntu App Developer Site. For more information about the usage of other applications in the Ubuntu archive, check out the Ubuntu Popularity Contest statistics.
  • The top 10 free apps list contains gratis applications that are distributed under different types of licenses, some of which may not be open source. For detailed license information, please check each application’s description in the Ubuntu Software Center.

Follow Ubuntu App Development on:


Read more
rvr

In my daily work at Canonical I use VMs quite often to test Ubuntu Web Apps and new browser releases in different environments. I use a MacBook Pro 15" (mid 2012) as my main computer, currently running Ubuntu 13.04. This computer had rEFIt as its boot manager alongside the default OS X system. Of the 500 GB, just 75 were dedicated to Linux, so I was forced to delete VMs or back them up to an external hard disk in order to re-use them. Finally, I decided to buy an SSD, which are quite cheap nowadays.

The SSD I bought was a Samsung 840 250 GB (not the Pro version, which has some additional features). It cost 179 €.

These are the steps I followed to move my Ubuntu setup from the HD to the SSD.

  1. Burn a DVD with Ubuntu. I reused an old 12.04 disk, the -mac version.
  2. Boot Ubuntu from the DVD.
  3. Attach an external USB drive. This is used to (back up and) copy the partition from the HD to the SSD.
  4. Run gparted as root to copy the Linux partition (in my case, an ext4).

To copy the partition, I resized the USB drive's partitions to make room for the HD's Linux partition, which is 75 GB. Then, in gparted, I selected the partition on the HD, selected "Copy" from the partition menu and then "Paste" into the spare space on the USB drive. This step took over 40 minutes. At this point, the Linux partition is available on the external disk.
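
An equivalent command-line approach is a raw copy with dd (a sketch; the device names are examples, and the target partition must be at least as large as the source):

$ sudo dd if=/dev/sda5 of=/dev/sdc2 bs=4M

The gparted route is slower to click through but harder to get wrong, since it handles sizes and offsets for you.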

After that, I switched off the computer and replaced the HD with the SSD. Follow the link to see how. Now, the second part: moving and setting up the system on the solid-state disk. The SSD was blank, so it needed proper configuration.

  1. Boot Ubuntu from the DVD.
  2. Attach the external USB drive.
  3. Run gparted.
  4. Create a partition table in the SSD. I used GUID Partition Table (gpt) format, the one the original HD uses.
  5. Partition the SSD.
    • Create an EFI partition. The first partition has FAT32 format, 200 MB in size, "EFI" as label and "grub_boot" flag. 
    • Create the swap partition. At the end of the disk, I created a 10 GB "linux-swap" partition.
    • The rest of the disk will be available for the main Linux partition.
  6. Copy the Linux partition to the SSD. In gparted, select the Linux partition in the USB drive, copy and paste it in the SSD. This takes +45 minutes.
  7. Resize the Linux partition to fill the entire disk and flag it as "boot".

Congratulations! The Linux partition is now copied bit-by-bit to the SSD. However, it cannot boot yet, for two reasons: rEFIt (the bootloader) is not installed, and the Linux partitions are not properly configured. To fix the latter, we need to modify the file /etc/fstab on the Linux partition (SSD).

And this is the third and final step: setting up the system to boot properly. In my SSD, /dev/sda1 is the EFI partition, /dev/sda2 the Linux partition and /dev/sda3 the swap partition. You may have a different setup; /etc/fstab must be changed to reflect these addresses. From a terminal:

$ sudo mkdir /media/root 

$ sudo mount /dev/sda2 /media/root

The Linux partition is accessible in the directory /media/root/. The file /etc/fstab of the Linux partition can now be edited at /media/root/etc/fstab:

$ gksu gedit /media/root/etc/fstab

This is the original content:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda5 during installation
UUID=24acabd4-2fcb-49aa-9fb3-ce9e657d4465 / ext4 errors=remount-ro,user_xattr 0 1
# swap was on /dev/sda7 during installation
UUID=c40532ab-5716-47bc-9b95-3672b834c6a2 none swap sw 0 0

Here are the modifications:

# / was on /dev/sda5 during installation
/dev/sda2 / ext4 discard,errors=remount-ro,user_xattr 0 1
# swap was on /dev/sda7 during installation
/dev/sda3 none swap sw 0 0

The blkid command can be run as root to find out the devices' UUID strings, which can be used instead.
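
For example (a sketch; the UUID shown is the root partition's UUID from the fstab above, yours is whatever blkid reports):

$ sudo blkid /dev/sda2
/dev/sda2: UUID="24acabd4-2fcb-49aa-9fb3-ce9e657d4465" TYPE="ext4"

The matching fstab line would then begin with UUID=24acabd4-2fcb-49aa-9fb3-ce9e657d4465 instead of /dev/sda2.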

Then, I downloaded the compiled version of rEFInd, a fork of the rEFIt bootloader that can be installed from Linux, Mac and Windows.

$ cd /media/root/root/

$ sudo wget http://downloads.sourceforge.net/project/refind/0.6.11/refind-bin-0.6.11.zip

This is almost done. We now "log" into the Linux partition (SSD) to apply the changes and set up the bootloaders. In order to do that successfully, /proc and /dev from the live DVD are bind-mounted into the Linux partition, and the EFI partition is mounted as well (rEFInd needs it).

$ sudo mkdir /media/root/boot/efi

$ sudo mount -B /proc /media/root/proc

$ sudo mount -B /dev /media/root/dev

$ sudo mount /dev/sda1 /media/root/boot/efi

$ sudo chroot /media/root/

Now we're "logged" into the Linux partition as the root user. This is when GRUB and rEFInd are installed on the SSD, to make it possible to boot Linux from the Mac.

# mount -t sysfs sysfs /sys

# cd /root/

# unzip refind-bin-0.6.11.zip

# cd refind-bin-0.6.11

# ./install-bin.sh --esp --alldrivers

# grub-install /dev/sda

# update-grub

# exit

And that's all. The system should be able to boot the Linux partition from the SSD.
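
After the first boot from the SSD, a quick sanity check (a sketch; output trimmed) is to confirm that the root filesystem is mounted with the discard option from the new fstab:

$ mount | grep ' / '
/dev/sda2 on / type ext4 (rw,discard,errors=remount-ro,user_xattr)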

Some caveats and open questions:

  • I installed rEFInd first, without GRUB, and the system wasn't able to boot.
  • For some reason, rEFInd on my system is much slower than rEFIt. It takes around 40 seconds to show up.
  • After installing GRUB, I'm not able to mount the EFI partition. It now reports an unknown partition format.
  • Does GRUB really need rEFInd?


Read more
Prakash

While it is not certain whether Google is going to offer Android or ChromeOS for PCs, Intel is already working on a $200 Android PC to boost sagging PC sales.

So far, the notebook market is dominated by two players, Windows and OS X, but there’s an operating system that could drop into this mix and be highly disruptive — Android.

There’s been a lot of discussion bouncing around the tech blogosphere about Intel’s plans to get all disruptive and start supporting Android on devices that will cost in the region of $200.

While Microsoft might not be happy about being sidelined by a company that was once one of its biggest supporters, this is exactly what the PC industry needs.

Think this is a huge leap? It isn’t. Some of Intel’s Atom processors are already compatible with Android 4.2 Jelly Bean.

Read More.


Read more
Prakash

Ubuntu 13.04 is here. Torrent is the preferred method for me.

Ubuntu 13.04 – Torrent Links and Direct Downloads:

  • Ubuntu Desktop 13.04 64-Bit: Torrent | Main Server
  • Ubuntu Desktop 13.04 32-Bit: Torrent | Main Server
  • Ubuntu Server 13.04 64-Bit: Torrent | Main Server
  • Ubuntu Server 13.04 32-Bit: Torrent | Main Server

Other releases.

http://releases.ubuntu.com/13.04/ (Ubuntu Desktop and Server)
http://cloud-images.ubuntu.com/releases/13.04/release/ (Ubuntu Cloud Server)
http://cdimage.ubuntu.com/netboot/13.04/ (Ubuntu Netboot)
http://cdimage.ubuntu.com/ubuntu-core/releases/13.04/release/ (Ubuntu Core)
http://cdimage.ubuntu.com/edubuntu/releases/13.04/release/ (Edubuntu DVD)
http://cdimage.ubuntu.com/kubuntu/releases/13.04/release/ (Kubuntu)
http://cdimage.ubuntu.com/lubuntu/releases/13.04/release/ (Lubuntu)
http://cdimage.ubuntu.com/ubuntustudio/releases/13.04/release/ (Ubuntu Studio)
http://cdimage.ubuntu.com/ubuntu-gnome/releases/13.04/release/ (Ubuntu-GNOME)
http://cdimage.ubuntu.com/ubuntukylin/releases/13.04/release/ (UbuntuKylin)
http://cdimage.ubuntu.com/xubuntu/releases/13.04/release/ (Xubuntu)

As always, have fun :)

Ubuntu Unleashed 2012 Edition: Covering 11.10 and 12.04 (7th Edition)

Read more
Prakash

From:  http://www.slideshare.net/blackducksoftware/the-2013-future-of-open-source-survey-results

Black Duck and North Bridge have announced the results of the seventh annual Future of Open Source Survey. The 2013 survey represents the insights of more than 800 respondents – the largest pool in the survey’s history – from both non-vendor and vendor communities.

Read more
Prakash

Netflix, the popular video-streaming service that takes up a third of all internet traffic during peak hours, isn’t just the single largest internet traffic service. Netflix, without doubt, is also the largest pure cloud service.

Netflix, with more than a billion video delivery instances per month, is the largest cloud application in the world.

At the Linux Foundation’s Linux Collaboration Summit in San Francisco, California, Adrian Cockcroft, director of architecture for Netflix’s cloud systems team, after first thanking everyone “for building the internet so we can fill it with movies”, said that Netflix’s Linux, FreeBSD, and open-source based services are “cloud native”.

By this, Cockcroft meant that even with more than a billion video instances delivered every month over the internet, “there is no datacenter behind Netflix”. Instead, Netflix, which has been using Amazon Web Services since 2009 for some of its services, moved its entire technology infrastructure to AWS in November 2012.

Read More.

Read more
facundo

Linux Containers


At the level of general-purpose virtual machines (so, leaving aside ScummVM and the like) I've always used VirtualBox. Even though it now belongs to Oracle, which I don't look upon kindly, it has always worked quite well (as long as you don't ask too much of it), and it's a good way to keep a Windor running even if you spend the whole day in Linux (for example, to be able to file invoices with AFIP, damn them).

Even when I worked at Ericsson, where I was forced to use Windor, I had a VMWare image with Ubuntu installed (a Gutsy, or a Hardy, I think... so much water under the bridge!) which I used whenever I had to do serious network-level work, or for that matter anything fun.

But I had never found a nice way to have Linux virtual machines under Linux. And by "nice" I mean something that works well and is relatively easy to set up.

And this is where LXC comes in.

Linux container

Although LXC is not strictly speaking a "virtual machine" (it's more of a "virtual environment"), it still lets you run a Linux whose configuration, installed packages, and anything you might break in the system stay separate from the machine you're actually on.

What can it be used for? In my case I use it a lot at work, since my development machine is an Ubuntu Quantal but the systems running on the servers are on Precise or Lucid (so I have a container for each one). I also plan to use them to test installations from scratch (for example, when building a .deb for the first time, trying to install it on a clean machine).

How do you create and use a container? After installing the necessary packages (sudo apt-get install lxc libvirt-bin), creating a container is preeeeetty simple (from here on, replace "mi-lxc" everywhere with whatever name you want for your container):

    sudo lxc-create -t ubuntu -n mi-lxc -- -r precise -a i386 -b $USER

Let's break that down. The -t is the template to use, and -n is the name we're giving the container. After that we see a "--", which means that the rest are options for the template itself. In this case: use the Precise release, the i386 architecture, and my own user.

The wonderful thing about this is that the container, inside, has my user, because the home directory is shared! And with it all the configuration for bash, vim, ssh, gnupg, etc., so "doing things" inside the lxc is immediate, with nothing to set up (but, at the same time, we can "break" our home from inside the container, so watch out).

To start the container we can do

    sudo lxc-start -n mi-lxc

And this will leave us at a prompt ready to log in, where our own username and password are enough. Once inside, we can use the container as if it were a brand-new machine.

All very nice, but I still like to apply a few configurations that make using it even more direct and simple. These configurations, at the system level, are basically there so that we can get into the container more easily, and use graphical applications from inside it.

To get in more easily, we need to have Avahi configured. Beyond installing it (sudo apt-get update; sudo apt-get install avahi-daemon), there is one detail to tweak: inside the lxc, open the file /etc/avahi/avahi-daemon.conf and raise rlimit-nproc considerably (for example, from 3 to 300).
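
The resulting fragment of /etc/avahi/avahi-daemon.conf inside the container looks like this (only this value changes; the rest of the file stays as it was):

    [rlimits]
    rlimit-nproc=300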

With this we're ready to get into the container easily. We can try it from another terminal:

    ssh mi-lxc.local

Nice, isn't it? But it's also good to be able to forward X events, so we can launch graphical applications. For that we have to change the following on the host (that is, not in the container but on the "real" machine): edit /var/lib/lxc/mi-lxc/fstab and add the line:

    /tmp/.X11-unix tmp/.X11-unix none bind

In the container, we have to make sure that /tmp/.X11-unix exists, and restart the container after these configurations.
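
A quick check from inside the container before restarting (a small extra precaution, not strictly part of the recipe), creating the mount point if it doesn't exist:

    [ -d /tmp/.X11-unix ] || sudo mkdir /tmp/.X11-unix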

We also need to set DISPLAY. I mixed this into my .bashrc, together with something that changes the prompt colour when I log in over ssh (even using different colours for different containers). What I'm using is:

    if [ `hostname` = "mi-lxc" ]; then
        export PS1='\[\e[1;34m\]\u@\h:\w${text}$\[\e[m\] ';
        export DISPLAY=:0
    fi

To finish, here are the three commands I use most day to day with the containers, beyond the installation itself: start the container (note the -d, which starts it as a daemon, since we connect over ssh anyway); connect to it (note the -A, which forwards the authentication agent connection); and finally shut the container down:

    sudo lxc-start -n mi-lxc -d
    ssh -A mi-lxc.local
    sudo lxc-stop -n mi-lxc

Enjoy.

Read more
Prakash

With Windows 8 pushing a “touch-first” desktop interface—Microsoft’s words, not ours—and with Valve’s Steam on Linux beginning to bring much-needed games and popular attention to the oft-overlooked operating system, there’s never been a better time to take Linux out for a test drive.

Dipping your toes into the penguin-filled waters of the most popular open-source ecosystem is easy, and you don’t have to commit to switching outright to Linux. You can install it alongside your current Windows system, or even try it without installing anything at all.

Ubuntu is the most popular Linux distribution for desktop and laptop Linux users, so we’ll focus on Ubuntu throughout this guide. For the most part, Ubuntu just plain works. It sports a subtle interface that stays out of your way. It enjoys strong support from software developers (including Valve, since Steam on Linux only officially supports Ubuntu). And you can find tons of information online if you run into problems.

Read more.

Read more
brendandonegan

The inaugural online UDS (or vUDS as it’s becoming known) is underway. This brings with it a number of new challenges in terms of running a good session. Having sat in on a few sessions yesterday, and having been the session lead for several sessions at physical UDSes going back nearly two years, I thought I’d jot down a few tips on how to run a good session.

Prepare

Regardless of whether the session is physical or virtual, it’s always important to prepare. The purpose of a UDS session is to get feedback on some proposed plan of work (even if it is extremely nebulous at the time of the session). Past experience shows that sessions are always more productive when most of the plan is already fleshed out prior to the session and the session basically functions as a review/comments meeting. This depends on your particular case, though, since the thing you are planning may not be possible to flesh out in a lot of detail without feedback. I personally find that is rarely the case.

Be punctual

UDS runs on a tight schedule, at least in the physical version, and I don’t see any good reason why this should change for vUDS. Punctuality is therefore important not just as good manners but from a practical point of view. You need time to compose yourself, find your notes and make sure everything is set up. For a physical UDS this meant checking that microphones were working and projectors were projecting. For a vUDS, in my brief experience, it means making sure everyone who needs to be is invited into the hangout, that the etherpad is up, and that the video feed is working on the session page.

Delegate

As the session lead it is your responsibility to run a good session; however, it will be impossible for you to perform all the tasks required to achieve this on your own. Someone needs to be making notes on the Etherpad and someone needs to be monitoring IRC. You should also be looking out for questions yourself, but since you may be concentrating on conveying information and answering other questions, you do need help with this.

Avoid going off track

Time is limited in a UDS session and you may have a lot of points to get through. Be wary of getting distracted from the point of the session and discussing things that may not be entirely relevant. Don’t be afraid to cut people short – if the question is important to them then you can follow up offline later.

Manage threads of communication

This one is quite vUDS specific, but especially now that audiovisual participation is limited, it is important that all of the conversation take place in one spot, particularly for the people who will be catching up with the video streams later on. Don’t allow a parallel conversation to develop on IRC if possible. If someone asks a question in IRC, repeat it to the video audience and answer it in the hangout, not on IRC. If someone is talking a lot in IRC and even answering questions, do invite them into the hangout so that what they’re saying can be recorded. It may not be possible to avoid this entirely, but as session lead you need to do your best to mitigate it.

Follow up

Not so much a tip for running a good session, but for getting the best from a good session. Remember to read the notes from the session and rewatch the video so that you can use the feedback to adapt your plan and find places to follow up.

That’s all there is to say. I really hope this first virtual UDS goes very well and that sessions are productive for everyone involved.


Read more
alex

Appy polly loggies for the super long delay between episodes, but I finally carved out some time for our exciting dénouement in the memory leak detection series. Past episodes included detection and analysis.

As a gentle reminder, during analysis, we saw the following block of code:

 874                 GSList *dupes = NULL;
 875                 const char *path;
 876 
 877                 dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupes");
 878                 path = nm_object_get_path (NM_OBJECT (ap));
 879                 dupes = g_slist_prepend (dupes, g_strdup (path));
 880 #endif
 881                 return NULL;

And we concluded with:

Is it safe to just return NULL without doing anything to dupes? Maybe that’s our leak?
We can definitively say that it is not safe to return NULL without doing anything to dupes. We definitely allocated memory, stuck it into dupes, and then threw dupes away. This is our smoking gun.

But there’s a twist! Eagle-eyed reader Dave Jackson (a former colleague of mine from HP, natch) spotted a second leak! It turns out that line 879 was exceptionally leaky during its inception. As Dave points out, the call to g_slist_prepend() passes g_strdup() as an argument. And as the documentation says:

Duplicates a string. If str is NULL it returns NULL. The returned string should be freed with g_free() when no longer needed.

In memory-managed languages like python, the above idiom of passing a function as an argument to another function is quite common. However, one needs to be more careful about doing so in C and C++, taking great care to observe if your function-as-argument allocates memory and returns it. There is no mechanism in the language itself to automatically free memory in the above situation, and thus the call to g_strdup() seems like it also leaks memory. Yowza!
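
As a minimal, self-contained sketch of that rule (a generic illustration, not code from NetworkManager): every string returned by g_strdup() is owned by whoever holds the pointer, and must eventually be handed to g_free(), even when the allocation happens inline as a function argument:

#include <glib.h>

static void
free_all_strings (GSList *list)
{
    /* release each g_strdup()'d string, then the list itself */
    g_slist_foreach (list, (GFunc) g_free, NULL);
    g_slist_free (list);
}

int
main (void)
{
    GSList *list = NULL;

    /* the string allocated by g_strdup() is now owned by the list;
       nothing frees it automatically when the list goes away */
    list = g_slist_prepend (list, g_strdup ("some path"));

    free_all_strings (list);
    return 0;
}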

So, what to do about it?

The basic goal here is that we don’t want to throw dupes away. We need to actually do something with it. Here again are the most pertinent lines.

 877                 dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupes");
 878                 path = nm_object_get_path (NM_OBJECT (ap));
 879                 dupes = g_slist_prepend (dupes, g_strdup (path));
 881                 return NULL;

Let’s break these lines down.

  1. On line 877, we retrieve the dupes list from the dup_data.found object
  2. Line 878 gets a path to the duplicate wifi access point
  3. Finally, line 879 adds the duplicate access point to the old dupes list
  4. Line 881 throws it all away!

To me, the obvious thing to do is to change the code between lines 879 and 881, so that after we modify the duplicates list, we save it back into the dup_data object. That way, the next time around, the list stored inside of dup_data will have our updated list. Makes sense, right?

As long as you agree with me conceptually (and I hope you do), I’m going to take a quick shortcut and show you the end result of how to store the new list back into the dup_data object. The reason for the shortcut is that we are now deep in the details of how to program using the glib API, and like many powerful APIs, the key is to know which functions are necessary to accomplish your goal. Since this is a memory leak tutorial and not a glib API tutorial, just trust me that the patch hunk will properly store the dupes list back into the dup_data object. And if it’s confusing, as always, read the documentation for g_object_steal_data and g_object_set_data_full.

@@ -706,14 +706,15 @@
 +		GSList *dupes = NULL;
 +		const char *path;
 +
-+		dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupes");
++		dupes = g_object_steal_data (G_OBJECT (dup_data.found), "dupes");
 +		path = nm_object_get_path (NM_OBJECT (ap));
 +		dupes = g_slist_prepend (dupes, g_strdup (path));
++		g_object_set_data_full (G_OBJECT (dup_data.found), "dupes", (gpointer) dupes, (GDestroyNotify) clear_dupes_list);
 +#endif
  		return NULL;
  	}

If the above patch format looks funny to you, it’s because we are changing a patch.

-+		dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupes");
++		dupes = g_object_steal_data (G_OBJECT (dup_data.found), "dupes");

This means the old patch had the line calling g_object_get_data() and the refreshed patch now calls g_object_steal_data() instead. Likewise…

++		g_object_set_data_full (G_OBJECT (dup_data.found), "dupes", (gpointer) dupes, (GDestroyNotify) clear_dupes_list);

The above call to g_object_set_data_full is a brand new line in the new and improved patch.

Totally clear, right? Don’t worry, the more sitting and contemplating of the above you do, the fuller and more awesomer your neckbeard grows. Don’t forget to check it every once in a while for small woodland creatures who may have taken up residence there.

And thus concludes our series on how to detect, analyze, and fix memory leaks. All good? Good.

waiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiit!!!!!!!11!

I can hear the observant readers out there already frantically scratching their necks and getting ready to point out the mistake I made. After all, our newly refreshed patch still contains this line:

 +		dupes = g_slist_prepend (dupes, g_strdup (path));

And as we determined earlier, that’s our incepted memory leak, right? RIGHT!??

Not so fast. Take a look at the new line in our updated patch:

++		g_object_set_data_full (G_OBJECT (dup_data.found), "dupes", (gpointer) dupes, (GDestroyNotify) clear_dupes_list);

See that? The last argument to g_object_set_data_full() looks quite interesting indeed. It is, in fact, a cleanup function named clear_dupes_list(), which, according to the documentation, will be called

when the association is destroyed, either by setting it to a different value or when the object is destroyed.

In other words, when we are ready to get rid of the dup_data.found object, as part of cleaning up that object, we’ll call the clear_dupes_list() function. And what does clear_dupes_list() do, praytell? Why, let me show you!

static void
clear_dupes_list (GSList *list)
{
	g_slist_foreach (list, (GFunc) g_free, NULL);
	g_slist_free (list);
}

Très interesante! You can see that we iterate across the dupes list, and call g_free on each of the strings we did a g_strdup() on before. So there wasn’t an inception leak after all. Tricky tricky.

A quick digression is warranted here. Contrary to popular belief, it is possible to write object oriented code in plain old C, with inheritance, method overrides, and even some level of “automatic” memory management. You don’t need to use C++ or python or whatever the web programmers are using these days. It’s just that in C, you build the OO features you want yourself, using primitives such as structs and function pointers and smart interface design.

Notice above we have specified that whenever the dup_data object is destroyed, it will free the memory that was stuffed into it. Yes, we had to specify the cleanup function manually, but we are thinking of our data structures in terms of objects.

In fact, the fancy features of many dynamic languages are implemented just this way, with the language keeping track of your objects for you, allocating them when you need, and freeing them when you’re done with them.

Because at the end of the day, it is decidedly not turtles all the way down to the CPU. When you touch memory in python or ruby or javascript, I guarantee that something is doing the bookkeeping on your memory, and since CPUs only understand assembly language, and C is really just pretty assembly, you now have a decent idea of how those fancy languages actually manage memory on your behalf.

And finally now that you’ve seen just how tedious and verbose it is to track all this memory, it should no longer be a surprise to you that most fancy languages are slower than C. Paperwork. It’s always paperwork.

And here we come to the upshot, which is, tracking down memory leaks can be slow and time consuming and trickier than first imagined (sorry for the early head fake). But with the judicious application of science and taking good field notes, it’s ultimately just like putting a delicious pork butt in the slow cooker for 24 hours. Worth the wait, worth the effort, and it has a delicious smoky sweet payoff.

Happy hunting!


kalua pork + homemade mayo and cabbage

Read more
Prakash

Hackable Lego Robot Runs Linux

The Lego Mindstorms EV3 is the first major revamp of the Lego Group’s programmable robot kit since 2006, and the first to run embedded Linux.

Unveiled at the CES Show in Las Vegas yesterday, with the first public demos starting today at the Kids Play Summit at the Venetian Hotel, the $350 robot is built around an upgraded “Intelligent Brick” computer. Lego swapped out the previous microcontroller for a 300MHz ARM9 processor capable of running new Linux-based firmware. As a result, the kids-oriented Mindstorms EV3 offers far more programmability than the NXT series, which was last updated in 2009, says Lego.

Read More.

Read more
Prakash

The team behind the Samba file, print, and authentication server suite for Microsoft Windows clients announced the release of Samba version 4 yesterday. This version includes significant new capabilities that offer an open source replacement for many enterprise infrastructure roles currently delivered exclusively by Microsoft software, including acting as a domain controller, providing SMB2.1 protocol support, delivering clustering, and offering a virtual filesystem (VFS) interface. It comes with Coverity security certification and easy upgrade scripts. The release notes include details of all changes.

Notably, this includes the first open source implementation of Microsoft’s Active Directory protocols; Samba previously only offered Windows NT domain controller functions. According to the press release, “Samba 4.0 provides everything needed to serve as an Active Directory Compatible Domain Controller for all versions of Microsoft Windows clients currently supported by Microsoft, including the recently released Windows 8.”

Samba 4 can join existing Active Directory domains and provides all the functions necessary to host a domain that can be joined by Microsoft Active Directory servers. It provides all the services needed by Microsoft Exchange, as well as opening up the possibility of fully open source alternatives to Exchange, such as the OpenChange project.
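
To give a flavour of the new domain controller role, provisioning a domain with the samba-tool utility that ships with Samba 4 looks roughly like this (a sketch; the realm and domain names are placeholders):

$ sudo samba-tool domain provision --realm=EXAMPLE.COM --domain=EXAMPLE --server-role=dc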

Read More.

Read more