Canonical Voices

Posts tagged with 'linux'

Prakash

From:  http://www.slideshare.net/blackducksoftware/the-2013-future-of-open-source-survey-results

Black Duck and North Bridge announce the results of the seventh annual Future of Open Source Survey. The 2013 survey represents the insights of more than 800 respondents – the largest in the survey’s history – from both non-vendor and vendor communities.

Read more
Prakash

Netflix, the popular video-streaming service that accounts for a third of all internet traffic during peak hours, isn’t just the largest single source of internet traffic. Netflix, without doubt, is also the largest pure cloud service.

Netflix, with more than a billion video delivery instances per month, is the largest cloud application in the world.

At the Linux Foundation’s Linux Collaboration Summit in San Francisco, California, Adrian Cockcroft, director of architecture for Netflix’s cloud systems team, after first thanking everyone “for building the internet so we can fill it with movies”, said that Netflix’s Linux, FreeBSD, and open-source based services are “cloud native”.

By this, Cockcroft meant that even with more than a billion video instances delivered every month over the internet, “there is no datacenter behind Netflix”. Instead, Netflix, which has been using Amazon Web Services since 2009 for some of its services, moved its entire technology infrastructure to AWS in November 2012.

Read More.

Read more
facundo

Linux Containers


For general-purpose virtual machines (so, leaving aside ScummVM and the like) I always used VirtualBox. Even though it now belongs to Oracle and I don't look kindly on that, it always worked quite well (as long as you didn't ask it for anything too crazy), and it's a good way to have a Windows running even if you spend the whole day in Linux (for example, to be able to issue invoices on the AFIP site, damn them).

Even when I worked at Ericsson, where they made me use Windows, I had a VMWare machine with Ubuntu installed (a Gutsy, or a Hardy, I think... so much water under the bridge!) which served me whenever I had to do serious network-level work, or for that matter anything fun.

But I had never found a nice way to have Linux virtual machines under Linux. And by "nice" I mean something that works well and is relatively easy to set up.

And this is where LXC comes in.

Linux container

Although LXC is not strictly speaking a "virtual machine" (it is more of a "virtual environment"), it still lets you run a Linux that doesn't mix with your own machine at the level of configuration, installed packages, or whatever you might break in the system.

What can you use it for? In my case I use it a lot at work, since my development machine runs Ubuntu Quantal, but the systems running on the servers are on Precise or Lucid (so I have a container for each one). I also plan to use them to test installations from scratch (for example, when building a .deb for the first time, trying to install it on a clean machine).

How do you create and use a container? After installing the necessary packages (sudo apt-get install lxc libvirt-bin), creating a container is reeeeally simple (from here on, replace "mi-lxc" everywhere with whatever name you want for your container):

    sudo lxc-create -t ubuntu -n mi-lxc -- -r precise -a i386 -b $USER

Let's break that down. -t is the template to use, and -n is the name we're giving it. After that we see a "--", which means the rest are options for the template itself. In this case: use the Precise release, the i386 architecture, and my own user.

The wonderful thing about this is that the container, inside, has my user, because the home directory is shared! And with it all the bash, vim, ssh, gnupg, etc. configurations, so "doing things" inside the lxc is immediate, there's nothing to set up (but, at the same time, we can "break" the home from inside the container, so watch out).

To start the container we can do

    sudo lxc-start -n mi-lxc

This leaves us at a prompt ready to log in, and here our own username and password are enough. Once inside, we use the container as if it were a brand-new machine.

All very nice, but I still like to apply a few tweaks that make using it even more direct and simple. These tweaks, at the system level, are basically so that we can get into the container more easily, and use graphical applications from inside it.

To get in more easily, we need Avahi configured. Beyond installing it (sudo apt-get update; sudo apt-get install avahi-daemon), there is one detail to tweak: inside the lxc, open the file /etc/avahi/avahi-daemon.conf and raise rlimit-nproc considerably (for example, from 3 to 300).
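For instance, a quick way to make that change from inside the container (a sketch, assuming the stock value of 3 that my install shipped with; adjust the pattern if yours differs):

    sudo sed -i 's/^rlimit-nproc=3$/rlimit-nproc=300/' /etc/avahi/avahi-daemon.conf
    sudo service avahi-daemon restart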

With this we are ready to get into the container easily. We can try it from another terminal; just do:

    ssh mi-lxc.local

Nice, isn't it? But it's also nice to be able to forward X events, so we can launch graphical applications. For that we have to touch the following on the host (that is, not in the container but on the "real" machine): edit /var/lib/lxc/mi-lxc/fstab and add the line:

    /tmp/.X11-unix tmp/.X11-unix none bind

In the container, we have to make sure /tmp/.X11-unix exists, and restart the container after these changes.
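For example (a sketch, reusing the "mi-lxc" name), inside the container:

    sudo mkdir -p /tmp/.X11-unix

and then, from the host:

    sudo lxc-stop -n mi-lxc
    sudo lxc-start -n mi-lxc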

We also need to set DISPLAY. I mixed this into my .bashrc, adding something so that when I come in via ssh the prompt colour changes (even giving different colours to different containers). What I'm using is:

    if [ `hostname` = "mi-lxc" ]; then
        export PS1='\[\e[1;34m\]\u@\h:\w${text}$\[\e[m\] ';
        export DISPLAY=:0
    fi
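To test the forwarding end to end (assuming some X client, say xeyes, is installed in the container):

    ssh mi-lxc.local
    xeyes

If a pair of eyes shows up on your host's screen, everything is wired up.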

To wrap up, then, here are the three commands I use most in day-to-day work with the containers, beyond the installation itself: starting the container (note the -d, which starts it as a daemon, since we connect via ssh anyway); connecting (note the -A, which forwards the authentication agent connection); and finally stopping the container:

    sudo lxc-start -n mi-lxc -d
    ssh -A mi-lxc.local
    sudo lxc-stop -n mi-lxc

Enjoy.

Read more
Prakash

With Windows 8 pushing a “touch-first” desktop interface—Microsoft’s words, not ours—and with Valve’s Steam on Linux beginning to bring much-needed games and popular attention to the oft-overlooked operating system, there’s never been a better time to take Linux out for a test drive.

Dipping your toes into the penguin-filled waters of the most popular open-source ecosystem is easy, and you don’t have to commit to switching outright to Linux. You can install it alongside your current Windows system, or even try it without installing anything at all.

Ubuntu is the most popular Linux distribution for desktop and laptop Linux users, so we’ll focus on Ubuntu throughout this guide. For the most part, Ubuntu just plain works. It sports a subtle interface that stays out of your way. It enjoys strong support from software developers (including Valve, since Steam on Linux only officially supports Ubuntu). And you can find tons of information online if you run into problems.

Read more.

Read more
brendandonegan

The inaugural online UDS (or vUDS as it’s becoming known) is underway. This brings with it a number of new challenges in terms of running a good session. Having sat in on a few sessions yesterday and been the session lead for several sessions at physical UDS’s going back nearly two years now, I thought I’d jot down a few tips on how to run a good session.

Prepare

Regardless of whether the session is physical or virtual, it’s always important to prepare. The purpose of a UDS session is to get feedback on some proposed plan of work (even if it is extremely nebulous at the time of the session). Past experience shows that sessions are always more productive when most of the plan is already fleshed out prior to the session and the session basically functions as a review/comments meeting. This depends on your particular case, though, since the thing you are planning may not be possible to flesh out in much detail without feedback. I personally find this is rarely the case.

Be punctual

UDS runs on a tight schedule, at least in the physical version, and I don’t see any good reason why this should change for vUDS. Punctuality is therefore important not just as good manners but from a practical point of view. You need time to compose yourself, find notes and make sure everything is set up. For a physical UDS this would have been checking that microphones are working and projectors are projecting. For a vUDS, in my brief experience, this means making sure everyone who needs to be is invited into the hangout, that the etherpad is up, and that the video feed is working on the session page.

Delegate

As the session lead it is your responsibility to run a good session, however it will be impossible for you to perform all the tasks required to achieve this on your own. Someone needs to be making notes on the Etherpad and someone needs to be monitoring IRC. You should also be looking out for questions yourself but since you may be concentrating on conveying information and answering other questions, you do need help with this.

Avoid going off track

Time is limited in a UDS session and you may have a lot of points to get through. Be wary of getting distracted from the point of the session and discussing things that may not be entirely relevant. Don’t be afraid to cut people short – if the question is important to them then you can follow up offline later.

Manage threads of communication

This one is quite vUDS specific, but especially now that audiovisual participation is limited, it is important that all of the conversation take place in one spot, particularly for people who are catching up with the video streams later on. Don’t allow a parallel conversation to develop on IRC if possible. If someone asks a question in IRC, repeat it to the video audience and answer it in the hangout, not on IRC. If someone is talking a lot in IRC and even answering questions, do invite them into the hangout so that what they’re saying can be recorded. It may not be possible to avoid this entirely, but as session lead you need to do your best to mitigate it.

Follow up

Not so much a tip for running a good session, but for getting the best from a good session. Remember to read the notes from the session and rewatch the video so that you can use the feedback to adapt your plan and find places to follow up.

That’s all there is to say. I really hope this first virtual UDS goes very well and that sessions are productive for everyone involved.


Read more
alex

Appy polly loggies for the super long delay between episodes, but I finally carved out some time for our exciting dénouement in the memory leak detection series. Past episodes included detection and analysis.

As a gentle reminder, during analysis, we saw the following block of code:

 874                 GSList *dupes = NULL;
 875                 const char *path;
 876 
 877                 dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupes");
 878                 path = nm_object_get_path (NM_OBJECT (ap));
 879                 dupes = g_slist_prepend (dupes, g_strdup (path));
 880 #endif
 881                 return NULL;

And we concluded with:

Is it safe to just return NULL without doing anything to dupes? maybe that’s our leak?
We can definitively say that it is not safe to return NULL without doing anything to dupes. We definitely allocated memory, stuck it into dupes, and then threw dupes away. This is our smoking gun.

But there’s a twist! Eagle-eyed reader Dave Jackson (a former colleague of mine from HP, natch) spotted a second leak! It turns out that line 879 was exceptionally leaky during its inception. As Dave points out, the call to g_slist_prepend() passes g_strdup() as an argument. And as the documentation says:

Duplicates a string. If str is NULL it returns NULL. The returned string should be freed with g_free() when no longer needed.

In memory-managed languages like python, the above idiom of passing a function as an argument to another function is quite common. However, one needs to be more careful about doing so in C and C++, taking great care to observe if your function-as-argument allocates memory and returns it. There is no mechanism in the language itself to automatically free memory in the above situation, and thus the call to g_strdup() seems like it also leaks memory. Yowza!

So, what to do about it?

The basic goal here is that we don’t want to throw dupes away. We need to actually do something with it. Here again are the most pertinent lines.

 877                 dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupes");
 878                 path = nm_object_get_path (NM_OBJECT (ap));
 879                 dupes = g_slist_prepend (dupes, g_strdup (path));
 881                 return NULL;

Let’s break these lines down.

  1. On line 877, we retrieve the dupes list from the dup_data.found object
  2. Line 878 gets a path to the duplicate wifi access point
  3. Finally, line 879 adds the duplicate access point to the old dupes list
  4. Line 881 throws it all away!

To me, the obvious thing to do is to change the code between lines 879 and 881, so that after we modify the duplicates list, we save it back into the dup_data object. That way, the next time around, the list stored inside of dup_data will have our updated list. Makes sense, right?

As long as you agree with me conceptually (and I hope you do), I’m going to take a quick shortcut and show you the end result of how to store the new list back into the dup_data object. The reason for the shortcut is that we are now deep in the details of how to program using the glib API, and like many powerful APIs, the key is to know which functions are necessary to accomplish your goal. Since this is a memory leak tutorial and not a glib API tutorial, just trust me that the patch hunk will properly store the dupes list back into the dup_data object. And if it’s confusing, as always, read the documentation for g_object_steal_data and g_object_set_data_full.

@@ -706,14 +706,15 @@
 +		GSList *dupes = NULL;
 +		const char *path;
 +
-+		dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupes");
++		dupes = g_object_steal_data (G_OBJECT (dup_data.found), "dupes");
 +		path = nm_object_get_path (NM_OBJECT (ap));
 +		dupes = g_slist_prepend (dupes, g_strdup (path));
++		g_object_set_data_full (G_OBJECT (dup_data.found), "dupes", (gpointer) dupes, (GDestroyNotify) clear_dupes_list);
 +#endif
  		return NULL;
  	}

If the above patch format looks funny to you, it’s because we are changing a patch.

-+		dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupes");
++		dupes = g_object_steal_data (G_OBJECT (dup_data.found), "dupes");

This means the old patch had the line calling g_object_get_data() and the refreshed patch now calls g_object_steal_data() instead. Likewise…

++		g_object_set_data_full (G_OBJECT (dup_data.found), "dupes", (gpointer) dupes, (GDestroyNotify) clear_dupes_list);

The above call to g_object_set_data_full is a brand new line in the new and improved patch.

Totally clear, right? Don’t worry, the more sitting and contemplating of the above you do, the fuller and more awesomer your neckbeard grows. Don’t forget to check it every once in a while for small woodland creatures who may have taken up residence there.

And thus concludes our series on how to detect, analyze, and fix memory leaks. All good? Good.

waiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiit!!!!!!!11!

I can hear the observant readers out there already frantically scratching their necks and getting ready to point out the mistake I made. After all, our newly refreshed patch still contains this line:

 +		dupes = g_slist_prepend (dupes, g_strdup (path));

And as we determined earlier, that’s our incepted memory leak, right? RIGHT!?‽

Not so fast. Take a look at the new line in our updated patch:

++		g_object_set_data_full (G_OBJECT (dup_data.found), "dupes", (gpointer) dupes, (GDestroyNotify) clear_dupes_list);

See that? The last argument to g_object_set_data_full() looks quite interesting indeed. It is in fact, a cleanup function named clear_dupes_list(), which according to the documentation, will be called

when the association is destroyed, either by setting it to a different value or when the object is destroyed.

In other words, when we are ready to get rid of the dup_data.found object, as part of cleaning up that object, we’ll call the clear_dupes_list() function. And what does clear_dupes_list() do, praytell? Why, let me show you!

static void
clear_dupes_list (GSList *list)
{
	g_slist_foreach (list, (GFunc) g_free, NULL);
	g_slist_free (list);
}

Très interesante! You can see that we iterate across the dupes list, and call g_free on each of the strings we did a g_strdup() on before. So there wasn’t an inception leak after all. Tricky tricky.

A quick digression is warranted here. Contrary to popular belief, it is possible to write object oriented code in plain old C, with inheritance, method overrides, and even some level of “automatic” memory management. You don’t need to use C++ or python or whatever the web programmers are using these days. It’s just that in C, you build the OO features you want yourself, using primitives such as structs and function pointers and smart interface design.

Notice above we have specified that whenever the dup_data object is destroyed, it will free the memory that was stuffed into it. Yes, we had to specify the cleanup function manually, but we are thinking of our data structures in terms of objects.

In fact, the fancy features of many dynamic languages are implemented just this way, with the language keeping track of your objects for you, allocating them when you need, and freeing them when you’re done with them.

Because at the end of the day, it is decidedly not turtles all the way down to the CPU. When you touch memory in python or ruby or javascript, I guarantee that something is doing the bookkeeping on your memory, and since CPUs only understand assembly language, and C is really just pretty assembly, you now have a decent idea of how those fancy languages actually manage memory on your behalf.

And finally now that you’ve seen just how tedious and verbose it is to track all this memory, it should no longer be a surprise to you that most fancy languages are slower than C. Paperwork. It’s always paperwork.

And here we come to the upshot, which is, tracking down memory leaks can be slow and time consuming and trickier than first imagined (sorry for the early head fake). But with the judicious application of science and taking good field notes, it’s ultimately just like putting a delicious pork butt in the slow cooker for 24 hours. Worth the wait, worth the effort, and it has a delicious smoky sweet payoff.

Happy hunting!


kalua pork + homemade mayo and cabbage

Read more
Prakash

Hackable Lego Robot Runs Linux

The Lego Mindstorms EV3 is the first major revamp of the Lego Group’s programmable robot kit since 2006, and the first to run embedded Linux.

Unveiled at CES in Las Vegas yesterday, with the first public demos starting today at the Kids Play Summit at the Venetian Hotel, the $350 robot is built around an upgraded “Intelligent Brick” computer. Lego swapped out the previous microcontroller for a 300MHz ARM9 processor capable of running new Linux-based firmware. As a result, the kids-oriented Mindstorms EV3 offers far more programmability than the NXT series, which was last updated in 2009, says Lego.

Read More.

Read more
Prakash

The team behind the Samba file, print, and authentication server suite for Microsoft Windows clients announced the release of Samba version 4 yesterday. This version includes significant new capabilities that offer an open source replacement for many enterprise infrastructure roles currently delivered exclusively by Microsoft software, including acting as a domain controller, providing SMB2.1 protocol support, delivering clustering, and offering a virtual filesystem (VFS) interface. It comes with Coverity security certification and easy upgrade scripts. The release notes include details of all changes.

Notably, this includes the first open source implementation of Microsoft’s Active Directory protocols; Samba previously only offered Windows NT domain controller functions. According to the press release, “Samba 4.0 provides everything needed to serve as an Active Directory Compatible Domain Controller for all versions of Microsoft Windows clients currently supported by Microsoft, including the recently released Windows 8.”

Samba 4 can join existing Active Directory domains and provides all the functionality necessary to host a domain that can be joined by Microsoft Active Directory servers. It provides all the services needed by Microsoft Exchange, as well as opening up the possibility of fully open source alternatives to Exchange such as the OpenChange project.
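To give a sense of what that means in practice, standing up a test domain controller boils down to a single provisioning command (a sketch, not from the article; the realm and domain values here are placeholders):

sudo samba-tool domain provision --realm=EXAMPLE.COM --domain=EXAMPLE \
    --server-role=dc --dns-backend=SAMBA_INTERNAL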

Read More.

Read more
Prakash

While ARM is gaining a lot of momentum, the challenge with ARM until now was that architectures from different vendors are very different from one another, and each required a separate kernel and entire OS stack.

With Linux Kernel 3.7, this has changed for the better.

ARM’s problem was that, unlike the x86 architecture, where one Linux kernel could run on almost any PC or server, almost every ARM system required its own customized Linux kernel. Now with 3.7, ARM architectures can use one single vanilla Linux kernel while keeping their special device sauce in device trees.

The end result is that ARM developers will be able to boot and run Linux on their devices and then worry about getting all the extras to work. This will save them, and the Linux kernel developers, a great deal of time and trouble.

Just as good for those ARM architects and programmers who are working on high-end, 64-bit ARM systems, Linux now supports 64-bit ARM processors. 64-bit ARM CPUs won’t ship in commercial quantities until 2013. When they do arrive, though, programmers eager to try 64-bit ARM processors on servers will have Linux ready for them.

Read More.

Read more
Prakash

From PC World.

Ubuntu is a widely popular open-source Linux distribution with eight years of maturity under its belt, and more than 20 million users. Linux accounts for roughly 5 percent of desktop OSes, and at least one survey suggests that about half of that share is Ubuntu. (Windows, meanwhile, accounts for about 84 percent.)

The timing of this latest Ubuntu release couldn’t be better for Windows users faced with the paradigm-busting Windows 8 and the big decision of whether to take the plunge.

Initial uptake of Windows 8 has been unenthusiastic, according to reports, and a full 80 percent of businesses will never adopt it, Gartner predicts. As a result, Microsoft’s big gamble may be desktop Linux’s big opportunity.

So, now that Canonical has thrown down the gauntlet, let’s take a closer look at Ubuntu 12.10 to see how it compares with Windows 8 from a business user’s perspective.

 

                                 Windows 8 Pro (x86)   Ubuntu 12.10
License fee                      $39 to $69 upgrade    Free
CPU architectures supported      x86, x86-64           x86, x86-64, ARM, PPC
Minimum RAM                      1GB, 2GB              512MB
Minimum hard-disk space          20GB                  5GB
Concurrent multiuser support     No                    Yes
Workspaces                       One                   Two or more
Virtualization                   Hyper-V               KVM
License                          Not applicable        GPL Open Source: Main, Non-GPL: Restricted
Productivity software included   None                  LibreOffice
Graphics tools included          No                    Yes

Read More.

Read more
alex

In our last exciting episode, we learned how to capture a valgrind log. Today we’re going to take the next step and learn how to actually use it to debug memory leaks.

There are a few prerequisites:

  1. know C. If you don’t know it, go read The C Programming Language, which is often referred to as K&R C. Be sure to understand the sections on pointers, and after you do, come back to my blog. See you in 2 weeks!
  2. a nice supply of your favorite beverages and snacks. I prefer coffee and bacon, myself. Get ready because you’re about to read an epic 2276 word blog entry.

That’s it. Ok, ready? Let’s go!

navigate the valgrind log
Open the valgrind log that you collected. If you don’t have one, you can grab one that I’ve already collected. Take a deep breath. It looks scary but it’s not so bad. I like to skip straight to the good part near the bottom. Search the file for “LEAK SUMMARY”. You’ll see something like:

==13124== LEAK SUMMARY:
==13124==    definitely lost: 916,130 bytes in 37,528 blocks
==13124==    indirectly lost: 531,034 bytes in 12,735 blocks
==13124==      possibly lost: 82,297 bytes in 891 blocks
==13124==    still reachable: 2,578,733 bytes in 42,856 blocks
==13124==         suppressed: 0 bytes in 0 blocks
==13124== Reachable blocks (those to which a pointer was found) are not shown.
==13124== To see them, rerun with: --leak-check=full --show-reachable=yes
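If you prefer the shell to a pager, something like this (the log filename here is assumed) jumps straight to that section:

$ grep -n -A 6 'LEAK SUMMARY' valgrind.log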

You can see that valgrind thinks we’ve definitely leaked some memory. So let’s go figure out what leaked.

Valgrind lists all the leaks, in order from smallest to largest. The leaks are also categorized as “possibly” or “definitely”. We’ll want to focus on “definitely” for now. Right above the summary, you’ll see the worst, definite leak:

==13124== 317,347 (77,312 direct, 240,035 indirect) bytes in 4,832 blocks are definitely lost in loss record 10,353 of 10,353
==13124==    at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==13124==    by 0x74E3A78: g_malloc (gmem.c:159)
==13124==    by 0x74F6CA2: g_slice_alloc (gslice.c:1003)
==13124==    by 0x74F7ABD: g_slist_prepend (gslist.c:265)
==13124==    by 0x4275A4: get_menu_item_for_ap (applet-device-wifi.c:879)
==13124==    by 0x427ACE: wireless_add_menu_item (applet-device-wifi.c:1138)
==13124==    by 0x41815B: nma_menu_show_cb (applet.c:1643)
==13124==    by 0x4189EC: applet_update_indicator_menu (applet.c:2218)
==13124==    by 0x74DDD52: g_main_context_dispatch (gmain.c:2539)
==13124==    by 0x74DE09F: g_main_context_iterate.isra.23 (gmain.c:3146)
==13124==    by 0x74DE499: g_main_loop_run (gmain.c:3340)
==13124==    by 0x414266: main (main.c:106)

Wow, we lost 300K of memory in just a few hours. Now imagine if you don’t reboot your laptop for a week. Yeah, that’s not so good. Time for a coffee and bacon break, the next part is about to get fun.

read the stack trace
What you saw above is a stack trace, and it’s printed chronologically “backwards”. In this example, malloc() was called by g_malloc(), which was called by g_slice_alloc(), which in turn was called by g_slist_prepend(), which itself was called by get_menu_item_for_ap() and so forth. The first function ever called was main(), which should hopefully make sense.

At this point, we need to use a little bit of extra knowledge to understand what is happening. The first function, main(), is in our program, nm-applet. That’s fairly easy to understand. However, the next few functions that begin with g_main_ don’t actually live inside nm-applet. They are part of glib, which is a library that nm-applet depends on. I happened to know this off the top of my head, but if you’re ever unsure, you can just google for the function name. After searching, we can see that those functions are in glib, and while there is some magic that is happening, we can blissfully ignore it because we see that we soon jump back into nm-applet code, starting with applet_update_indicator_menu().

a quick side note
Many Linux programs will have a stack trace similar to the above. The program starts off in its own main(), but will call various other libraries on your system, such as glib, and then jump back to itself. What’s going on? Well, glib provides a feature known as a “main loop” which is used by the program to look for inputs and events, and then react to them. It’s a common programming paradigm, and rather than have every application in the world write their own main loop, it’s easier if everyone just uses the one provided by glib.

The other observation is to note how the function names appear prominently in the stack trace. Pundits wryly say that naming things is one of the hardest things in computer science, and I completely agree. So take care when naming your functions, because people other than you will definitely see them and need to understand them!

Alright, let’s get back to the stack trace. We can see a few functions that look like they belong to nm-applet, based on their names and their associated filenames. For example, the function wireless_add_menu_item() is in the file applet-device-wifi.c on line 1138. Now you see why we wanted symbols from the last episode. Without the debug symbols, all we would have seen would have been a bunch of useless ??? and we’d be gnashing our teeth and wishing for more bacon right now.

Finally, we see a few more g_* functions, which means we’re back in the memory allocation functions provided by glib. It’s important to understand at this point that g_malloc() is not the memory leak. g_malloc() is simply doing whatever nm-applet asks it to do, which is to allocate memory. The leak is highly likely to be in nm-applet losing a reference to the pointer returned by g_malloc().

What does it mean?
Now we’re ready to start the real debugging. We know approximately where we are leaking memory inside nm-applet: get_menu_item_for_ap() which is the last function before calling the g_* memory functions. Time to top off on coffee because we’re about to get our hands dirty.

reading the source
The whole point of open source is being able to read the source. Are you as excited as I am? I know you are!

First, let’s get the source to nm-applet. Assuming you are using Ubuntu 12.04, you’d simply say:

$ cd Projects
$ mkdir network-manager-gnome
$ cd network-manager-gnome
$ apt-get source network-manager-gnome
$ cd network-manager-applet-0.9.4.1

Woo hoo! That wasn’t hard, right?

side note #2
Contrary to popular belief, reading code is harder than writing code. When you write code, you are transmitting the thoughts of your messy brain into an editor, and as long as it kinda works, you’re happy. When you read code, now you’re faced with the problem of trying to understand exactly what the previous messy brain wrote down and making sense of it. Depending on how messy that previous brain was, you may have real trouble understanding the code. This is where pencil and paper and plenty of coffee come into play, where you literally trace through what the program is doing to try and understand it.

Luckily there are at least a few tools to help you do this. My favorite tools are cscope and ctags, which help me to rapidly understand the skeleton of a program and navigate around its complex structure.

Assuming you are in the network-manager-applet-0.9.4.1 source tree:

$ sudo apt-get install cscope ctags
$ cscope -bqR
$ ctags -R
$ cscope -dp4


You are now presented with a menu. Use control-n and control-p to navigate input fields at the bottom. Try navigating to “Find this C symbol:” and then type in get_menu_item_for_ap, and press enter. The search results are returned, and you can press '0' or '1' to jump to either of the locations where the function is referenced. You can also press the space bar to see the final search result. Play around with some of the other search types and see what happens. I’ll talk about ctags in a bit.

Alrighty, let’s go looking for our suspicious nm-applet function. Start up cscope as described above. Navigate to “Find this global definition:” and search for get_menu_item_for_ap. cscope should just directly put you in the right spot.

Based on our stack trace, it looks like we’re doing something suspicious on line 879, so let’s go see what it means.

 869         if (dup_data.found) {
 870 #ifndef ENABLE_INDICATOR
 871                 nm_network_menu_item_best_strength (dup_data.found, nm_acce
 872                 nm_network_menu_item_add_dupe (dup_data.found, ap);
 873 #else
 874                 GSList *dupes = NULL;
 875                 const char *path;
 876 
 877                 dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupe
 878                 path = nm_object_get_path (NM_OBJECT (ap));
 879                 dupes = g_slist_prepend (dupes, g_strdup (path));
 880 #endif
 881                 return NULL;
 882         }

Cool, we can now see where the source code is matching up with the valgrind log.

Let’s start doing some analysis. The first thing to note are the #ifdef blocks on lines 870, 873, and 880. You should know that ENABLE_INDICATOR is defined, meaning we do not execute the code in lines 871 and 872. Instead, we do lines 874 to 879, and then we do 881. Why do we do 881 if it is after the #endif? Because we fall off the end of the #ifdef block and then do whatever comes next, namely returning NULL.

Don’t worry, I don’t know what’s going on yet, either. Time for a refill!

Back? Great. Alright, valgrind says that we’re doing something funky with g_slist_prepend().

==13124==    by 0x74F7ABD: g_slist_prepend (gslist.c:265)

And our relevant code is:

 874                 GSList *dupes = NULL;
 875                 const char *path;
 876 
 877                 dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupe
 878                 path = nm_object_get_path (NM_OBJECT (ap));
 879                 dupes = g_slist_prepend (dupes, g_strdup (path));
 880 #endif
 881                 return NULL;

We can see that we declare the pointer *dupes on line 874, but we don’t do anything with it. Then, we assign something to it on line 877. Then, we assign something to it again on line 879. Finally, we end up not doing anything with *dupes at all, and just return NULL on line 881.

This definitely seems weird and worth a second glance. At this point, I’m asking myself the following questions:

  • did g_object_get_data() allocate memory?
  • did g_slist_prepend() allocate memory?
  • are we overwriting *dupes on line 879? that might be a leak.
  • is it safe to just return NULL without doing anything to dupes? maybe that’s our leak?

Let’s take them in order.

did g_object_get_data() allocate memory?
g_object_get_data has online documentation, so that’s our first stop. The documentation says:

Returns :
the data if found, or NULL if no such data exists. [transfer none]

Since I am not 100% familiar with glib terminology, I guess [transfer none] means that g_object_get_data() doesn’t actually allocate memory on its own. But let’s be 100% sure. Time to grab the glib source and find out for ourselves.

$ apt-get source libglib2.0-0
$ cd glib2.0-2.32.1
$ cscope -bqR
$ ctags -R
$ cscope -dp4
# then search for the global definition of g_object_get_data

Pretty simple function.

3208 gpointer
3209 g_object_get_data (GObject     *object,
3210                    const gchar *key)
3211 {
3212   g_return_val_if_fail (G_IS_OBJECT (object), NULL);
3213   g_return_val_if_fail (key != NULL, NULL);
3214 
3215   return g_datalist_get_data (&object->qdata, key);
3216 }

Except I have no idea what g_datalist_get_data() does. Maybe that guy is allocating memory. Now I’ll use ctags to make my life easier. In vim, put your cursor over the “g” in “g_datalist_get_data” and then press control-]. This will “step into” the function. Magic!

 844 gpointer
 845 g_datalist_get_data (GData       **datalist,
 846                      const gchar *key)
 847 {
 848   gpointer res = NULL; 
 ... 
 856   d = G_DATALIST_GET_POINTER (datalist);
 ...
 859       data = d->data;
 860       data_end = data + d->len;
 861       while (data < data_end)
 862         {
 863           if (strcmp (g_quark_to_string (data->key), key) == 0)
 864             {
 865               res = data->data;
 866               break;
 867             }
 868           data++;
 869         }
 ... 
 874   return res;
 875 }

This is a pretty simple loop, walking through an existing list of pointers which have already been allocated somewhere else, starting on line 861. We do our comparison on line 863, and if we get a match, we assign whatever we found to res on line 865. Note that all we are doing here is a simple assignment. We are not allocating any memory!

Finally, we return our pointer on line 874. Press control-t in vim to pop back to your last location.

Now we know for sure that g_object_get_data() and g_datalist_get_data() do not allocate any memory at all, so there can be no possibility of a leak here. Let’s try the next function.

did g_slist_prepend() allocate memory?
First, read the documentation, which says:

The return value is the new start of the list, which may have changed, so make sure you store the new value.

This probably means it allocates memory for us, but let’s double-check just to be sure. Back to cscope!

 259 GSList*
 260 g_slist_prepend (GSList   *list,
 261                  gpointer  data)
 262 {
 263   GSList *new_list;
 264 
 265   new_list = _g_slist_alloc ();
 266   new_list->data = data;
 267   new_list->next = list;
 268 
 269   return new_list;
 270 }

Ah ha! Look at line 265. We are 100% definitely allocating memory, and returning it on line 269. Things are looking up! Let’s keep going with our questions.

are we overwriting *dupes on line 879? that might be a leak.
Remember:

 877                 dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupe
 878                 path = nm_object_get_path (NM_OBJECT (ap));
 879                 dupes = g_slist_prepend (dupes, g_strdup (path));

We’ve already proven to ourselves that line 877 doesn’t allocate any memory. It just sets dupes to some value. However, on line 879, we do allocate memory. It is equivalent to this code:

  int *dupes;
  dupes = 0x12345678;
  dupes = malloc(128);

So simply setting dupes to the return value of g_object_get_data() and later overwriting it with the return value of malloc() does not inherently cause a leak.

By way of counter-example, the below code is a memory leak:

  int *dupes;
  dupes = malloc(64);
  dupes = malloc(128);    /* leak! */

The above essentially illustrates the scenario I was worried about. I was worried that g_object_get_data() allocated memory, and then g_slist_prepend() also allocated memory which would have been a leak because the first value of dupes got scribbled over by the second value. My worry turned out to be incorrect, but that is the type of detective work you have to think about.

As a clearer example of why the above is a leak, consider the next snippet:

  int *dupes1, *dupes2;
  dupes1 = malloc(64);     /* ok */
  dupes2 = malloc(128);    /* ok */
  dupes1 = dupes2;         /* leak! */

First we allocate dupes1. Then allocate dupes2. Finally, we set dupes1 = dupes2, and now we have a leak. No one knows what the old value of dupes1 was, because we scribbled over it, and it is gone forever.

is it safe to just return NULL without doing anything to dupes? maybe that’s our leak?
We can definitively say that it is not safe to return NULL without doing anything to dupes. We definitely allocated memory, stuck it into dupes, and then threw dupes away. This is our smoking gun.

Next time, we’ll see how to actually fix the problem.

Read more
Prakash

Over €10 million (approximately £8 million or $12.8 million) has been saved by the city of Munich, thanks to its development and use of the city’s own Linux platform. The calculation of savings follows a question by the city council’s independent Free Voters (Freie Wähler) group.

Read More.

Urge your city to save tax money; it’s your hard-earned money.

Read more
Prakash

After installing Ubuntu 12.10, the first thing I wanted to do was to disable reverse scrolling – you scroll down and it scrolls up! This is also called natural scrolling by Apple. I don’t know what is natural about it :) but it may be natural for Apple users.

Open a terminal and edit the .Xmodmap file in your home directory with any editor, for example:

 gedit .Xmodmap

Here you would see this:

pointer = 1 2 3 5 4  6 7 8 9 10 11 12

You would note that in the sequence of numbers, 5 and 4 are interchanged. Change it back to the normal sequence:

pointer = 1 2 3 4 5 6 7 8 9 10 11 12

Now you are done; logging out and back in should do the job.
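If you don’t want to log out, you can also apply the file immediately from your home directory (this works in most setups):

 xmodmap .Xmodmap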

If you have Ubuntu Tweak installed, just go to Tweaks → Miscellaneous and you will see an option to toggle Natural Scrolling on/off.

Read more
brendandonegan

I find that sometimes the Network Manager applet in Ubuntu can be a little temperamental (apologies to the maintainer, cyphermox, if he’s reading this – but such is the nature of software). Sometimes it won’t show available routers and if that’s the case then I’ve established a little workaround that I’m telling you about mainly because it involves a script I wrote that lives in a somewhat obscure place in Ubuntu.

Step one in the workaround is needed if you don’t know which networks are available in advance. If you’re sitting in your home then you’ll probably not need this step since most people know their router SSID. If you don’t then you can scan using:

nmcli dev wifi list

This is really reliable and always works if your WiFi hardware is working.

The second step is to use the SSID to create the connection using the script I wrote:

sudo /usr/share/checkbox/scripts/create_connection $SSID --security=wpa --key=$WPA_KEY

If the router doesn’t use any security (which nmcli dev wifi list will tell you) then you don’t need --security or --key. If the router doesn’t use WPA2 (maybe it uses WEP), then you’re out of luck – and deservedly so. Change the router’s security settings if you can!
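As an aside, depending on your NetworkManager version, nmcli itself may be able to do the whole thing in one step (a sketch, untested here, with the same SSID and key variables as above):

nmcli dev wifi connect $SSID password $WPA_KEY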


Read more

The Ubuntu Developer Summit was held in Copenhagen last week, to discuss plans for the next six-month cycle of Ubuntu. This was the most productive UDS that I've been to — maybe it was the shorter four-day schedule, or the overlap with Linaro Connect, but it sure felt like a whirlwind of activity.

I thought I'd share some details about some of the sessions that cover areas I'm working on at the moment. In no particular order:

Improving cross-compilation

Blueprint: foundations-r-improve-cross-compilation

This plan is part of a multi-cycle effort to improve cross-compilation support in Ubuntu. Progress is generally going well — the consensus from the session was that the components are fairly close to complete, but we still need some work to pull those parts together into something usable.

So, this cycle we'll be working on getting that done. While we have a few bugfixes and infrastructure updates to do, one significant part of this cycle’s work will be to document the “best-practices” for cross builds in Ubuntu, on wiki.ubuntu.com. This process will be heavily based on existing pages on the Linaro wiki. Because most of the support for cross-building is already done, the actual process for cross-building should be fairly straightforward, but needs to be defined somewhere.

I'll post an update when we have a working draft on the Ubuntu wiki, stay tuned for details.

Rapid archive bringup for new hardware

Blueprint: foundations-r-rapid-archive-bringup

I'd really like for there to be a way to get an Ubuntu archive built “from scratch”, to enable custom toolchain/libc/other system components to be built and tested. This is typically useful when bringing up new hardware, or testing rebuilds with new compiler settings. Because we may be dealing with new hardware, doing this bootstrap through cross-compilation is something we'd like too.

Eventually, it would be great to have something as straightforward as the OpenEmbedded or OpenWRT build process to construct a repository with a core set of Ubuntu packages (say, minbase), for previously-unsupported hardware.

The archive bootstrap process isn't done often, and can require a large amount of manual intervention. At present, there's only a couple of folks who know how to get it working. The plan here is to document the bootstrap process in this cycle, so that others can replicate the process, and possibly improve the bits that are a little too janky for general consumption.

ARM64 / ARMv8 / aarch64 support

Blueprint: foundations-r-aarch64

This session is an update for progress on the support for ARMv8 processors in Ubuntu. While no general-purpose hardware exists at the moment, we want to have all the pieces ready for when we start seeing initial implementations. Because we don't have hardware yet, this work has to be done in a cross-build environment; another reason to keep on with the foundations-r-improve-cross-compilation plan!

So far, toolchain progress is going well, with initial cross toolchains available for quantal.

Although kernel support isn’t urgent at the moment, we’ll be building an initial kernel-headers package for aarch64. There's also a plan to get a page listing the aarch64-cross build status of core packages, so we'll know what is blocked for 64-bit ARM enablement.

We’ve also got a bunch of workitems for volunteers to fix cross-build issues as they arise. If you're interested, add a workitem in the blueprint, and keep an eye on it for updates.

Secure boot support in Ubuntu

Blueprint: foundations-r-secure-boot

This session covered progress of secure boot support as at the 12.10 Quantal Quetzal release, items that are planned for 13.04, and backports for 12.04.2.

As for 12.10, we’ve got the significant components of secure boot support into the release — the signed boot chain. The one part that hasn't hit 12.10 yet is the certificate management & update infrastructure, but that is planned to reach 12.10 by way of a not-too-distant-future update.

The foundations team also mentioned that they were starting the 12.04.2 backport right after UDS, which will bring secure boot support to our current “Long Term Support” (LTS) release. Since the LTS release is often preferred in Ubuntu preinstall situations, this may be used as a base for hardware enablement on secure boot machines. Combined with the certificate management tools (described at sbkeysync & maintaining uefi key databases), and the requirement for “custom mode” in general-purpose hardware, this will allow for user-defined trust configuration in an LTS release.

As for 13.04, we're planning to update the shim package to a more recent version, which will have Matthew Garrett's work on the Machine Owner Key plan from SuSE.

We're also planning to figure out support for signed kernel modules, for users who wish to verify all kernel-level code. Of course, this will mean some changes to things like DKMS, which run custom module builds outside of the normal Ubuntu packages.

Netboot with secure boot is still in progress, and will require some fixes to GRUB2.

And finally, the sbsigntools codebase could do with some new testcases, particularly for the PE/COFF parsing code. If you're interested in contributing, please contact me at jeremy.kerr@canonical.com.

Read more
Prakash

Ubuntu 12.10 is here. With this release there is no CD image, only a DVD image, which is 800 MB in size. Torrent is the preferred download method for me.

Ubuntu 12.10                     Torrent Links   Direct Downloads
Ubuntu Desktop 64-Bit Edition    Torrent         Main Server
Ubuntu Desktop 32-Bit Edition    Torrent         Main Server
Ubuntu Server Edition 64-Bit     Torrent         Main Server
Ubuntu Server Edition 32-Bit     Torrent         Main Server

Have fun :)

Ubuntu Unleashed 2012 Edition: Covering 11.10 and 12.04 (7th Edition)

Read more
sfmadmax

So I use Xchat daily and connect to a private IRC server to talk with my colleagues. I also have a BIP server in the office to record all of the IRC transcripts; this way I never miss any conversations regardless of the time of day. Because the BIP server is behind a firewall on the company’s network, I can’t access it from the outside. For the past year I’ve been working around this by connecting to my company’s firewall via ssh and creating a SOCKS tunnel, then simply directing xchat to talk through my local SOCKS proxy.

To do this, open a terminal and issue:

ssh -CND <LOCAL_IP_ADDRESS>:<PORT> <USER>@<SSH HOST>

Ex: ssh -CND 192.168.1.44:9999 sfeole@companyfirewall.com

Starting ssh with -CND:

‘D’ specifies local “dynamic” application-level port forwarding: it works by allocating a socket to listen on the given port on the local side, optionally bound to the specified bind_address. ‘C’ adds compression to the datastream, and ‘N’ is a safeguard which protects the user from executing remote commands.

192.168.1.44 is my IPv4 address

9999 is the local port I’m going to open and direct traffic through
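Before pointing xchat at the proxy, you can sanity-check the tunnel with anything that speaks SOCKS, for example curl (the URL is just a placeholder):

curl --socks5 192.168.1.44:9999 http://example.com/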

After the SSH tunnel is open I now need to launch xchat, navigate to Settings -> Preferences -> Network Setup and configure xchat to use my local IP (192.168.1.44) and local port (9999), then press OK and Reconnect.

I should now be able to connect to the IRC server behind the firewall. Usually I run through this process a few times a day, so it becomes somewhat of a tedious annoyance after a while.

Recently I finished a cool python3 script that does all of this in one quick command.

The script does the following:

1.) identify the ipv4 address of the interface device you specify

2.) configure xchat.conf to use the new ipv4 address and port specified by the user

3.) open the ssh tunnel using the SSH -CND command from above

4.) launch xchat and connect to your server (assuming you have it set to auto connect)

To use it simply run

$./xchat.py -i <interface> -p <port>

ex: $./xchat.py -i wlan0 -p 9999

The user can select wlan0 or eth0 and of course their desired port. When you’re done with the tunnel, simply issue <Ctrl-C> to kill it, and voilà!

https://code.launchpad.net/~sfeole/+junk/xchat

#!/usr/bin/env python3
#Sean Feole 2012,
#
#xchat proxy wrapper, for those of you that are constantly on the go:
#   --------------  What does it do? ------------------
# Creates a SSH Tunnel to Proxy through and updates your xchat config
# so that the user does not need to muddle with program settings

import signal
import shutil
import sys
import subprocess
import argparse
import re
import time

proxyhost = "myhost.company.com"
proxyuser = "sfeole"
localusername = "sfeole"

def get_net_info(interface):
    """
    Obtains your IPv4 address
    """

    # Parse the address out of legacy ifconfig output: second line,
    # second field, with the leading "addr:" (5 characters) dropped.
    myaddress = subprocess.getoutput("/sbin/ifconfig %s" % interface)\
                .split("\n")[1].split()[1][5:]
    # With no address assigned, the second field is "BROADCAST", so the
    # slice above yields "CAST".
    if myaddress == "CAST":
        print ("Please Confirm that your Network Device is Configured")
        sys.exit()
    else:
        return (myaddress)

def configure_xchat_config(Proxy_ipaddress, Proxy_port):
    """
    Reads your current xchat.conf and creates a new one in /tmp
    """

    in_file = open("/home/%s/.xchat2/xchat.conf" % localusername, "r")
    output_file = open("/tmp/xchat.conf", "w")
    for line in in_file.readlines():
        line = re.sub(r'net_proxy_host.+', 'net_proxy_host = %s'
                 % Proxy_ipaddress, line)
        line = re.sub(r'net_proxy_port.+', 'net_proxy_port = %s'
                 % Proxy_port, line)
        output_file.write(line)
    output_file.close()
    in_file.close()
    shutil.copy("/tmp/xchat.conf", "/home/%s/.xchat2/xchat.conf"
                 % localusername)

def ssh_proxy(ProxyAddress, ProxyPort, ProxyUser, ProxyHost):
    """
    Create SSH Tunnel and Launch Xchat
    """

    ssh_address = "%s:%i" % (ProxyAddress, ProxyPort)
    user_string = "%s@%s" % (ProxyUser, ProxyHost)
    ssh_open = subprocess.Popen(["/usr/bin/ssh", "-CND", ssh_address,
                 user_string], stdout=subprocess.PIPE, stdin=subprocess.PIPE)

    time.sleep(1)
    print ("")
    print ("Kill this tunnel with Ctrl-C")
    time.sleep(2)
    subprocess.call("xchat")
    stat = ssh_open.poll()
    while stat is None:
        stat = ssh_open.poll()

def main():
    """
    Core Code
    """

    parser = argparse.ArgumentParser()
    parser.add_argument('-i', '--interface',
                        help="Select the interface you wish to use",
                        choices=['eth0', 'wlan0'],
                        required=True)
    parser.add_argument('-p', '--port',
                        help="Select the internal port you wish to bind to",
                        required=True, type=int)
    args = parser.parse_args()

    proxyip = (get_net_info("%s" % args.interface))
    configure_xchat_config(proxyip, args.port)
    print (proxyip, args.port, proxyuser, proxyhost)

    ssh_proxy(proxyip, args.port, proxyuser, proxyhost)

if __name__ == "__main__":
    sys.exit(main())

Refer to the launchpad address above for more info.


Read more
Prakash

From the article:

“You’d be a fool to use anything but Linux.” :)

Most Linux people know that Google uses Linux on its desktops as well as its servers. Some know that Ubuntu Linux is Google’s desktop of choice and that it’s called Goobuntu. But almost no one outside of Google knew exactly what was in it or what roles Ubuntu Linux plays on Google’s campus, until now.

Read More.


Read more
Prakash

Apple — one of the most closed companies in the world — is actually using a lot of open source software. Licensing information in the Apple iPhone proves this. Go to the legal section on the iPhone and it cites Linux kernel developer Ted Ts’o for his code. SUSE Linux is there, too.

Zemlin made the point that Apple has hundreds of billions of dollars in cash, which is enough to buy HP, Intel and Dell combined. Instead, Apple purchased the copyright to the Common Unix Printing System (CUPS), which now is on every Linux and Apple system.

The list of companies using Linux does not stop at Apple. Microsoft, which once equated open source with communism, is now a top contributor to the Linux Kernel project. And VMware is getting on the bandwagon.

Read More.


Read more

Most of the components of the 64-bit ARM toolchain have been released, so I've put together some details on building a cross compiler for aarch64. At present, this is only binutils & compiler (i.e., no libc), so it's probably not useful for building applications. However, I have a 64-bit ARM kernel building without any trouble.

Update: looking for an easy way to install a cross-compiler on Ubuntu or debian? Check out aarch64 cross compiler packages for Ubuntu & Debian.

pre-built toolchain

If you're simply looking to download a cross compiler, here's one I've built earlier: aarch64-cross.tar.gz (.tar.gz, 85MB). It's built for an x86_64 build machine, using Ubuntu 12.04 LTS, but should work with other distributions too.

The toolchain is configured for paths in /opt/cross/. To install it, do a:

[jk@pecola ~]$ sudo mkdir /opt/cross
[jk@pecola ~]$ sudo chown $USER /opt/cross
[jk@pecola ~]$ tar -C /opt/cross/ -xf aarch64-x86_64-cross.tar.gz

If you'd like to build your own, here's how:

initial setup

We're going to be building in ~/build/arm64-toolchain/, and installing into /opt/cross/aarch64/. If you'd prefer to use other paths, simply change these in the commands below.

[jk@pecola ~]$ mkdir -p ~/build/arm64-toolchain/
[jk@pecola ~]$ cd ~/build/arm64-toolchain/
[jk@pecola ~]$ prefix=/opt/cross/aarch64/

We'll also need a few packages for the build:

[jk@pecola ~]$ sudo apt-get install bison flex libmpfr-dev libmpc-dev texinfo

binutils

I have a git repository with a recent version of ARM's aarch64 support, plus a few minor updates at git://kernel.ubuntu.com/jk/arm64/binutils.git (or browse the gitweb view). To build:

Update: arm64 support has been merged into upstream binutils, so you can now use the official git repository. The commit 02b16151 builds successfully for me.

[jk@pecola arm64-toolchain]$ git clone git://gcc.gnu.org/git/binutils.git
[jk@pecola arm64-toolchain]$ cd binutils
[jk@pecola binutils]$ ./configure --prefix=$prefix --target=aarch64-none-linux
[jk@pecola binutils]$ make
[jk@pecola binutils]$ make install
[jk@pecola binutils]$ cd ..

kernel headers

Next up, the kernel headers. I'm using Catalin Marinas' kernel tree on kernel.org here. We don't need to do a full build (we don't have a compiler yet..), just the headers_install target.

[jk@pecola arm64-toolchain]$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64.git
[jk@pecola arm64-toolchain]$ cd linux-aarch64
[jk@pecola linux-aarch64]$ git reset --hard b6fe1645
[jk@pecola linux-aarch64]$ make ARCH=arm64 INSTALL_HDR_PATH=$prefix headers_install
[jk@pecola linux-aarch64]$ cd ..

gcc

And now we should have things ready for the compiler build. I have a git tree up at git://kernel.ubuntu.com/jk/arm64/gcc.git (gitweb), but this is just the aarch64 branch of upstream gcc.

[jk@pecola arm64-toolchain]$ git clone git://kernel.ubuntu.com/jk/arm64/gcc.git
[jk@pecola arm64-toolchain]$ cd gcc/aarch64-branch/
[jk@pecola aarch64-branch]$ git reset --hard d6a1e14b
[jk@pecola aarch64-branch]$ ./configure --prefix=$prefix \
    --target=aarch64-none-linux --enable-languages=c \
    --disable-threads --disable-shared --disable-libmudflap \
    --disable-libssp --disable-libgomp --disable-libquadmath
[jk@pecola aarch64-branch]$ make
[jk@pecola aarch64-branch]$ make install
[jk@pecola aarch64-branch]$ cd ../..

That's it! You should have a working compiler for arm64 kernels in /opt/cross/aarch64.
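As a quick smoke test, here's a sketch of how I'd kick off a kernel build with the new compiler (reusing the linux-aarch64 tree from the headers step; the -j4 is just my preference):

[jk@pecola linux-aarch64]$ export PATH=/opt/cross/aarch64/bin:$PATH
[jk@pecola linux-aarch64]$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux- defconfig
[jk@pecola linux-aarch64]$ make -j4 ARCH=arm64 CROSS_COMPILE=aarch64-none-linux- Image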

Read more