Canonical Voices

Posts tagged with 'linux'

Prakash

Hackable Lego Robot Runs Linux

The Lego Mindstorms EV3 is the first major revamp of the Lego Group’s programmable robot kit since 2006, and the first to run embedded Linux.

Unveiled at CES in Las Vegas yesterday, with the first public demos starting today at the Kids Play Summit at the Venetian Hotel, the $350 robot is built around an upgraded “Intelligent Brick” computer. Lego swapped out the previous microcontroller for a 300MHz ARM9 processor capable of running new Linux-based firmware. As a result, the kids-oriented Mindstorms EV3 offers far more programmability than the NXT series, which was last updated in 2009, says Lego.

Read More.

Read more
Prakash

The team behind the Samba file, print, and authentication server suite for Microsoft Windows clients announced the release of Samba version 4 yesterday. This version includes significant new capabilities that offer an open source replacement for many enterprise infrastructure roles currently delivered exclusively by Microsoft software, including acting as a domain controller, providing SMB2.1 protocol support, delivering clustering, and offering a virtual filesystem (VFS) interface. It comes with Coverity security certification and easy upgrade scripts. The release notes include details of all changes.

Notably, this includes the first open source implementation of Microsoft’s Active Directory protocols; Samba previously only offered Windows NT domain controller functions. According to the press release, “Samba 4.0 provides everything needed to serve as an Active Directory Compatible Domain Controller for all versions of Microsoft Windows clients currently supported by Microsoft, including the recently released Windows 8.”

Samba 4 can join existing Active Directory domains and provides all the functionality necessary to host a domain that can be joined by Microsoft Active Directory servers. It provides all the services needed by Microsoft Exchange, as well as opening up the possibility of fully open source alternatives to Exchange, such as the OpenChange project.
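To get a feel for what this enables in practice, standing up a new domain is a single provisioning step with Samba 4's samba-tool. A sketch only — the realm and domain names here are placeholders, and the release notes and Samba wiki cover the full procedure:

$ sudo samba-tool domain provision --realm=EXAMPLE.COM --domain=EXAMPLE \
      --server-role=dc --dns-backend=SAMBA_INTERNAL

After provisioning, Windows clients can join EXAMPLE.COM as they would any Active Directory domain.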

Read More.

Read more
Prakash

While ARM is gaining a lot of momentum, the challenge with ARM until now has been that each vendor's architecture is very different and requires a separate kernel and entire OS stack.

With Linux Kernel 3.7, this has changed for the better.

ARM’s problem was that, unlike the x86 architecture, where one Linux kernel could run on almost any PC or server, almost every ARM system required its own customized Linux kernel. Now with 3.7, ARM architectures can use one single vanilla Linux kernel while keeping their special device sauce in device trees.

The end result is that ARM developers will be able to boot and run Linux on their devices and then worry about getting all the extras to work. This will save them, and the Linux kernel developers, a great deal of time and trouble.

Just as good for those ARM architects and programmers who are working on high-end, 64-bit ARM systems, Linux now supports 64-bit ARM processors. 64-bit ARM CPUs won't ship in commercial quantities until 2013, but when they do arrive, programmers eager to try 64-bit ARM processors on servers will have Linux ready for them.

Read More.

Read more
Prakash

From PC World.

Ubuntu is a widely popular open-source Linux distribution with eight years of maturity under its belt, and more than 20 million users. Linux accounts for roughly 5 percent of desktop OSs, and at least one survey suggests that about half of those installations are Ubuntu. (Windows, meanwhile, accounts for about 84 percent.)

The timing of this latest Ubuntu release couldn’t be better for Windows users faced with the paradigm-busting Windows 8 and the big decision of whether to take the plunge.

Initial uptake of Windows 8 has been unenthusiastic, according to reports, and a full 80 percent of businesses will never adopt it, Gartner predicts. As a result, Microsoft’s big gamble may be desktop Linux’s big opportunity.

So, now that Canonical has thrown down the gauntlet, let’s take a closer look at Ubuntu 12.10 to see how it compares with Windows 8 from a business user’s perspective.


                               Windows 8 Pro (x86)          Ubuntu 12.10
License fee                    $39 to $69 (upgrade)         Free
CPU architectures supported    x86, x86-64                  x86, x86-64, ARM, PPC
Minimum RAM                    1GB (32-bit) / 2GB (64-bit)  512MB
Minimum hard-disk space        20GB                         5GB
Concurrent multiuser support   No                           Yes
Workspaces                     One                          Two or more
Virtualization                 Hyper-V                      KVM
License                        Not applicable               Main: GPL/open source; Restricted: non-GPL
Productivity software included None                         LibreOffice
Graphics tools included        No                           Yes

Read More.

Read more
alex

In our last exciting episode, we learned how to capture a valgrind log. Today we’re going to take the next step and learn how to actually use it to debug memory leaks.
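(If you missed that episode, a typical capture looks roughly like the line below — a sketch, not the canonical incantation; the previous post has the details. The G_SLICE and G_DEBUG settings make glib's slice allocator behave in a valgrind-friendly way.)

$ G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind --leak-check=full --log-file=valgrind.log nm-applet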

There are a few prerequisites:

  1. know C. If you don't know it, go read The C Programming Language, often referred to as K&R C. Be sure to understand the sections on pointers, and after you do, come back to my blog. See you in 2 weeks!
  2. a nice supply of your favorite beverages and snacks. I prefer coffee and bacon, myself. Get ready because you’re about to read an epic 2276 word blog entry.

That’s it. Ok, ready? Let’s go!

navigate the valgrind log
Open the valgrind log that you collected. If you don’t have one, you can grab one that I’ve already collected. Take a deep breath. It looks scary but it’s not so bad. I like to skip straight to the good part near the bottom. Search the file for “LEAK SUMMARY”. You’ll see something like:

==13124== LEAK SUMMARY:
==13124==    definitely lost: 916,130 bytes in 37,528 blocks
==13124==    indirectly lost: 531,034 bytes in 12,735 blocks
==13124==      possibly lost: 82,297 bytes in 891 blocks
==13124==    still reachable: 2,578,733 bytes in 42,856 blocks
==13124==         suppressed: 0 bytes in 0 blocks
==13124== Reachable blocks (those to which a pointer was found) are not shown.
==13124== To see them, rerun with: --leak-check=full --show-reachable=yes

You can see that valgrind thinks we’ve definitely leaked some memory. So let’s go figure out what leaked.

Valgrind lists all the leaks, in order from smallest to largest. The leaks are also categorized as “possibly” or “definitely”. We’ll want to focus on “definitely” for now. Right above the summary, you’ll see the worst, definite leak:

==13124== 317,347 (77,312 direct, 240,035 indirect) bytes in 4,832 blocks are definitely lost in loss record 10,353 of 10,353
==13124==    at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==13124==    by 0x74E3A78: g_malloc (gmem.c:159)
==13124==    by 0x74F6CA2: g_slice_alloc (gslice.c:1003)
==13124==    by 0x74F7ABD: g_slist_prepend (gslist.c:265)
==13124==    by 0x4275A4: get_menu_item_for_ap (applet-device-wifi.c:879)
==13124==    by 0x427ACE: wireless_add_menu_item (applet-device-wifi.c:1138)
==13124==    by 0x41815B: nma_menu_show_cb (applet.c:1643)
==13124==    by 0x4189EC: applet_update_indicator_menu (applet.c:2218)
==13124==    by 0x74DDD52: g_main_context_dispatch (gmain.c:2539)
==13124==    by 0x74DE09F: g_main_context_iterate.isra.23 (gmain.c:3146)
==13124==    by 0x74DE499: g_main_loop_run (gmain.c:3340)
==13124==    by 0x414266: main (main.c:106)

Wow, we lost 300K of memory in just a few hours. Now imagine if you don’t reboot your laptop for a week. Yeah, that’s not so good. Time for a coffee and bacon break, the next part is about to get fun.

read the stack trace
What you saw above is a stack trace, and it’s printed chronologically “backwards”. In this example, malloc() was called by g_malloc(), which was called by g_slice_alloc(), which in turn was called by g_slist_prepend(), which itself was called by get_menu_item_for_ap() and so forth. The first function ever called was main(), which should hopefully make sense.

At this point, we need to use a little bit of extra knowledge to understand what is happening. The first function, main(), is in our program, nm-applet. That's fairly easy to understand. However, the next few functions that begin with g_main_ don't actually live inside nm-applet. They are part of glib, a library that nm-applet depends on. I happened to know this off the top of my head, but if you're ever unsure, you can just google for the function name. After searching, we can see that those functions are in glib, and while there is some magic happening there, we can blissfully ignore it, because we soon jump back into nm-applet code, starting with applet_update_indicator_menu().

a quick side note
Many Linux programs will have a stack trace similar to the above. The program starts off in its own main(), but will call various other libraries on your system, such as glib, and then jump back to itself. What’s going on? Well, glib provides a feature known as a “main loop” which is used by the program to look for inputs and events, and then react to them. It’s a common programming paradigm, and rather than have every application in the world write their own main loop, it’s easier if everyone just uses the one provided by glib.
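To make that concrete, here's a minimal sketch of the pattern using plain glib — generic example code, not anything from nm-applet. We register a callback, then hand control to glib's loop, which dispatches events to us:

#include <glib.h>

/* Called by glib's main loop every 1000ms. */
static gboolean on_timeout (gpointer data)
{
    g_print ("tick\n");
    return TRUE;    /* TRUE keeps the timeout source installed */
}

int main (void)
{
    GMainLoop *loop = g_main_loop_new (NULL, FALSE);

    g_timeout_add (1000, on_timeout, NULL);  /* register an event source */
    g_main_loop_run (loop);                  /* blocks, dispatching events */
    g_main_loop_unref (loop);
    return 0;
}

Compile it with gcc example.c $(pkg-config --cflags --libs glib-2.0). Every g_main_* frame in our stack trace is this same dispatch machinery at work.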

The other observation is to note how the function names appear prominently in the stack trace. Pundits wryly say that naming things is one of the hardest things in computer science, and I completely agree. So take care when naming your functions, because people other than you will definitely see them and need to understand them!

Alright, let’s get back to the stack trace. We can see a few functions that look like they belong to nm-applet, based on their names and their associated filenames. For example, the function wireless_add_menu_item() is in the file applet-device-wifi.c on line 1138. Now you see why we wanted symbols from the last episode. Without the debug symbols, all we would have seen would have been a bunch of useless ??? and we’d be gnashing our teeth and wishing for more bacon right now.

Finally, we see a few more g_* functions, which means we’re back in the memory allocation functions provided by glib. It’s important to understand at this point that g_malloc() is not the memory leak. g_malloc() is simply doing whatever nm-applet asks it to do, which is to allocate memory. The leak is highly likely to be in nm-applet losing a reference to the pointer returned by g_malloc().

What does it mean?
Now we’re ready to start the real debugging. We know approximately where we are leaking memory inside nm-applet: get_menu_item_for_ap() which is the last function before calling the g_* memory functions. Time to top off on coffee because we’re about to get our hands dirty.

reading the source
The whole point of open source is being able to read the source. Are you as excited as I am? I know you are!

First, let’s get the source to nm-applet. Assuming you are using Ubuntu and you are using 12.04, you’d simply say:

$ cd Projects
$ mkdir network-manager-gnome
$ cd network-manager-gnome
$ apt-get source network-manager-gnome
$ cd network-manager-applet-0.9.4.1

Woo hoo! That wasn’t hard, right?

side note #2
Contrary to popular belief, reading code is harder than writing code. When you write code, you are transmitting the thoughts of your messy brain into an editor, and as long as it kinda works, you’re happy. When you read code, now you’re faced with the problem of trying to understand exactly what the previous messy brain wrote down and making sense of it. Depending on how messy that previous brain was, you may have real trouble understanding the code. This is where pencil and paper and plenty of coffee come into play, where you literally trace through what the program is doing to try and understand it.

Luckily there are at least a few tools to help you do this. My favorite tools are cscope and ctags, which help me to rapidly understand the skeleton of a program and navigate around its complex structure.

Assuming you are in the network-manager-applet-0.9.4.1 source tree:

$ sudo apt-get install cscope ctags
$ cscope -bqR
$ ctags -R
$ cscope -dp4


You are now presented with a menu. Use control-n and control-p to navigate input fields at the bottom. Try navigating to “Find this C symbol:” and then type in get_menu_item_for_ap, and press enter. The search results are returned, and you can press '0' or '1' to jump to either of the locations where the function is referenced. You can also press the space bar to see the final search result. Play around with some of the other search types and see what happens. I'll talk about ctags in a bit.

Alrighty, let’s go looking for our suspicious nm-applet function. Start up cscope as described above. Navigate to “Find this global definition:” and search for get_menu_item_for_ap. cscope should just directly put you in the right spot.

Based on our stack trace, it looks like we’re doing something suspicious on line 879, so let’s go see what it means.

 869         if (dup_data.found) {
 870 #ifndef ENABLE_INDICATOR
 871                 nm_network_menu_item_best_strength (dup_data.found, nm_acce
 872                 nm_network_menu_item_add_dupe (dup_data.found, ap);
 873 #else
 874                 GSList *dupes = NULL;
 875                 const char *path;
 876 
 877                 dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupe
 878                 path = nm_object_get_path (NM_OBJECT (ap));
 879                 dupes = g_slist_prepend (dupes, g_strdup (path));
 880 #endif
 881                 return NULL;
 882         }

Cool, we can now see where the source code is matching up with the valgrind log.

Let’s start doing some analysis. The first thing to note are the #ifdef blocks on lines 870, 873, and 880. You should know that ENABLE_INDICATOR is defined, meaning we do not execute the code in lines 871 and 872. Instead, we do lines 874 to 879, and then we do 881. Why do we do 881 if it is after the #endif? That’s because we fell off the end of the #ifdef block, and then we do whatever is next, after we fall off, namely returning NULL.

Don’t worry, I don’t know what’s going on yet, either. Time for a refill!

Back? Great. Alright, valgrind says that we’re doing something funky with g_slist_prepend().

==13124==    by 0x74F7ABD: g_slist_prepend (gslist.c:265)

And our relevant code is:

 874                 GSList *dupes = NULL;
 875                 const char *path;
 876 
 877                 dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupe
 878                 path = nm_object_get_path (NM_OBJECT (ap));
 879                 dupes = g_slist_prepend (dupes, g_strdup (path));
 880 #endif
 881                 return NULL;

We can see that we declare the pointer *dupes on line 874, but we don’t do anything with it. Then, we assign something to it on line 877. Then, we assign something to it again on line 879. Finally, we end up not doing anything with *dupes at all, and just return NULL on line 881.

This definitely seems weird and worth a second glance. At this point, I’m asking myself the following questions:

  • did g_object_get_data() allocate memory?
  • did g_slist_prepend() allocate memory?
  • are we overwriting *dupes on line 879? that might be a leak.
  • is it safe to just return NULL without doing anything to dupes? maybe that’s our leak?

Let’s take them in order.

did g_object_get_data() allocate memory?
g_object_get_data has online documentation, so that’s our first stop. The documentation says:

Returns :
the data if found, or NULL if no such data exists. [transfer none]

Since I am not 100% familiar with glib terminology, I guess [transfer none] means that g_object_get_data() doesn’t actually allocate memory on its own. But let’s be 100% sure. Time to grab the glib source and find out for ourselves.

$ apt-get source libglib2.0-0
$ cd glib2.0-2.32.1
$ cscope -bqR
$ ctags -R
$ cscope -dp4
search for global definition of g_object_get_data

Pretty simple function.

3208 gpointer
3209 g_object_get_data (GObject     *object,
3210                    const gchar *key)
3211 {
3212   g_return_val_if_fail (G_IS_OBJECT (object), NULL);
3213   g_return_val_if_fail (key != NULL, NULL);
3214 
3215   return g_datalist_get_data (&object->qdata, key);
3216 }

Except I have no idea what g_datalist_get_data() does. Maybe that guy is allocating memory. Now I’ll use ctags to make my life easier. In vim, put your cursor over the “g” in “g_datalist_get_data” and then press control-]. This will “step into” the function. Magic!

 844 gpointer
 845 g_datalist_get_data (GData       **datalist,
 846                      const gchar *key)
 847 {
 848   gpointer res = NULL; 
 ... 
 856   d = G_DATALIST_GET_POINTER (datalist);
 ...
 859       data = d->data;
 860       data_end = data + d->len;
 861       while (data < data_end)
 862         {
 863           if (strcmp (g_quark_to_string (data->key), key) == 0)
 864             {
 865               res = data->data;
 866               break;
 867             }
 868           data++;
 869         }
 ... 
 874   return res;
 875 }

This is a pretty simple loop, walking through an existing list of pointers which have already been allocated somewhere else, starting on line 861. We do our comparison on line 863, and if we get a match, we assign whatever we found to res on line 865. Note that all we are doing here is a simple assignment. We are not allocating any memory!

Finally, we return our pointer on line 874. Press control-t in vim to pop back to your last location.

Now we know for sure that g_object_get_data() and g_datalist_get_data() do not allocate any memory at all, so there can be no possibility of a leak here. Let’s try the next function.

did g_slist_prepend() allocate memory?
First, read the documentation, which says:

The return value is the new start of the list, which may have changed, so make sure you store the new value.

This probably means it allocates memory for us, but let’s double-check just to be sure. Back to cscope!

 259 GSList*
 260 g_slist_prepend (GSList   *list,
 261                  gpointer  data)
 262 {
 263   GSList *new_list;
 264 
 265   new_list = _g_slist_alloc ();
 266   new_list->data = data;
 267   new_list->next = list;
 268 
 269   return new_list;
 270 }

Ah ha! Look at line 265. We are 100% definitely allocating memory, and returning it on line 269. Things are looking up! Let’s keep going with our questions.

are we overwriting *dupes on line 879? that might be a leak.
Remember:

 877                 dupes = g_object_get_data (G_OBJECT (dup_data.found), "dupe
 878                 path = nm_object_get_path (NM_OBJECT (ap));
 879                 dupes = g_slist_prepend (dupes, g_strdup (path));

We’ve already proven to ourselves that line 877 doesn’t allocate any memory. It just sets dupes to some value. However, on line 879, we do allocate memory. It is equivalent to this code:

  int *dupes;
  dupes = (int *) 0x12345678;  /* pretend this came from g_object_get_data() */
  dupes = malloc(128);

So simply setting dupes to the return value of g_object_get_data() and later overwriting it with the return value of malloc() does not inherently cause a leak.

By way of counter-example, the below code is a memory leak:

  int *dupes;
  dupes = malloc(64);
  dupes = malloc(128);    /* leak! */

The above essentially illustrates the scenario I was worried about. I was worried that g_object_get_data() allocated memory, and then g_slist_prepend() also allocated memory which would have been a leak because the first value of dupes got scribbled over by the second value. My worry turned out to be incorrect, but that is the type of detective work you have to think about.

As a clearer example of why the above is a leak, consider the next snippet:

  int *dupes1, *dupes2;
  dupes1 = malloc(64);     /* ok */
  dupes2 = malloc(128);    /* ok */
  dupes1 = dupes2;         /* leak! */

First we allocate dupes1. Then allocate dupes2. Finally, we set dupes1 = dupes2, and now we have a leak. No one knows what the old value of dupes1 was, because we scribbled over it, and it is gone forever.

is it safe to just return NULL without doing anything to dupes? maybe that’s our leak?
We can definitively say that it is not safe to return NULL without doing anything to dupes. We definitely allocated memory, stuck it into dupes, and then threw dupes away. This is our smoking gun.
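Though the actual fix is next episode's material, you can already guess its shape: the new list head has to be stored somewhere instead of being thrown away. Hypothetically — this is an illustration, not the real patch — that could mean writing it back onto the object it came from:

  g_object_set_data (G_OBJECT (dup_data.found), "dupes", dupes);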

Next time, we’ll see how to actually fix the problem.

Read more
Prakash

Over €10 million (approximately £8 million or $12.8 million) has been saved by the city of Munich, thanks to its development and use of the city's own Linux platform. The calculation of savings follows a question by the city council's independent Free Voters (Freie Wähler) group.

Read More.

Urge your city to save taxpayer money; it's your hard-earned money.


Read more
Prakash

After installing Ubuntu 12.10, the first thing I wanted to do was disable reverse scrolling: you scroll down and it scrolls up! Apple calls this “natural scrolling”. I don't know what is natural about it :) but maybe it is natural for Apple users.

Open a terminal and edit the .Xmodmap file in your home directory using any editor, for example:

 gedit ~/.Xmodmap

Here you should see this:

pointer = 1 2 3 5 4 6 7 8 9 10 11 12

You will note that in the sequence of numbers, 5 and 4 are interchanged. Change it back to the normal sequence:

pointer = 1 2 3 4 5 6 7 8 9 10 11 12

Now you are done; logging out and back in should do the job.
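If you would rather not log out, you can apply the file immediately (assuming you saved it in your home directory) with:

 xmodmap ~/.Xmodmap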

If you have Ubuntu Tweak installed, just go to Tweaks > Miscellaneous and you will see an option to toggle Natural Scrolling on or off.


Read more
brendandonegan

I find that sometimes the Network Manager applet in Ubuntu can be a little temperamental (apologies to the maintainer, cyphermox, if he's reading this – but such is the nature of software). Sometimes it won't show available routers, and if that's the case, I've established a little workaround that I'm telling you about mainly because it involves a script I wrote that lives in a somewhat obscure place in Ubuntu.

Step one in the workaround is needed if you don’t know which networks are available in advance. If you’re sitting in your home then you’ll probably not need this step since most people know their router SSID. If you don’t then you can scan using:

nmcli dev wifi list

This is really reliable and always works if your WiFi hardware is working.

The second step is to use the SSID to create the connection using the script I wrote:

sudo /usr/share/checkbox/scripts/create_connection $SSID --security=wpa --key=$WPA_KEY

If the router doesn’t use any security (which nmcli dev wifi list will tell you) then you don’t need –security or –key. If the router doesn’t use WPA2 (maybe it uses WEP), then you’re out of luck – and deservedly so. Change the routers security settings if you can!


Read more

The Ubuntu Developer Summit was held in Copenhagen last week, to discuss plans for the next six-month cycle of Ubuntu. This was the most productive UDS that I've been to — maybe it was the shorter four-day schedule, or the overlap with Linaro Connect, but it sure felt like a whirlwind of activity.

I thought I'd share some details about some of the sessions that cover areas I'm working on at the moment. In no particular order:

Improving cross-compilation

Blueprint: foundations-r-improve-cross-compilation

This plan is part of a multi-cycle effort to improve cross-compilation support in Ubuntu. Progress is generally going well — the consensus from the session was that the components are fairly close to complete, but we still need some work to pull those parts together into something usable.

So, this cycle we'll be working on getting that done. While we have a few bugfixes and infrastructure updates to do, one significant part of this cycle’s work will be to document the “best-practices” for cross builds in Ubuntu, on wiki.ubuntu.com. This process will be heavily based on existing pages on the Linaro wiki. Because most of the support for cross-building is already done, the actual process for cross-building should be fairly straightforward, but needs to be defined somewhere.

I'll post an update when we have a working draft on the Ubuntu wiki, stay tuned for details.

Rapid archive bringup for new hardware

Blueprint: foundations-r-rapid-archive-bringup

I'd really like for there to be a way to get an Ubuntu archive built “from scratch”, to enable custom toolchain/libc/other system components to be built and tested. This is typically useful when bringing up new hardware, or testing rebuilds with new compiler settings. Because we may be dealing with new hardware, doing this bootstrap through cross-compilation is something we'd like too.

Eventually, it would be great to have something as straightforward as the OpenEmbedded or OpenWRT build process to construct a repository with a core set of Ubuntu packages (say, minbase), for previously-unsupported hardware.

The archive bootstrap process isn't done often, and can require a large amount of manual intervention. At present, there's only a couple of folks who know how to get it working. The plan here is to document the bootstrap process in this cycle, so that others can replicate the process, and possibly improve the bits that are a little too janky for general consumption.

ARM64 / ARMv8 / aarch64 support

Blueprint: foundations-r-aarch64

This session is an update for progress on the support for ARMv8 processors in Ubuntu. While no general-purpose hardware exists at the moment, we want to have all the pieces ready for when we start seeing initial implementations. Because we don't have hardware yet, this work has to be done in a cross-build environment; another reason to keep on with the foundations-r-improve-cross-compilation plan!

So far, toolchain progress is going well, with initial cross toolchains available for quantal.

Although kernel support isn’t urgent at the moment, we’ll be building an initial kernel-headers package for aarch64. There's also a plan to get a page listing the aarch64-cross build status of core packages, so we'll know what is blocked for 64-bit ARM enablement.

We’ve also got a bunch of workitems for volunteers to fix cross-build issues as they arise. If you're interested, add a workitem in the blueprint, and keep an eye on it for updates.

Secure boot support in Ubuntu

Blueprint: foundations-r-secure-boot

This session covered the progress of secure boot support as of the 12.10 Quantal Quetzal release, items that are planned for 13.04, and backports for 12.04.2.

As for 12.10, we’ve got the significant components of secure boot support into the release — the signed boot chain. The one part that hasn't hit 12.10 yet is the certificate management & update infrastructure, but that is planned to reach 12.10 by way of a not-too-distant-future update.

The foundations team also mentioned that they were starting the 12.04.2 backport right after UDS, which will bring secure boot support to our current “Long Term Support” (LTS) release. Since the LTS release is often preferred in Ubuntu preinstall situations, this may be used as a base for hardware enablement on secure boot machines. Combined with the certificate management tools (described at sbkeysync & maintaining uefi key databases), and the requirement for “custom mode” in general-purpose hardware, this will allow for user-defined trust configuration in an LTS release.

As for 13.04, we're planning to update the shim package to a more recent version, which will have Matthew Garrett's work on the Machine Owner Key plan from SuSE.

We're also planning to figure out support for signed kernel modules, for users who wish to verify all kernel-level code. Of course, this will mean some changes to things like DKMS, which run custom module builds outside of the normal Ubuntu packages.

Netboot with secure boot is still in progress, and will require some fixes to GRUB2.

And finally, the sbsigntools codebase could do with some new testcases, particularly for the PE/COFF parsing code. If you're interested in contributing, please contact me at jeremy.kerr@canonical.com.

Read more
Prakash

Ubuntu 12.10 is here. With this release there is no CD image, only a DVD image, which is 800 MB in size. Torrent is my preferred download method.

Ubuntu 12.10 (torrent links and direct downloads):

  • Ubuntu Desktop 64-bit Edition: Torrent | Main Server
  • Ubuntu Desktop 32-bit Edition: Torrent | Main Server
  • Ubuntu Server 64-bit Edition: Torrent | Main Server
  • Ubuntu Server 32-bit Edition: Torrent | Main Server

Have fun :)

Ubuntu Unleashed 2012 Edition: Covering 11.10 and 12.04 (7th Edition)

Read more
sfmadmax

So I use Xchat daily and connect to a private IRC server to talk with my colleagues. I also have a BIP server in the office to record all of the IRC transcripts; this way I never miss any conversations regardless of the time of day. Because the BIP server is behind a firewall on the company's network, I can't access it from the outside. For the past year I've been working around this by connecting to my company's firewall via ssh, creating a SOCKS tunnel, and simply directing xchat to talk through my local SOCKS proxy.

To do this, open a terminal and issue:

ssh -CND <LOCAL_IP_ADDRESS>:<PORT> <USER>@<SSH HOST>

Ex: ssh -CND 192.168.1.44:9999 sfeole@companyfirewall.com

Starting ssh with -CND:

‘D’ specifies local “dynamic” application-level port forwarding: ssh allocates a socket to listen on the given port on the local side, optionally bound to the specified bind_address. ‘C’ adds compression to the data stream, and ‘N’ is a safeguard which protects the user from executing remote commands.

192.168.1.44 is my IPv4 address

9999 is the local port I'm going to open and direct traffic through

After the SSH tunnel is open I now need to launch xchat, navigate to Settings -> Preferences -> Network Setup and configure xchat to use my local IP (192.168.1.44) and local port (9999) then press OK then Reconnect.

I should now be able to connect to the IRC server behind the firewall. Usually I run through this process a few times a day, so it becomes somewhat of a tedious annoyance after a while.
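A quick way to confirm the tunnel is actually up is to push some other traffic through it first, for example with curl's SOCKS support (any external URL will do):

curl --socks5 192.168.1.44:9999 http://example.com/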

Recently I finished a cool python3 script that does all of this in one quick command.

The following code will do the following:

1.) identify the ipv4 address of the interface device you specify

2.) configure xchat.conf to use the new ipv4 address and port specified by the user

3.) open the ssh tunnel using the SSH -CND command from above

4.) launch xchat and connect to your server (assuming you have it set to auto connect)

To use it simply run

$./xchat.py -i <interface> -p <port>

ex: $./xchat.py -i wlan0 -p 9999

The user can select wlan0 or eth0 and, of course, their desired port. When you're done with the tunnel, simply issue <Ctrl-C> to kill it and voila!

https://code.launchpad.net/~sfeole/+junk/xchat

#!/usr/bin/env python3
#Sean Feole 2012,
#
#xchat proxy wrapper, for those of you that are constantly on the go:
#   --------------  What does it do? ------------------
# Creates a SSH Tunnel to Proxy through and updates your xchat config
# so that the user does not need to muddle with program settings

import signal
import shutil
import sys
import subprocess
import argparse
import re
import time

proxyhost = "myhost.company.com"
proxyuser = "sfeole"
localusername = "sfeole"

def get_net_info(interface):
    """
    Obtains your IPv4 address
    """

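    # Parse the address out of ifconfig's "inet addr:x.x.x.x" line; the
    # [5:] slice strips the "addr:" prefix. On an unconfigured interface
    # the token parsed here appears to be "BROADCAST" instead, whose [5:]
    # slice is "CAST" -- hence the sanity check below.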
    myaddress = subprocess.getoutput("/sbin/ifconfig %s" % interface)\
                .split("\n")[1].split()[1][5:]
    if myaddress == "CAST":
        print ("Please Confirm that your Network Device is Configured")
        sys.exit()
    else:
        return (myaddress)

def configure_xchat_config(Proxy_ipaddress, Proxy_port):
    """
    Reads your current xchat.conf and creates a new one in /tmp
    """

    in_file = open("/home/%s/.xchat2/xchat.conf" % localusername, "r")
    output_file = open("/tmp/xchat.conf", "w")
    for line in in_file.readlines():
        line = re.sub(r'net_proxy_host.+', 'net_proxy_host = %s'
                 % Proxy_ipaddress, line)
        line = re.sub(r'net_proxy_port.+', 'net_proxy_port = %s'
                 % Proxy_port, line)
        output_file.write(line)
    output_file.close()
    in_file.close()
    shutil.copy("/tmp/xchat.conf", "/home/%s/.xchat2/xchat.conf"
                 % localusername)

def ssh_proxy(ProxyAddress, ProxyPort, ProxyUser, ProxyHost):
    """
    Create SSH Tunnel and Launch Xchat
    """

    ssh_address = "%s:%i" % (ProxyAddress, ProxyPort)
    user_string = "%s@%s" % (ProxyUser, ProxyHost)
    ssh_open = subprocess.Popen(["/usr/bin/ssh", "-CND", ssh_address,
                 user_string], stdout=subprocess.PIPE, stdin=subprocess.PIPE)

    time.sleep(1)
    print ("")
    print ("Kill this tunnel with Ctrl-C")
    time.sleep(2)
    subprocess.call("xchat")
    stat = ssh_open.poll()
    while stat is None:
        stat = ssh_open.poll()

def main():
    """
    Core Code
    """

    parser = argparse.ArgumentParser()
    parser.add_argument('-i', '--interface',
                        help="Select the interface you wish to use",
                        choices=['eth0', 'wlan0'],
                        required=True)
    parser.add_argument('-p', '--port',
                        help="Select the internal port you wish to bind to",
                        required=True, type=int)
    args = parser.parse_args()

    proxyip = (get_net_info("%s" % args.interface))
    configure_xchat_config(proxyip, args.port)
    print (proxyip, args.port, proxyuser, proxyhost)

    ssh_proxy(proxyip, args.port, proxyuser, proxyhost)

if __name__ == "__main__":
    sys.exit(main())

Refer to the launchpad address above for more info.


Read more
Prakash

From the article:


“You’d be a fool to use anything but Linux.” :)

Most Linux people know that Google uses Linux on its desktops as well as its servers. Some know that Ubuntu Linux is Google’s desktop of choice and that it’s called Goobuntu. But almost no one outside of Google knew exactly what was in it or what roles Ubuntu Linux plays on Google’s campus, until now.

Read More.


Read more
Prakash

Apple — one of the most closed companies in the world — is actually using a lot of open source software. Licensing information in the Apple iPhone proves this. Go to the legal section on the iPhone and it cites Linux kernel developer Ted Ts’o for his code. SUSE Linux is there, too.

Zemlin made the point that Apple has hundreds of billions of dollars in cash, which is enough to buy HP, Intel and Dell combined. Instead, Apple purchased the copyright to the Common Unix Printing System (CUPS), which is now on every Linux and Apple system.

The list of companies using Linux does not stop at Apple. Microsoft, which once equated open source with communism, is now a top contributor to the Linux Kernel project. And VMware is getting on the bandwagon.

Read More.


Read more

Most of the components of the 64-bit ARM toolchain have been released, so I've put together some details on building a cross compiler for aarch64. At present, this is only binutils & compiler (ie, no libc), so it is probably not useful for applications. However, I have a 64-bit ARM kernel building without any trouble.

Update: looking for an easy way to install a cross-compiler on Ubuntu or debian? Check out aarch64 cross compiler packages for Ubuntu & Debian.

pre-built toolchain

If you're simply looking to download a cross compiler, here's one I've built earlier: aarch64-cross.tar.gz (.tar.gz, 85MB). It's built for an x86_64 build machine, using Ubuntu 12.04 LTS, but should work with other distributions too.

The toolchain is configured for paths in /opt/cross/. To install it, do a:

[jk@pecola ~]$ sudo mkdir /opt/cross
[jk@pecola ~]$ sudo chown $USER /opt/cross
[jk@pecola ~]$ tar -C /opt/cross/ -xf aarch64-x86_64-cross.tar.gz

If you'd like to build your own, here's how:

initial setup

We're going to be building in ~/build/arm64-toolchain/, and installing into /opt/cross/aarch64/. If you'd prefer to use other paths, simply change these in the commands below.

[jk@pecola ~]$ mkdir -p ~/build/arm64-toolchain/
[jk@pecola ~]$ cd ~/build/arm64-toolchain/
[jk@pecola ~]$ prefix=/opt/cross/aarch64/

We'll also need a few packages for the build:

[jk@pecola ~]$ sudo apt-get install bison flex libmpfr-dev libmpc-dev texinfo

binutils

I have a git repository with a recent version of ARM's aarch64 support, plus a few minor updates at git://kernel.ubuntu.com/jk/arm64/binutils.git (or browse the gitweb view). To build:

Update: arm64 support has been merged into upstream binutils, so you can now use the official git repository. The commit 02b16151 builds successfully for me.

[jk@pecola arm64-toolchain]$ git clone git://gcc.gnu.org/git/binutils.git
[jk@pecola arm64-toolchain]$ cd binutils
[jk@pecola binutils]$ ./configure --prefix=$prefix --target=aarch64-none-linux
[jk@pecola binutils]$ make
[jk@pecola binutils]$ make install
[jk@pecola binutils]$ cd ..

kernel headers

Next up, the kernel headers. I'm using Catalin Marinas' kernel tree on kernel.org here. We don't need to do a full build (we don't have a compiler yet..), just the headers_install target.

[jk@pecola arm64-toolchain]$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64.git
[jk@pecola arm64-toolchain]$ cd linux-aarch64
[jk@pecola linux-aarch64]$ git reset --hard b6fe1645
[jk@pecola linux-aarch64]$ make ARCH=arm64 INSTALL_HDR_PATH=$prefix headers_install
[jk@pecola linux-aarch64]$ cd ..

gcc

And now we should have things ready for the compiler build. I have a git tree up at git://kernel.ubuntu.com/jk/arm64/gcc.git (gitweb), but this is just the aarch64 branch of upstream gcc.

[jk@pecola arm64-toolchain]$ git clone git://kernel.ubuntu.com/jk/arm64/gcc.git
[jk@pecola arm64-toolchain]$ cd gcc/aarch64-branch/
[jk@pecola aarch64-branch]$ git reset --hard d6a1e14b
[jk@pecola aarch64-branch]$ ./configure --prefix=$prefix \
    --target=aarch64-none-linux --enable-languages=c \
    --disable-threads --disable-shared --disable-libmudflap \
    --disable-libssp --disable-libgomp --disable-libquadmath
[jk@pecola aarch64-branch]$ make
[jk@pecola aarch64-branch]$ make install
[jk@pecola aarch64-branch]$ cd ../..

That's it! You should have a working compiler for arm64 kernels in /opt/cross/aarch64.
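As a quick sanity check (paths as configured above; since there's no libc yet, compile with -c rather than linking a full program):

[jk@pecola ~]$ /opt/cross/aarch64/bin/aarch64-none-linux-gcc --version
[jk@pecola ~]$ echo 'int main(void) { return 0; }' > test.c
[jk@pecola ~]$ /opt/cross/aarch64/bin/aarch64-none-linux-gcc -c -o test.o test.c

And for the toolchain's intended use, a kernel cross-build looks something like this, using the tree from the kernel headers step:

[jk@pecola linux-aarch64]$ make ARCH=arm64 CROSS_COMPILE=/opt/cross/aarch64/bin/aarch64-none-linux- defconfig
[jk@pecola linux-aarch64]$ make ARCH=arm64 CROSS_COMPILE=/opt/cross/aarch64/bin/aarch64-none-linux-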

Read more
Steve George

Someone recently remarked to me that you can think of hardware as software that’s developed really slowly. While the software space has been going wild over cloud computing it’s been pretty quiet on the hardware side of the equation. But, that’s going to change as we see a new class of server hardware that helps businesses take advantage of the power and density savings possible through new CPU architectures and software stacks.

As an illustration, IDC reported on the server market recently, and it shows the start of the next wave of change. As you'd expect, the general server market is pretty poor, growing at just 2.7%. But blade servers, which are commonly used for Web workloads, are growing at 7%. Finally, the hyperdense form-factor is growing at 29% – an astounding amount.

In some ways the drivers for this change are just the continuation of a long-running story where everything is (has?) moved into a Web infrastructure set-up which enables the horizontal scaling of services. Implicitly this favours buying a lot of cheaper systems and building in redundancy at the software level. But the Cloud accelerates this trend further since it’s stateless and you no longer care about the specifics of the hardware layer in the same way.

The challenge for infrastructure managers is that continually adding more servers means you're incurring ongoing costs for electricity, space and management. So anything that can drive better performance per watt in a denser arrangement is interesting. As you can see from the diagram below, the expected growth in this space is really significant.

At a CPU architecture level ARM chips have been getting more powerful and this year they’re going to enter into the mix for servers. The first reason for this is that they’re relatively low-power which means lower running costs. Since they’re low power they also give off less heat so another advantage is they can be put into a ‘hyperdense’ arrangement that also saves money in terms of space. You’ll see systems this year from both Dell and HP (see Moonshot). It’s pretty astounding to think that the same chip that’s powering your phone could be powering Facebook!

If we’re truly going to get the benefit from the new hyperdense form-factor then the software layer will also need to reflect the capabilities of these systems. So for Ubuntu we’re continuing our work on ARM and recently announced the availability of 12.04 LTS as an ARM server – the first commercial Linux to come to the platform. We’re also exploring how these hardware systems unique strengths are expressed and how this impacts the software stack. For example if you’ve got a few hundred systems in a half-rack then the problem of managing those systems is far more significant – so service orchestration (such as Juju) is really critical. It’s exciting times in this space and a really interesting project.

If you’re interested in a quick summary of ARM server check out this Prezi by Victor Palau.


Tagged: arm chips, enterprise-it

Read more
Prakash

OpenStack has the potential to become as widely used in cloud computing as Linux in servers, according to Rackspace’s chief executive Lanham Napier.

Napier noted that OpenStack has more code contributors than Linux did when it started: it had 206 code contributors by its 84th week, whereas Linux took 615 weeks to get to that level. Similarly, OpenStack had 166 companies adding to it by its 84th week, whereas Linux reached 180 companies by its 828th week.

OpenStack is already well on the way to building that community, given the broad adoption the technology has seen since its launch two years ago. At the moment, more than 100 companies have put OpenStack into production, including AT&T, Korea Telecom, the San Diego Supercomputer Centre, HP and the US Department of Energy’s Argonne National Laboratory.

Read More.


Read more
Prakash

The Chinese, who also developed the Loongson MIPS CPU, were looking to order at least ten million graphics processors. The problem is that the GeForce / Quadro driver from NVIDIA is only available for Linux x86 and x86_64 architectures, not MIPS or even ARM (only the Tegra driver is for ARMv7). NVIDIA refused to release the source-code to their high-performance feature-complete cross-platform driver to the Chinese, and it would cost them millions of dollars to port the code-base, so they went to AMD for their GPU order.

The order was for at least ten million GPUs, which, given the current low-end parts, would value the order at 250 to 350 million dollars (USD).

Read More.

A few days back, Linux founder Linus Torvalds was unhappy with NVIDIA because their Linux drivers are binary only. In the talk he called NVIDIA “the single worst company we have ever dealt with”, and he said a few other nice words too. :) Hope NVIDIA open sources its drivers.


Read more
Prakash

After Dell and HP, MiTAC has now announced that they are doing an ARM server.

  • 1.6 GHz
  • 4 cores
  • Ubuntu 12.04
  • 4U rack server with 64 CPUs and 256 cores
  • 32-bit processor

Read More.


Read more
Prakash

In the battle of the desktop operating systems (OS), there are only three dominant players left – Windows, Mac and Linux. At some point, Windows was cast as the platform for the common man, Mac as the one for the artist, and Linux as the geek’s playground.

Linux found favour in powering servers, supercomputers, large businesses and even stock exchanges. Google even used it as the platform to build its popular Android mobile operating system. But in the desktop and notebook space, it still failed to gain traction.

There’s an image associated with Linux that can be frightening for a normal user, invoking pictures of command lines and terminal windows. But over the past 20 years, some massive steps have been taken to make the OS more accessible.

Read More.

The same was also published on Economic Times.


Read more
Michael Hall

My big focus during the week of UDS will be on improving our application developer story, tools and services. Ubuntu 12.04 is already an excellent platform for app developers; now we need to work on spreading awareness of what we offer and polishing any rough edges we find. Below is the list of sessions I'll be leading or participating in that focus on these tasks.

And if you’re curious about what else I’ll be up to, my full schedule for the week can be found here: http://summit.ubuntu.com/uds-q/participant/mhall119/

Read more