Canonical Voices

Posts tagged with 'linux'

John Pugh

Oh boy. June stormed in and the May installment is late! Not much changed at the top. The Northern Hemisphere spring storms keep Stormcloud at the top with Fluendo DVD staying put at the number two spot. Steam continues its top of the chart spree on the Free Top 10.

Want to develop for the new Phone and Tablet OS, Ubuntu Touch? Be sure to check out the “Go Mobile” site for details.

Top 10 paid apps

  1. Stormcloud
  2. Fluendo DVD Player
  3. Filebot
  4. Quick ‘n Easy Web Builder
  5. MC-Launcher
  6. Mini Minecraft Launcher
  7. Braid
  8. UberWriter
  9. Drawers
  10. Bastion

Top 10 free apps

  1. Steam
  2. Motorbike
  3. Master PDF Editor
  4. Youtube to MP3
  5. Screencloud
  6. Nitro
  7. Splashtop Remote Desktop App for Linux
  8. CrossOver (Trial)
  9. Plex Media Server
  10. IntelliJ IDEA 12 Community Edition

Would you like to see your app featured in this list and on millions of users’ computers? It’s a lot easier than you think:


  • The lists of top 10 app downloads include only those applications submitted through My Apps on the Ubuntu App Developer Site. For more information about usage of other applications in the Ubuntu archive, check out the Ubuntu Popularity Contest statistics.
  • The top 10 free apps list contains gratis applications that are distributed under different types of licenses, some of which may not be open source. For detailed license information, please check each application’s description in the Ubuntu Software Center.

Follow Ubuntu App Development on:

Social Media Icons by Paul Robert Lloyd

Read more

In my daily work at Canonical I use VMs quite often to test Ubuntu Web Apps and new browser releases in different environments. My main computer is a MacBook Pro 15" (mid 2012), currently running Ubuntu 13.04. It used rEFIt to boot alongside the default OS X system. Of the 500 GB, just 75 were dedicated to Linux, so I was forced to delete VMs or back them up to an external hard disk in order to reuse the space. Finally, I decided to buy an SSD, which are quite cheap nowadays.

The SSD I bought was a Samsung 840 250 GB (not the Pro version, which has some additional features). It cost €179.

These are the steps I followed to move my Ubuntu setup from the HD to the SSD.

  1. Burn a DVD with Ubuntu. I reused an old 12.04 disk, the -mac version.
  2. Boot Ubuntu from the DVD.
  3. Attach an external USB drive. This is used to (backup and) copy the partition from the HD to the SSD.
  4. Run gparted as root to copy the Linux partition (in my case, an ext4).

To copy the partition, I resized the USB drive's partitions to leave room for the HD's 75 GB Linux partition. Then, in gparted, I selected the partition on the HD, chose "Copy" from the partition menu and then "Paste"d it into the spare space on the USB drive. This step took over 40 minutes. At this point, the Linux partition is available on the external disk.

After that, I switched off the computer and replaced the HD with the SSD. Follow the link to see how. Now, the second part: moving and setting up the system on the solid-state disk. The SSD was blank, so it needed proper configuration.

  1. Boot Ubuntu from the DVD.
  2. Attach the external USB drive.
  3. Run gparted.
  4. Create a partition table on the SSD. I used the GUID Partition Table (gpt) format, the same one the original HD uses.
  5. Partition the SSD.
    • Create an EFI partition. The first partition has FAT32 format, 200 MB in size, "EFI" as label and "grub_boot" flag. 
    • Create the swap partition. At the end of the disk, I created a 10 GB "linux-swap" partition.
    • The rest of the disk will be available for the main Linux partition.
  6. Copy the Linux partition to the SSD. In gparted, select the Linux partition on the USB drive, copy it and paste it into the SSD. This takes over 45 minutes.
  7. Resize the Linux partition to fill the entire disk and flag it as "boot".

Congratulations! The Linux partition is now copied bit-for-bit onto the SSD. However, it cannot boot yet, for two reasons: rEFIt (the bootloader) is not installed, and the Linux partitions are not properly configured. For the latter, we need to modify the file /etc/fstab on the Linux partition (SSD).

And this is the final, third step: setting up the system to boot properly. On my SSD, /dev/sda1 is the EFI partition, /dev/sda2 the Linux partition and /dev/sda3 the swap partition. You may have a different setup. /etc/fstab must be changed to reflect these addresses. From a terminal:

$ sudo mkdir /media/root 

$ sudo mount /dev/sda2 /media/root

The Linux partition is accessible in the directory /media/root/. The file /etc/fstab of the Linux partition can now be edited at /media/root/etc/fstab:

$ gksu gedit /media/root/etc/fstab

This is the original content:

# /etc/fstab: static file system information.
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda5 during installation
UUID=24acabd4-2fcb-49aa-9fb3-ce9e657d4465 / ext4 errors=remount-ro,user_xattr 0 1
# swap was on /dev/sda7 during installation
UUID=c40532ab-5716-47bc-9b95-3672b834c6a2 none swap sw 0 0

Here are the modifications:

# / was on /dev/sda5 during installation
/dev/sda2 / ext4 discard,errors=remount-ro,user_xattr 0 1
# swap was on /dev/sda7 during installation
/dev/sda3 none swap sw 0 0

The blkid command can be run as root to find out the UUID strings of the devices and use them instead.
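As a sketch of that alternative, the same two entries could keep the more robust UUID= form (the UUIDs below are the ones from the original fstab shown above; a bit-for-bit copied partition keeps its UUID, but the newly created swap partition gets a fresh one, so check with blkid):

```
# sudo blkid /dev/sda2 /dev/sda3   <- prints the UUIDs to use
UUID=24acabd4-2fcb-49aa-9fb3-ce9e657d4465 / ext4 discard,errors=remount-ro,user_xattr 0 1
UUID=c40532ab-5716-47bc-9b95-3672b834c6a2 none swap sw 0 0
```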

Then, I downloaded the compiled version of rEFInd, which is a fork of the rEFIt bootloader and able to run from Linux, Mac and Windows.

$ cd /media/root/root/

$ sudo wget

This is almost done. We now "log" into the Linux partition (SSD) to apply the changes and set up the bootloaders. To do that successfully, /proc and /dev from the live DVD are bind-mounted into the Linux partition, and the EFI partition is mounted too (this is needed by rEFInd).

$ sudo mkdir /media/root/boot/efi

$ sudo mount -B /proc /media/root/proc

$ sudo mount -B /dev /media/root/dev

$ sudo mount /dev/sda1 /media/root/boot/efi

$ sudo chroot /media/root/

Now we're "logged" into the Linux partition as the root user. This is when GRUB and rEFInd are installed on the SSD so that the Mac can boot Linux.

# mount -t sysfs sysfs /sys

# cd /root/

# unzip

# cd refind-bin-0.6.11

# ./ --esp --alldrivers

# grub-install /dev/sda

# update-grub

# exit

And that's all. The system should be able to boot the Linux partition from the SSD.

Some caveats and open questions:

  • I installed rEFInd first, without GRUB, and the system wasn't able to boot.
  • For some reason, rEFInd on my system is much slower than rEFIt. It takes around 40 seconds to show up.
  • After installing GRUB, I'm not able to mount the EFI partition. It now shows an unknown partition format.
  • Does GRUB really need rEFInd?


Read more

While it is not certain whether Google is going to offer Android or ChromeOS for PCs, Intel is already working on making the $200 Android PC a reality to boost sagging PC sales.

So far, the notebook market is dominated by two players, Windows and OS X, but there’s an operating system that could drop into this mix and be highly disruptive — Android.

There’s been a lot of discussion bouncing around the tech blogosphere about Intel’s plans to get all disruptive and start supporting Android on devices that will cost in the region of $200.

While Microsoft might not be happy about being sidelined by a company that was once one of its biggest supporters, this is exactly what the PC industry needs.

Think this is a huge leap? It isn’t. Some of Intel’s Atom processors are already compatible with Android 4.2 Jelly Bean.

Read More.




Read more

Ubuntu 13.04 is here. Torrent is the preferred method for me.

Ubuntu 13.04
Torrent Links Direct Downloads
Ubuntu Desktop 13.04 64-Bit Torrent Main Server
Ubuntu Desktop 13.04 32-Bit Torrent Main Server
Ubuntu Server 13.04 64-Bit Torrent Main Server
Ubuntu Server 13.04 32-Bit Torrent Main Server

Other releases. (Ubuntu Desktop and Server) (Ubuntu Cloud Server) (Ubuntu Netboot) (Ubuntu Core) (Edubuntu DVD) (Kubuntu) (Lubuntu) (Ubuntu Studio) (Ubuntu-GNOME) (UbuntuKylin) (Xubuntu)

As always Have fun :)

Ubuntu Unleashed 2012 Edition: Covering 11.10 and 12.04 (7th Edition)

Read more


Black Duck and North Bridge announce the results of the seventh annual Future of Open Source Survey. The 2013 survey represents the insights of more than 800 respondents – the largest in the survey’s history – from both non-vendor and vendor communities.

Read more

Netflix, the popular video-streaming service that takes up a third of all internet traffic during peak hours, isn’t just the single largest internet traffic service. Netflix, without doubt, is also the largest pure cloud service.

Netflix, with more than a billion video delivery instances per month, is the largest cloud application in the world.

At the Linux Foundation’s Linux Collaboration Summit in San Francisco, California, Adrian Cockcroft, director of architecture for Netflix’s cloud systems team, after first thanking everyone “for building the internet so we can fill it with movies”, said that Netflix’s Linux, FreeBSD, and open-source based services are “cloud native”.

By this, Cockcroft meant that even with more than a billion video instances delivered every month over the internet, “there is no datacenter behind Netflix”. Instead, Netflix, which has been using Amazon Web Services since 2009 for some of its services, moved its entire technology infrastructure to AWS in November 2012.

Read More.

Read more

Linux Containers

For general-purpose virtual machines (so leaving aside ScummVM and the like) I always got by with VirtualBox. Although it now belongs to Oracle and I don't look upon that kindly, it has always worked quite well (as long as you don't ask anything too crazy of it), and it's a good way to have a Windows running even if you spend the whole day in Linux (for example, to be able to file invoices with AFIP).

Even when I worked at Ericsson, where they forced me to use Windows, I had a VMware VM with Ubuntu installed (a Gutsy, or a Hardy, I think... so much water under the bridge!) that came in handy whenever I had to do serious network-level work, or for that matter anything fun.

But I had never found a nice way to have Linux virtual machines under Linux. And by "nice" I mean something that works well and is relatively easy to set up.

And this is where LXC comes in.

Linux container

Although LXC is not strictly speaking a "virtual machine" (it's more of a "virtual environment"), it still lets you run a Linux whose configuration, installed packages, and whatever you might break stay separate from the machine you're on.

What can it be used for? In my case I use it a lot at work, since my development machine runs Ubuntu Quantal but the systems running on the servers are on Precise or Lucid (so I have a container for each one). I also plan to use containers to test installations from scratch (for example, when building a .deb for the first time, trying to install it on a clean machine).

How do you create and use a container? After installing the necessary packages (sudo apt-get install lxc libvirt-bin), creating a container is quite simple (from here on, replace "mi-lxc" everywhere with whatever name you want for your container):

    sudo lxc-create -t ubuntu -n mi-lxc -- -r precise -a i386 -b $USER

Let's break that down. -t is the template to use, and -n is the name we're giving the container. Then comes "--", which means the rest are options for the template itself: in this case, use the Precise release, the i386 architecture, and my own user.

The wonderful thing about this is that the container has my user inside, because the home directory is shared! And with it all my bash, vim, ssh, gnupg, etc. configurations, so "doing things" inside the lxc is direct; there's nothing to set up (but, at the same time, we can "break" the home from inside the container, so beware).

To start the container we can do

    sudo lxc-start -n mi-lxc

This leaves us at a login prompt, where our own username and password are enough. Once inside, we use the container as if it were a brand-new machine.

All very nice, but I still like to apply a few tweaks that make it even more direct and simple to use. These system-level configurations are basically so that we can get into the container more easily, and run graphical applications from inside it.

To get in more easily, we need Avahi configured. Beyond installing it (sudo apt-get update; sudo apt-get install avahi-daemon), there is one detail to tweak: inside the lxc, open the file /etc/avahi/avahi-daemon.conf and raise rlimit-nproc considerably (for example, from 3 to 300).

With this we're ready to get into the container easily. We can try it from another terminal:

    ssh mi-lxc.local

Nice, isn't it? But it's also good to be able to forward X events, so we can run graphical applications. For that we have to change the following on the host (that is, not in the container but on the "real" machine): edit /var/lib/lxc/mi-lxc/fstab and add the line:

    /tmp/.X11-unix tmp/.X11-unix none bind

In the container, we have to make sure that /tmp/.X11-unix exists, and restart the container after these configurations.
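As a minimal sketch (run inside the container, as root or via sudo), ensuring the socket directory exists could look like:

```shell
# create the X11 socket directory if it does not already exist;
# X normally gives this directory mode 1777 (world-writable, sticky bit)
mkdir -p /tmp/.X11-unix
```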

We also need to set DISPLAY. I mixed this into my .bashrc, together with something to change the prompt colour when I log in via ssh (even using different colours for different containers). What I'm using is:

    if [ `hostname` = "mi-lxc" ]; then
        export PS1='\[\e[1;34m\]\u@\h:\w${text}$\[\e[m\] ';
        export DISPLAY=:0
    fi

To finish, here are the three commands I use most in day-to-day work with containers, beyond the installation itself: start the container (note the -d, which starts it as a daemon, since we connect via ssh anyway); connect to it (note the -A, which forwards the authentication agent connection); and finally stop the container:

    sudo lxc-start -n mi-lxc -d
    ssh -A mi-lxc.local
    sudo lxc-stop -n mi-lxc

Enjoy.

Read more

With Windows 8 pushing a “touch-first” desktop interface—Microsoft’s words, not ours—and with Valve’s Steam on Linux beginning to bring much-needed games and popular attention to the oft-overlooked operating system, there’s never been a better time to take Linux out for a test drive.

Dipping your toes into the penguin-filled waters of the most popular open-source ecosystem is easy, and you don’t have to commit to switching outright to Linux. You can install it alongside your current Windows system, or even try it without installing anything at all.

Ubuntu is the most popular Linux distribution for desktop and laptop Linux users, so we’ll focus on Ubuntu throughout this guide. For the most part, Ubuntu just plain works. It sports a subtle interface that stays out of your way. It enjoys strong support from software developers (including Valve, since Steam on Linux only officially supports Ubuntu). And you can find tons of information online if you run into problems.

Read more.

Read more

The inaugural online UDS (or vUDS as it’s becoming known) is underway. This brings with it a number of new challenges in terms of running a good session. Having sat in on a few sessions yesterday and been the session lead for several sessions at physical UDS’s going back nearly two years now, I thought I’d jot down a few tips on how to run a good session.


Prepare

Regardless of whether the session is physical or virtual, it's always important to prepare. The purpose of a UDS session is to get feedback on some proposed plan of work (even if it is extremely nebulous at the time of the session). Past experience shows that sessions are always more productive when most of the plan is already fleshed out before the session, so that the session basically functions as a review/comments meeting. This depends on your particular case, though, since the thing you are planning may not be possible to flesh out in much detail without feedback. I personally find this is rarely the case.

Be punctual

UDS runs on a tight schedule, at least in the physical version, and I don't see any good reason why this should change for vUDS. Punctuality is therefore important not just as good manners but from a practical point of view. You need time to compose yourself, find your notes and make sure everything is set up. For a physical UDS this meant checking that microphones were working and projectors were projecting. For a vUDS, in my brief experience, it means making sure everyone who needs to be is invited into the hangout, that the etherpad is up, and that the video feed is working on the session page.


Get help

As the session lead it is your responsibility to run a good session, but it will be impossible for you to perform all the tasks required to achieve this on your own. Someone needs to be taking notes on the Etherpad and someone needs to be monitoring IRC. You should be looking out for questions yourself too, but since you may be concentrating on conveying information and answering other questions, you do need help with this.

Avoid going off track

Time is limited in a UDS session and you may have a lot of points to get through. Be wary of getting distracted from the point of the session and discussing things that may not be entirely relevant. Don’t be afraid to cut people short – if the question is important to them then you can follow up offline later.

Manage threads of communication

This one is quite vUDS-specific, but especially now that audiovisual participation is limited, it is important that all of the conversation takes place in one spot, particularly for people who are catching up with the video streams later on. Don't allow a parallel conversation to develop on IRC if possible. If someone asks a question in IRC, repeat it to the video audience and answer it in the hangout, not on IRC. If someone is talking a lot in IRC and even answering questions, do invite them into the hangout so that what they're saying can be recorded. It may not be possible to avoid this entirely, but as session lead you need to do your best to mitigate it.

Follow up

Not so much a tip for running a good session, but for getting the best from a good session. Remember to read the notes from the session and rewatch the video so that you can use the feedback to adapt your plan and find places to follow up.

That’s all there is to say, I really hope this first virtual UDS goes very well and that sessions are productive for everyone involved.

Read more

Hackable Lego Robot Runs Linux

The Lego Mindstorms EV3 is the first major revamp of the Lego Group’s programmable robot kit since 2006, and the first to run embedded Linux.

Unveiled at the CES Show in Las Vegas yesterday, with the first public demos starting today at the Kids Play Summit at the Venetian Hotel, the $350 robot is built around an upgraded “Intelligent Brick” computer. Lego swapped out the previous microcontroller for a 300MHz ARM9 processor capable of running new Linux-based firmware. As a result, the kids-oriented Mindstorms EV3 offers far more programmability than the NXT series, which was last updated in 2009, says Lego.

Read More.

Read more

The team behind the Samba file, print, and authentication server suite for Microsoft Windows clients announced the release of Samba version 4 yesterday. This version includes significant new capabilities that offer an open source replacement for many enterprise infrastructure roles currently delivered exclusively by Microsoft software, including acting as a domain controller, providing SMB2.1 protocol support, delivering clustering, and offering a virtual filesystem (VFS) interface. It comes with Coverity security certification and easy upgrade scripts. The release notes include details of all changes.

Notably, this includes the first open source implementation of Microsoft’s Active Directory protocols; Samba previously only offered Windows NT domain controller functions. According to the press release, “Samba 4.0 provides everything needed to serve as an Active Directory Compatible Domain Controller for all versions of Microsoft Windows clients currently supported by Microsoft, including the recently released Windows 8.”

Samba 4 can join existing Active Directory domains and provides all necessary function to host a domain that can be joined by Microsoft Active Directory servers. It provides all the services needed by Microsoft Exchange, as well as opening up the possibility of fully open source alternatives to Exchange such as the OpenChange project.

Read More.

Read more

While ARM is gaining a lot of momentum, the challenge with ARM until now was that every vendor's architecture is very different and requires a separate kernel and entire OS stack.

With Linux Kernel 3.7, this has changed for the better.

ARM’s problem was that, unlike the x86 architecture, where one Linux kernel could run on almost any PC or server, almost every ARM system required its own customized Linux kernel. Now with 3.7, ARM architectures can use one single vanilla Linux kernel while keeping their special device sauce in device trees.

The end result is that ARM developers will be able to boot and run Linux on their devices and then worry about getting all the extras to work. This will save them, and the Linux kernel developers, a great deal of time and trouble.
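For illustration, the "special device sauce" lives in device tree source files that describe a board's hardware to the kernel. A fragment might look like this (hypothetical board and addresses; the compatible strings follow the usual "vendor,device" convention):

```dts
/dts-v1/;
/ {
    model = "Vendor Example Board";
    compatible = "vendor,example-board";

    /* a UART described by its compatible string and register window */
    serial@101f0000 {
        compatible = "arm,pl011";
        reg = <0x101f0000 0x1000>;
    };
};
```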

Just as good for those ARM architects and programmers who are working on high-end, 64-bit ARM systems, Linux now supports 64-bit ARM processors. 64-bit ARM CPUs won't ship in commercial quantities until 2013. When they do arrive, though, programmers eager to try 64-bit ARM processors on servers will have Linux ready for them.

Read More.

Read more

From PC World.

Ubuntu is a widely popular open-source Linux distribution with eight years of maturity under its belt, and more than 20 million users. Of the roughly 5 percent of desktop OSs accounted for by Linux, at least one survey suggests that about half are Ubuntu. (Windows, meanwhile, accounts for about 84 percent.)

The timing of this latest Ubuntu release couldn’t be better for Windows users faced with the paradigm-busting Windows 8 and the big decision of whether to take the plunge.

Initial uptake of Windows 8 has been unenthusiastic, according to reports, and a full 80 percent of businesses will never adopt it, Gartner predicts. As a result, Microsoft’s big gamble may be desktop Linux’s big opportunity.

So, now that Canonical has thrown down the gauntlet, let’s take a closer look at Ubuntu 12.10 to see how it compares with Windows 8 from a business user’s perspective.


                                Windows 8 Pro (x86)   Ubuntu 12.10
License fee                     $39 to $69 upgrade    Free
CPU architectures supported     x86, x86-64           x86, x86-64, ARM, PPC
Minimum RAM                     1GB, 2GB              512MB
Minimum hard-disk space         20GB                  5GB
Concurrent multiuser support    No                    Yes
Workspaces                      One                   Two or more
Virtualization                  Hyper-V               KVM
License                         Not applicable        GPL Open Source: Main; Non-GPL: Restricted
Productivity software included  None                  LibreOffice
Graphics tools included         No                    Yes

Read More.

Read more

Over €10 million (approximately £8 million or $12.8 million) has been saved by the city of Munich, thanks to its development and use of the city’s own Linux platform. The calculation of savings follows a question by the city council’s independent Free Voters (Freie Wähler) group.

Read More.

Urge your city to save tax money; it's your hard-earned money.


Read more

After installing Ubuntu 12.10, the first thing I wanted to do was to disable reverse scrolling: you scroll down and it scrolls up! This is also called natural scrolling by Apple. I don't know what is natural about it :) but it may be natural for Apple users.

Open a terminal and edit the .Xmodmap file in your home directory using any editor, for example:

 gedit .Xmodmap

Here you would see this:

pointer = 1 2 3 5 4  6 7 8 9 10 11 12

You would note that in the sequence of numbers, 5 and 4 are interchanged. Change it back to the normal sequence:

pointer = 1 2 3 4 5 6 7 8 9 10 11 12

Now you are done; logging out and back in should do the job.

If you have Ubuntu Tweak installed, just go to Tweaks -> Miscellaneous and you will see an option to toggle Natural Scrolling on/off.



Read more

I find that sometimes the Network Manager applet in Ubuntu can be a little temperamental (apologies to the maintainer, cyphermox, if he’s reading this – but such is the nature of software). Sometimes it won’t show available routers and if that’s the case then I’ve established a little workaround that I’m telling you about mainly because it involves a script I wrote that lives in a somewhat obscure place in Ubuntu.

Step one in the workaround is needed if you don’t know which networks are available in advance. If you’re sitting in your home then you’ll probably not need this step since most people know their router SSID. If you don’t then you can scan using:

nmcli dev wifi list

This is really reliable and always works if your WiFi hardware is working.

The second step is to use the SSID to create the connection using the script I wrote:

sudo /usr/share/checkbox/scripts/create_connection $SSID --security=wpa --key=$WPA_KEY

If the router doesn’t use any security (which nmcli dev wifi list will tell you) then you don’t need --security or --key. If the router doesn’t use WPA2 (maybe it uses WEP), then you’re out of luck, and deservedly so. Change the router’s security settings if you can!

Read more

The Ubuntu Developer Summit was held in Copenhagen last week, to discuss plans for the next six-month cycle of Ubuntu. This was the most productive UDS that I've been to — maybe it was the shorter four-day schedule, or the overlap with Linaro Connect, but it sure felt like a whirlwind of activity.

I thought I'd share some details about some of the sessions that cover areas I'm working on at the moment. In no particular order:

Improving cross-compilation

Blueprint: foundations-r-improve-cross-compilation

This plan is part of a multi-cycle effort to improve cross-compilation support in Ubuntu. Progress is generally going well; the consensus from the session was that the components are fairly close to complete, but we still need some work to pull those parts together into something usable.

So, this cycle we'll be working on getting that done. While we have a few bugfixes and infrastructure updates to do, one significant part of this cycle’s work will be to document the “best-practices” for cross builds in Ubuntu. This process will be heavily based on existing pages on the Linaro wiki. Because most of the support for cross-building is already done, the actual process for cross-building should be fairly straightforward, but it needs to be defined somewhere.

I'll post an update when we have a working draft on the Ubuntu wiki; stay tuned for details.

Rapid archive bringup for new hardware

Blueprint: foundations-r-rapid-archive-bringup

I'd really like for there to be a way to get an Ubuntu archive built “from scratch”, to enable custom toolchain/libc/other system components to be built and tested. This is typically useful when bringing up new hardware, or testing rebuilds with new compiler settings. Because we may be dealing with new hardware, doing this bootstrap through cross-compilation is something we'd like too.

Eventually, it would be great to have something as straightforward as the OpenEmbedded or OpenWRT build process to construct a repository with a core set of Ubuntu packages (say, minbase), for previously-unsupported hardware.

The archive bootstrap process isn't done often, and can require a large amount of manual intervention. At present, there's only a couple of folks who know how to get it working. The plan here is to document the bootstrap process in this cycle, so that others can replicate the process, and possibly improve the bits that are a little too janky for general consumption.

ARM64 / ARMv8 / aarch64 support

Blueprint: foundations-r-aarch64

This session is an update for progress on the support for ARMv8 processors in Ubuntu. While no general-purpose hardware exists at the moment, we want to have all the pieces ready for when we start seeing initial implementations. Because we don't have hardware yet, this work has to be done in a cross-build environment; another reason to keep on with the foundations-r-improve-cross-compilation plan!

So far, toolchain progress is going well, with initial cross toolchains available for quantal.

Although kernel support isn’t urgent at the moment, we’ll be building an initial kernel-headers package for aarch64. There's also a plan to get a page listing the aarch64-cross build status of core packages, so we'll know what is blocked for 64-bit ARM enablement.

We’ve also got a bunch of workitems for volunteers to fix cross-build issues as they arise. If you're interested, add a workitem in the blueprint, and keep an eye on it for updates.

Secure boot support in Ubuntu

Blueprint: foundations-r-secure-boot

This session covered progress of secure boot support as at the 12.10 Quantal Quetzal release, items that are planned for 13.04, and backports for 12.04.2.

As for 12.10, we’ve got the significant components of secure boot support into the release — the signed boot chain. The one part that hasn't hit 12.10 yet is the certificate management & update infrastructure, but that is planned to reach 12.10 by way of a not-too-distant-future update.

The foundations team also mentioned that they were starting the 12.04.2 backport right after UDS, which will bring secure boot support to our current “Long Term Support” (LTS) release. Since the LTS release is often preferred in Ubuntu preinstall situations, this may be used as a base for hardware enablement on secure boot machines. Combined with the certificate management tools (described at sbkeysync & maintaining uefi key databases), and the requirement for “custom mode” in general-purpose hardware, this will allow for user-defined trust configuration in an LTS release.

As for 13.04, we're planning to update the shim package to a more recent version, which will have Matthew Garrett's work on the Machine Owner Key plan from SuSE.

We're also planning to figure out support for signed kernel modules, for users who wish to verify all kernel-level code. Of course, this will mean some changes to things like DKMS, which run custom module builds outside of the normal Ubuntu packages.

Netboot with secure boot is still in progress, and will require some fixes to GRUB2.

And finally, the sbsigntools codebase could do with some new testcases, particularly for the PE/COFF parsing code. If you're interested in contributing, please contact me at

Read more

Ubuntu 12.10 is here. With this release there is no CD image, only a DVD image, which is 800 MB in size. Torrent is the preferred method for me.

Ubuntu 12.10
Torrent Links Direct Downloads
Ubuntu Desktop 64-Bit Edition Torrent Main Server
Ubuntu Desktop 32-Bit Edition Torrent Main Server
Ubuntu Server Edition 64-Bit Torrent Main Server
Ubuntu Server Edition 32-Bit Torrent Main Server

Have fun :)

Ubuntu Unleashed 2012 Edition: Covering 11.10 and 12.04 (7th Edition)

Read more

So I use Xchat daily and connect to a private IRC server to talk with my colleagues. I also have a BIP server in the office to record all of the IRC transcripts; this way I never miss any conversations regardless of the time of day. Because the BIP server is behind a firewall on the company's network, I can't access it from the outside. For the past year I've been working around this by connecting to my company's firewall via ssh and creating a SOCKS tunnel, then simply directing xchat to talk through my local SOCKS proxy.

To do this, open a terminal and issue:


Ex: ssh -CND

Starting ssh with -CND:

‘D’ specifies local “dynamic” application-level port forwarding: ssh allocates a socket listening on the given port on the local side, optionally bound to the specified bind_address. ‘C’ adds compression to the data stream, and ‘N’ is a safeguard which protects the user from executing remote commands. The address passed to -D is my local IPv4 address.

9999 is the local port I'm going to open and direct traffic through.
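Putting the pieces together, the full invocation looks something like this; the bind address, port, and firewall hostname are placeholder values:

```shell
# Open a compressed SOCKS tunnel on local port 9999 through the
# firewall host ("" and "firewall.example.com" are
# placeholders for your own local address and firewall hostname)
ssh -CND sfeole@firewall.example.com
```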

After the SSH tunnel is open I now need to launch xchat, navigate to Settings -> Preferences -> Network Setup, configure xchat to use my local IP address and local port (9999), then press OK and Reconnect.

I should now be able to connect to the IRC server behind the firewall. Usually I run through this process a few times a day, so it becomes somewhat of a tedious annoyance after a while.

Recently I finished a cool python3 script that does all of this in one quick command.

The script does the following:

1.) Identifies the IPv4 address of the interface device you specify

2.) Configures xchat.conf to use the new IPv4 address and port specified by the user

3.) Opens the SSH tunnel using the ssh -CND command from above

4.) Launches xchat and connects to your server (assuming you have it set to auto-connect)

To use it, simply run:

$./ -i <interface> -p <port>

ex: $./ -i wlan0 -p 9999

The user can select wlan0 or eth0 and, of course, their desired port. When you're done with the tunnel, simply issue <Ctrl-C> to kill it and voila!

#!/usr/bin/env python3
#Sean Feole 2012,
#xchat proxy wrapper, for those of you that are constantly on the go:
#   --------------  What does it do? ------------------
# Creates a SSH Tunnel to Proxy through and updates your xchat config
# so that the user does not need to muddle with program settings

import shutil
import sys
import subprocess
import argparse
import re

proxyhost = ""
proxyuser = "sfeole"
localusername = "sfeole"

def get_net_info(interface):
    """Obtain the IPv4 address of the given interface."""
    output = subprocess.getoutput("/sbin/ifconfig %s" % interface)
    match ="inet addr:(\S+)", output)
    if match is None:
        print("Please confirm that your network device is configured")
        sys.exit(1)
    return

def configure_xchat_config(Proxy_ipaddress, Proxy_port):
    """Read the current xchat.conf and write an updated copy via /tmp."""
    with open("/home/%s/.xchat2/xchat.conf" % localusername, "r") as in_file, \
            open("/tmp/xchat.conf", "w") as output_file:
        for line in in_file.readlines():
            line = re.sub(r'net_proxy_host.+', 'net_proxy_host = %s'
                          % Proxy_ipaddress, line)
            line = re.sub(r'net_proxy_port.+', 'net_proxy_port = %s'
                          % Proxy_port, line)
            output_file.write(line)
    shutil.copy("/tmp/xchat.conf", "/home/%s/.xchat2/xchat.conf"
                % localusername)

def ssh_proxy(ProxyAddress, ProxyPort, ProxyUser, ProxyHost):
    """Create the SSH tunnel and block until it is killed."""
    ssh_address = "%s:%i" % (ProxyAddress, ProxyPort)
    user_string = "%s@%s" % (ProxyUser, ProxyHost)
    ssh_open = subprocess.Popen(["/usr/bin/ssh", "-CND", ssh_address,
                 user_string], stdout=subprocess.PIPE, stdin=subprocess.PIPE)

    print("")
    print("Kill this tunnel with Ctrl-C")
    # Block until the ssh process exits (e.g. the user presses Ctrl-C)
    ssh_open.wait()

def main():
    """Core code."""
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', '--interface',
                        help="Select the interface you wish to use",
                        choices=['eth0', 'wlan0'],
                        required=True)
    parser.add_argument('-p', '--port',
                        help="Select the internal port you wish to bind to",
                        required=True, type=int)
    args = parser.parse_args()

    proxyip = (get_net_info("%s" % args.interface))
    configure_xchat_config(proxyip, args.port)
    print (proxyip, args.port, proxyuser, proxyhost)

    ssh_proxy(proxyip, args.port, proxyuser, proxyhost)

if __name__ == "__main__":
    main()
Refer to the launchpad address above for more info.

Read more

From the article:


“You’d be a fool to use anything but Linux.” :)

Most Linux people know that Google uses Linux on its desktops as well as its servers. Some know that Ubuntu Linux is Google’s desktop of choice and that it’s called Goobuntu. But almost no one outside of Google knew exactly what was in it or what roles Ubuntu Linux plays on Google’s campus, until now.

Read More.

Related posts:

  1. Microsoft, Google in open war in India Google and Microsoft, two of the world’s largest technology firms, are...
  2. Ubuntu 12.04 LTS is now available for Download Ubuntu 12.04 LTS is here. This is the first time...
  3. Ubuntu 11.10 is here Ubuntu 11.10, code named Oneiric Ocelot,  is now available. It...


Read more