Canonical Voices

Michael Hall

I’ve just finished the last day of a week-long sprint for Ubuntu application development. There were many people here: designers, SDK developers, QA folks and, most exciting for me, several of the Core Apps developers from our community!

I haven’t been in attendance at many conferences over the past couple of years, and without an in-person UDS I haven’t had a chance to meet up and hang out with anybody outside of my own local community. So this was a very nice treat for me personally to spend the week with such awesome and inspiring contributors.

It wasn’t a vacation though; sprints are lots of work, more work than UDS. All of us were jumping back and forth between high-density discussions on how to implement things and long stretches of heads-down work to get as much implemented as we could. It was intense, and now we’re all quite tired, but we all worked together well.

I was particularly pleased to see the community guys jumping right in and thriving in what could have very easily been an overwhelming event. Not only did they all accomplish a lot of work, fix a lot of bugs, and implement some new features, but they also gave invaluable feedback to the developers of the toolkit and tools. They never cease to amaze me with their talent and commitment.

It was a little bittersweet though, as this was also the last sprint with Jono at the head of the community team. As most of you know, Jono is leaving Canonical to join the XPrize foundation. It is an exciting opportunity to be sure, but his experience and his insights will be sorely missed by the rest of us. More importantly, though, he is a friend to so many of us, and while we are sad to see him leave, we wish him all the best and can’t wait to hear about the things he will be doing in the future.

Read more
Inayaili de León Persson

This post is part of the series ‘Making ubuntu.com responsive‘.

If you’re transitioning a fixed-width website into a responsive one with several time and resource constraints, updating all your content to be mobile-friendly will likely not be an option.

It’s important to understand what your constraints are and work within them. This is what makes a good designer great — you could even say it’s the definition of our jobs.

This was certainly our case: very early in the process of converting ubuntu.com into a responsive site we knew we wouldn’t be able to edit the existing content. We did, however, follow a few ‘content rules’, and this is something you can define within your projects too.

Evergreen content

We created the Ubuntu Insights site to hold dated content like case studies, news and white papers, and to keep a constant influx of fresh content into ubuntu.com and other Ubuntu sites. Not only did creating Ubuntu Insights allow us to keep ubuntu.com fresh, it gave us a place to move a lot of the detailed content that previously existed on the main site. We ended up with fewer and shorter pages, which addresses one of the challenges of converting a site to responsive without content updates: the pages becoming too long.

The latest iteration of Ubuntu Insights.

We’ve been working on this project for a few months now, and will be releasing its final update soon, which will include a dedicated press area.

No content or information architecture updates

Once you go back to work you completed some time ago, it’s natural to start seeing lots of things, big and small, that you want to improve. However, when the scope of a project is really tight (and which project isn’t?), it’s important not to fall into temptation.

Updating the structure and content of the website in preparation for making it responsive was not an option for us, as that would involve a fair number of people and time that were not at our disposal.

We decided to flag anything we’d want to look at again in the future, but moving things around was out of the question.

A couple of sections of the site were going to undergo changes that might impact content and information architecture, but those had been flagged at earlier stages, and we knew to only start reviewing and working on those later on in the responsive project.

No hidden content

A decision we made early on was that we weren’t going to hide any content from small screens.

We could still use common patterns like accordions and tabs to show content in a more digestible format, but all content should be available on small screens, just as it would be on larger screens.

Accordions chunk the content nicely at smaller screen sizes.

Future plans

Improving the content on ubuntu.com will be a gradual process. As new pieces of content are added and updated, we’re now making sure that content is optimised for a smaller-screen experience, being mindful of endless scrolling and keeping the message clear and focused on each page and section of the site.

Looking at mobile first has already pushed us towards simplifying our content. We’re trying to think about shorter, more carefully written text that relies less on images and animations. This includes paring down charts, cutting out text that is really only there to support images, and questioning whether any new fourth-level pages need to exist.

In the future, we’ll likely want to do a content revamp of the entire site, but that’s a huge project on its own and probably one that deserves its own series of posts.

We’d love to hear about your experiences and tips on improving content for a responsive iteration of your sites: add your thoughts in the comments section!

Read the next post in this series: “Making ubuntu.com responsive: making our grid responsive”

Read more

In a previous post I detailed everything I would like to have in the ideal test runner, with the idea of doing a bit of work on that at the last PyCamp.

And so it was. Martín Gaitán and I sat down for a good while and started looking at whether we could achieve some or all of what we want with nosetests.

Something we didn’t get into much is the integration of nosetests with frameworks that provide a reactor (or main loop). Searching around a bit I saw there is something to integrate it with Twisted, fairly simple, but I didn’t find anything for GTK or Qt... I don’t know whether that’s because it can’t be done, or because it’s automatic :p

So, let’s get down to business... what do we need to have the ideal test runner?

Keep testing

The components

The first step, obviously, is to install base nosetests. With this we get the first couple of points from my original post: it supports being given a directory and searching from there downwards, and when given a file it runs the tests in that file.

The first plugin we need to get where we want is "nose-progressive". This plugin changes quite a bit how the test results are presented. For example, it doesn’t need to show a line for every executed test, in a hierarchy, because on an error it gives us a small summary where we can see the info for the test that failed.

It also gives us a nice green OK if everything went well... and if there were problems we will see those summaries, in colour, with a lot of great info.

From the info it gives us in that summary we can also extract the path to run just that test, or the whole class, the whole file, etc. But we also have another plugin, "nosecomplete", which lets us type out the path to a test, autocompleting in a very nice way and discovering what there is to run.

To run more than one test, a subset matching a regex, we have "nose-selecttests", which lets us pass a --select-tests= option with whatever we want the test names to match.

Finally, there are several details. We can tell it to stop at the first failing test, and not continue, with -x. We can also ask it not to hide our prints or what we log, with --nocapture and --nologcapture. And we can ask it to give us a nice summary of which tests take the longest with --with-timer (we need the "nose-timer" plugin).

Setting up the environment

The first thing, obviously inside your project’s virtualenv, is to install nose and all its plugins:

    pip install nose nose-progressive nose-selecttests nose-timer nosecomplete

For the autocomplete plugin, since we’re really autocompleting from the shell, we have to hook the shell up to the nose plugin. It’s just a matter of copying and pasting something; the instructions are here.

Finally, we have to tell nose, when we run it, which plugins to use and how. Here it gets a bit mixed... some of the configuration can go in $HOME/.noserc...
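A minimal sketch of what that ~/.noserc could contain (the option names are my assumption, based on the plugins mentioned above; check each plugin’s documentation):

    [nosetests]
    # always use nose-progressive's output instead of the default dots
    with-progressive=1
    # don't hide prints or logging from the tests
    nocapture=1
    nologcapture=1
    # report which tests take the longest (needs nose-timer)
    with-timer=1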


..., but other options we have to specify when running it:

    nosetests --progressive-bar-filled=2 --progressive-function-color=1 --progressive-dim-color=5

This last bit could go into a bash alias, or simply be wrapped up in a 'test' script in the project (together with some pyflakes or pylint, etc.).
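For example, a small hypothetical 'test' script (the project name and lint target are placeholders) could look like this:

    #!/bin/sh
    # fail fast if the linter is unhappy, then run the suite with our preferred options
    pyflakes myproject || exit 1
    nosetests --progressive-bar-filled=2 --progressive-function-color=1 --progressive-dim-color=5 "$@"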

Anyway. The important thing is: Keep Testing :)

Read more
Inayaili de León Persson

Latest from the web team — May 2014

We’re fast approaching the summer, and the first few sunny days have already arrived in London. The web team cannot slow its pace though…

In the last few weeks we’ve worked on:

  • Responsive ubuntu.com: we’ve had a sprint to clean up our processes and CSS files after the big responsive release last month
  • we’ve updated our Jumpstart service to include the exciting new Orange Box Micro-cluster and Your cloud product pages in preparation for the OpenStack Developer Summit
  • Juju GUI: we’ve finished creating new personas
  • Ubuntu OpenStack Interoperability Lab: we’ve completed the report design
  • Ubuntu OpenStack Installer: the installer was presented at the OpenStack Developer Summit last week, and we’ve done iterations on the designs based on recent user research
  • Fenchurch: we’ve moved Fenchurch into a proper Django project, nearly completed the first phase of a new asset server with a new Juju charm, and set up a new Fenchurch instance for the new legal website
  • Ubuntu Insights: we’ve made the move from Ubuntu Resources to Ubuntu Insights, and launched the desktop version of the site
  • Las Vegas sprint: we worked on updated, mobile-first bundle and charm details pages and started planning for the next cycle
  • Partners: we’ve completed the final UX and copy for this new Ubuntu website

And we’re currently working on:

  • Responsive ubuntu.com: we’re now in the process of updating our web style guide documents before the public release of the new styles
  • Ubuntu Insights: we’re adding the final touches before launching the press centre in the next few weeks
  • Juju GUI: we’re planning the work for the next cycle
  • Fenchurch: we’re working on getting the Juju charms in production for the new legal site, finishing up the asset server and planning the development of our new partners website
  • Partners: we’re currently building the new partners website
  • Legal pages: we’re now in the process of building the new hub that will hold all our legal information
  • Chinese website: we’ve finalised UX and copy for this upcoming Ubuntu site

If you’d like to join the web team, we are currently looking for experienced user experience and web designers!

The design team getting ready to move desks, at the end of April.

Have you got any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Let us know your thoughts in the comments.

Read more

Train Stops

And the sons of pullman porters and the sons of engineers
Ride their father’s magic carpets made of steel
Mothers with their babes asleep are rockin’ to the gentle beat
And the rhythm of the rails is all they feel

– The City of New Orleans, Willie Nelson interpreting Steve Goodman

So, LibreOffice does its releases on a train release schedule, and since we recently modified the schedule a bit (by putting out the alpha1 release earlier), I took the opportunity to take a closer look and explain a bit of what we are doing. With this, every six months of LibreOffice development currently look roughly like this:
week after x.y.0 | development release | candidates (fresh) | candidates (stable) | finalized (fresh) | finalized (stable)
 0               |                     |                    |                     | x.y.0             |
 1               |                     | x.y.1~rc1          | x.(y-1).4~rc1       |                   |
 3               |                     | x.y.1~rc2          | x.(y-1).4~rc2       |                   |
 4               |                     |                    |                     | x.y.1             | x.(y-1).4
 6               |                     | x.y.2~rc1          |                     |                   |
 8               |                     | x.y.2~rc2          | x.(y-1).5~rc1       |                   |
 9               |                     |                    |                     | x.y.2             |
10               |                     |                    | x.(y-1).5~rc2       |                   |
11               |                     | x.y.3~rc1          |                     |                   | x.(y-1).5
13               | x.(y+1)~alpha1      | x.y.3~rc2          |                     |                   |
14               |                     |                    |                     | x.y.3             |
18               | x.(y+1)~beta1       |                    |                     |                   |
20               | x.(y+1)~beta2       |                    | x.(y-1).6~rc1       |                   |
22               |                     | x.(y+1)~rc1        | x.(y-1).6~rc2       |                   |
23               |                     |                    |                     |                   | x.(y-1).6
24               |                     | x.(y+1)~rc2        |                     |                   |
25               |                     | x.(y+1)~rc3        |                     |                   |

The last two columns are the most visible to most visitors of the LibreOffice website. Those are the versions found on the LibreOffice Fresh and LibreOffice Stable download pages. We are roughly at week 18 after the 4.2.0 release now, and the versions available are 4.2.4 fresh and 4.1.6 stable. A careful reader will note that according to that schedule we should be at 4.2.3 and 4.1.5 — that is true, but the 4.2 series had an extra 4.2.1 intermediate release to adjust the schedule of 4.2 towards the current plan. This is not expected for future releases (also note that there is always some flexibility in the plan to allow for holidays etc.).

If you count all the prereleases, release candidates and releases, you will find that we do 25 of those in 26 weeks. Besides the fact that this is a lot of work for release engineers, one might wonder if anyone can keep up with that, and if so — how? The answer to that depends on how you are using LibreOffice.

self deployment on LibreOffice fresh

If you are a user or a small business installing LibreOffice yourself, you will probably run LibreOffice fresh, and the table above simplifies for you as follows:

week after x.y.0 | development release | candidates   | finalized releases
 0               |                     |              | x.y.0
 1               |                     | x.y.1~rc1    |
 4               |                     |              | x.y.1
 6               |                     | x.y.2~rc1    |
 9               |                     |              | x.y.2
11               |                     | x.y.3~rc1    |
14               |                     |              | x.y.3
18               | x.(y+1)~beta1       |              |
20               | x.(y+1)~beta2       |              |
22               |                     | x.(y+1)~rc1  |
24               |                     | x.(y+1)~rc2  |
25               |                     | x.(y+1)~rc3  |

The last column shows the releases you are running. If you are a member of the LibreOffice community, it would be very helpful if you also spend some time during this six-month period on three actions:

  • running at least one of the release candidates in the table (available for download here) before the final is released.
  • running at least one of the beta releases in the table. Note that there will be a bug hunting session on the 4.3.0 beta release this week, which will help you get started.
  • running a nightly build once anywhere in the weeks 1-18. Note that if you are getting excited about seeing the latest and greatest builds while they are still steaming, there are tools that can help you with this on Linux and Windows.

If you do each of these three things once in the timeframe of six months and report any issues you find, you are already helping LibreOffice a lot — and you are making sure that the finalized releases of the fresh series not only contain all the latest features, but are also free of severe regressions.

bigger deployments on LibreOffice stable

If you are not installing LibreOffice yourself, but instead have a major deployment administrated centrally, things are a bit different. You might be more conservative and interested in the releases from LibreOffice stable. And you probably have professional support from a certified developer or a company employing certified developers.

week after x.y.0 | development release | candidates      | finalized releases
 1               |                     | x.(y-1).4~rc1   |
 4               |                     |                 | x.(y-1).4
 8               |                     | x.(y-1).5~rc1   |
11               |                     |                 | x.(y-1).5
13               | x.(y+1)~alpha1      |                 |
18               | x.(y+1)~beta1       |                 |
20               |                     | x.(y-1).6~rc1   |
23               |                     |                 | x.(y-1).6

If you intend to deploy one series of LibreOffice (e.g. 4.3), there are two things that are highly recommended to be done:

  • make the alpha or beta releases available early to interested volunteers in your deployment. They might find bugs or regressions that are specific to your use of the software.
  • make the release candidates of versions that you intend to deploy available early to your users.

Of these two actions, the first is by far the most important: it identifies issues early in the life cycle and gives both your support provider and the LibreOffice developer community at large time to resolve them. In fact, I would argue that if you have a major deployment, the only excuse for not making prereleases available is that you are making nightly builds available instead.


So, Ubuntu qualifies as a “bigger deployment” and I have to take care of LibreOffice on it. Also people want to be able to run the latest and greatest LibreOffice releases from the LibreOffice fresh series. Do I follow my recommendations here? Yes, mostly I do:

  • both the LibreOffice fresh and LibreOffice stable series are available from PPAs for Ubuntu and are updated regularly and quickly when an rc2 is available (see the example after this list).
  • prereleases are made available as bibisect repositories rather quickly (built on Ubuntu 12.04 LTS). In addition, fully packaged versions of LibreOffice are built in the prereleases PPA, starting as early as beta1.
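For example, getting onto the fresh series from the PPA looks roughly like this (the PPA name is my reading of the current Launchpad listing, so double-check it before use):

sudo add-apt-repository ppa:libreoffice/ppa   # LibreOffice fresh
sudo apt-get update
sudo apt-get install libreoffice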

So, you are invited to run or test builds from these PPAs — or download the bibisect repositories — to keep LibreOffice releases coming in the steady and stable fashion they do. Finally, there is a bug hunting session for LibreOffice this week and, as said above, no matter if you are running a huge deployment or installing on your own, you are helping LibreOffice — and yourself, as a user of LibreOffice — a lot by testing the prereleases.

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20140520 Meeting Agenda

ARM Status

Nothing new to report this week

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

Milestone Targeted Work Items

Here’s our todo list until we formulate a better plan for tracking work
next week at our team sprint:

   apw    core-1405-kernel    2 work items   
   ogasawara    core-1405-kernel    2 work items   

Status: Utopic Development Kernel

We have uploaded our first v3.15 based kernel, 3.15.0-1.5, to the Utopic
archive. It is currently based on the v3.15-rc5 upstream kernel.
Important upcoming dates:
Mon-Wed June 9 – 11, vUDS (~3 weeks away)
Thurs Jun 26 – Alpha 1 (~5 weeks away)
Fri Jun 27 – Kernel Freeze for 12.04.5 and 14.04.1 (~5 weeks away)

  • NOTE: The PrecisePangolin/ReleaseSchedule notes Kernel Freeze as Aug 9. I believe this should be amended to Jun 27.

Status: CVE’s

The current CVE status can be reviewed at the following link:

Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (May 20):

  • Lucid – Prep week
  • Precise – Prep week
  • Quantal – Prep week
  • Saucy – Prep week
  • Trusty – Prep week

    Current opened tracking bugs details:


    For SRUs, SRU report is a good source of information:



    cycle: 18-May through 07-Jun
    16-May Last day for kernel commits for this cycle
    18-May – 24-May Kernel prep week.
    25-May – 31-May Bug verification & Regression testing.
    01-Jun – 07-Jun Regression testing & Release to -updates.

Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more

A few years ago, I started to participate in the packaging of makedumpfile and kdump-tools for Debian and Ubuntu. I am currently applying for the formal status of Debian Maintainer to continue that task.

For a while now, I have been noticing that our version of the kernel dump mechanism was lacking a functionality that has been available on RHEL & SLES for a long time: remote kernel crash dumps. On those distributions, it is possible to define a remote server to be the receptacle of the kernel dumps of other systems. This can be useful for centralization, or to capture dumps on systems with limited or no local disk space.

So I am proud to announce the first functional beta release of kdump-tools with remote kernel crash dump functionality for Debian and Ubuntu!

For those of you eager to test or not interested in the details, you can find a packaged version of this work in a Personal Package Archive (PPA); the installation instructions are below.

New functionality : remote SSH and NFS

In the current version available in Debian and Ubuntu, the kernel crash dumps are stored on local filesystems. Starting with version 1.5.1, they are stored in a timestamped directory under /var/crash. The new functionality allows you to define either a remote host accessible through SSH or an NFS mount point as the receptacle for the kernel crash dumps.

A new section of the /etc/default/kdump-tools file has been added:

# ---------------------------------------------------------------------------
# Remote dump facilities:
# SSH - username and hostname of the remote server that will receive the dump
# and dmesg files.
# SSH_KEY - Full path of the ssh private key to be used to login to the remote
# server. use kdump-config propagate to send the public key to the
# remote server
# HOSTTAG - Select if hostname or IP address will be used as a prefix to the
# timestamped directory when sending files to the remote server.
# 'ip' is the default.
# NFS - Hostname and mount point of the NFS server configured to receive
# the crash dump. The syntax must be {HOSTNAME}:{MOUNTPOINT} 
# (e.g. remote:/var/crash)
# SSH="<user@server>"
# SSH_KEY="<path>"
# HOSTTAG="hostname|[ip]"
# NFS="<nfs mount>"

The kdump-config command also gains a new option: propagate, which is used to send a public ssh key to the remote server so that passwordless ssh commands can be issued to the remote SSH host.

Those options and commands are nothing new: I simply based my work on existing functionality from RHEL & SLES. So if you are well acquainted with the RHEL remote kernel crash dump mechanism, you will not be lost on Debian and Ubuntu. I want to thank those who built this functionality on those distributions; it was a great help in getting it ported to Debian.

Testing on Debian

First of all, you must enable the kernel crash dump mechanism at the kernel level. I will not go into details as it is slightly off topic, but you should:

  1. Add crashkernel=128M to /etc/default/grub in GRUB_CMDLINE_LINUX_DEFAULT (as shown below)
  2. Run update-grub
  3. Reboot
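A sketch of what that GRUB line could look like (the existing contents of GRUB_CMDLINE_LINUX_DEFAULT will differ on your system):

GRUB_CMDLINE_LINUX_DEFAULT="quiet crashkernel=128M"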

Install the beta packages

The package in the PPA can be installed on Debian with add-apt-repository. This command is in the software-properties-common package, so you will have to install it first:

$ apt-get install software-properties-common
$ add-apt-repository ppa:louis-bouchard/networked-kdump

Since you are on Debian, the result of the last command will be wrong, as the series defined in the PPA is for Utopic. Just use the following command to fix that:

$ sed -i -e 's/sid/utopic/g' /etc/apt/sources.list.d/louis-bouchard-networked-kdump-sid.list 
$ apt-get update
$ apt-get install kdump-tools makedumpfile

Configure kdump-tools for remote SSH capture

Edit the file /etc/default/kdump-tools and enable the kdump mechanism by setting USE_KDUMP to 1. Then set the SSH variable to the remote hostname & credentials that you want to use to send the kernel crash dump. Here is an example:
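For instance (using the same user and host as in the rest of this post):

USE_KDUMP=1
SSH="ubuntu@TrustyS-netcrash"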


You will need to propagate the ssh key to the remote SSH host, so make sure that you have the password of the remote server’s user you defined (ubuntu in my case) for this command:

root@sid:~# kdump-config propagate
Need to generate a new ssh key...
The authenticity of host 'trustys-netcrash (' can't be established.
ECDSA key fingerprint is 04:eb:54:de:20:7f:e4:6a:cc:66:77:d0:7c:3b:90:7c.
Are you sure you want to continue connecting (yes/no)? yes
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/kdump_id_rsa to server ubuntu@TrustyS-netcrash

If you have an existing ssh key that you want to use, you can use the SSH_KEY option to point to your own key in /etc/default/kdump-tools:
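For example, with the key path used in the commands below:

SSH_KEY="/root/.ssh/mykey_id_rsa"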


Then run the propagate command as previously:

root@sid:~/.ssh# kdump-config propagate
Using existing key /root/.ssh/mykey_id_rsa
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/mykey_id_rsa to server ubuntu@TrustyS-netcrash

It is a safe practice to verify that the remote SSH host can be accessed without a password. You can use the following command to test (with your own remote server as defined in the SSH variable in /etc/default/kdump-tools):

root@sid:~/.ssh# ssh -i /root/.ssh/mykey_id_rsa ubuntu@TrustyS-netcrash pwd

If the passwordless connection can be achieved, then everything should be all set. You can proceed with a real crash dump test if your setup allows for it (not a production environment for instance).

Configure kdump-tools for remote NFS capture

Edit the /etc/default/kdump-tools file and set the NFS variable to the NFS mount point that will be used to transfer the crash dump:
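For example, matching the mount command below:

NFS="TrustyS-netcrash:/var/crash"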


The format needs to be the same syntax that would normally be used to mount the NFS filesystem. You should test that your NFS filesystem is indeed accessible by mounting it manually:

root@sid:~/.ssh# mount -t nfs TrustyS-netcrash:/var/crash /mnt
root@sid:~/.ssh# df /mnt
Filesystem 1K-blocks Used Available Use% Mounted on
TrustyS-netcrash:/var/crash 6815488 1167360 5278848 19% /mnt
root@sid:~/.ssh# umount /mnt

Once you are sure that your NFS setup is correct, then you can proceed with a real crash dump test.

Testing on Ubuntu

As you would expect, setting things on Ubuntu is quite similar to Debian.

Install the beta packages

The package in the PPA can be installed on Ubuntu with add-apt-repository. This command is in the software-properties-common package, so you will have to install it first:

$ sudo add-apt-repository ppa:louis-bouchard/networked-kdump

Packages are available for Trusty and Utopic.

$ sudo apt-get update
$ sudo apt-get -y install linux-crashdump

Configure kdump-tools for remote SSH capture

Edit the file /etc/default/kdump-tools and enable the kdump mechanism by setting USE_KDUMP to 1. Then set the SSH variable to the remote hostname & credentials that you want to use to send the kernel crash dump. Here is an example:
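As in the Debian section, for instance:

USE_KDUMP=1
SSH="ubuntu@TrustyS-netcrash"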


You will need to propagate the ssh key to the remote SSH host, so make sure that you have the password of the remote server’s user you defined (ubuntu in my case) for this command:

ubuntu@TrustyS-testdump:~$ sudo kdump-config propagate
[sudo] password for ubuntu: 
Need to generate a new ssh key...
The authenticity of host 'trustys-netcrash (' can't be established.
ECDSA key fingerprint is 04:eb:54:de:20:7f:e4:6a:cc:66:77:d0:7c:3b:90:7c.
Are you sure you want to continue connecting (yes/no)? yes
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/kdump_id_rsa to server ubuntu@TrustyS-netcrash

If you have an existing ssh key that you want to use, you can use the SSH_KEY option to point to your own key in /etc/default/kdump-tools:
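For example:

SSH_KEY="/root/.ssh/mykey_id_rsa"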

Then run the propagate command as previously:

ubuntu@TrustyS-testdump:~$ kdump-config propagate
Using existing key /root/.ssh/mykey_id_rsa
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/mykey_id_rsa to server ubuntu@TrustyS-netcrash

It is a safe practice to verify that the remote SSH host can be accessed without a password. You can use the following command to test (with your own remote server as defined in the SSH variable in /etc/default/kdump-tools):

ubuntu@TrustyS-testdump:~$ sudo ssh -i /root/.ssh/mykey_id_rsa ubuntu@TrustyS-netcrash pwd

If the passwordless connection can be achieved, then everything should be all set.

Configure kdump-tools for remote NFS capture

Edit the /etc/default/kdump-tools file and set the NFS variable to the NFS mount point that will be used to transfer the crash dump:
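Again, for example:

NFS="TrustyS-netcrash:/var/crash"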


The format needs to be the same syntax that would normally be used to mount the NFS filesystem. You should test that your NFS filesystem is indeed accessible by mounting it manually (you might need to install the nfs-common package):

ubuntu@TrustyS-testdump:~$ sudo mount -t nfs TrustyS-netcrash:/var/crash /mnt 
ubuntu@TrustyS-testdump:~$ df /mnt
Filesystem 1K-blocks Used Available Use% Mounted on
TrustyS-netcrash:/var/crash 6815488 1167488 5278720 19% /mnt
ubuntu@TrustyS-testdump:~$ sudo umount /mnt

Once you are sure that your NFS setup is correct, then you can proceed with a real crash dump test.

Miscellaneous commands and options

A few other things are under the control of the administrator.

The HOSTTAG modifier

When sending the kernel crash dump, kdump-config will use the IP address of the server as a prefix to the timestamped directory on the remote host. You can use the HOSTTAG variable to change that default. Simply define it in /etc/default/kdump-tools:
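For example:

HOSTTAG="hostname"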


The hostname of the server will be used as a prefix instead of the IP address.

Currently, this is only implemented for the SSH method, but it will be available for NFS as well in the final version.

kdump-config show

To verify the configuration that you have defined in /etc/default/kdump-tools, you can use kdump-config’s show command to review your options.

ubuntu@TrustyS-testdump:~$ sudo kdump-config show
KDUMP_SYSCTL: kernel.panic_on_oops=1
KDUMP_COREDIR: /var/crash
crashkernel addr: 0x2d000000
SSH: ubuntu@TrustyS-netcrash
SSH_KEY: /root/.ssh/kdump_id_rsa
current state: ready to kdump
kexec command:
 /sbin/kexec -p --command-line="BOOT_IMAGE=/vmlinuz-3.13.0-24-generic root=/dev/mapper/TrustyS--vg-root ro console=ttyS0,115200 irqpoll maxcpus=1 nousb" --initrd=/boot/initrd.img-3.13.0-24-generic /boot/vmlinuz-3.13.0-24-generic

If the remote kernel crash dump functionality is set up, you will see the options listed in the output of the command.


As outlined at the beginning, this is the first functional beta version of the code. If you are curious, you can find the code I am working on in the networked_kdump_beta1 branch of my git repository.

Don’t hesitate to test & let me know if you find issues!

Read more
Carla Berkers

As the number of Juju users has been rapidly increasing over the past year, so has the number of new solutions in the form of charms and bundles. To help users assess and choose solutions, we felt it would be useful to improve the visual presentation of charm and bundle details.

While we were in Las Vegas, we took advantage of the opportunity to work with the Juju developers and solutions team to find out how they find and use existing charms and bundles in their own deployments. Together we evaluated the existing browsing experience in the Juju GUI and went through JSON files line by line to understand what information we hold on charms.


We used post-its to capture every piece of information that the database holds about a bundle or charm that is submitted to charmworld.


We created small screen wireframes first to really focus on the most important content and how it could potentially be displayed in a linear way. After showing the wireframes to a couple more people we used our guidelines to create mobile designs that we can scale out to tablet and desktop.

With the grouped and prioritised information in mind we created the first draft of the wireframes.


In order to verify and test our designs, we made them modular. Over time it will be easy to move content around if we want to test whether another priority works better for a certain solution. The mobile-first approach is a great tool for making sense of complex information and forced us to prioritise the content around users’ needs.


First version designs.

Read more
Inayaili de León Persson

This post is part of the series ‘Making ubuntu.com responsive‘.

Following the designers and developers sprint, we had a full web team workshop day to discuss our findings and plan the work for the following weeks.

Planning and scoping was tricky because we had to balance the work required to make the site responsive with incoming work requests from the business — there was a big release of Ubuntu coming up and lots of content updates along with it.

We carried out a few team exercises that helped us to deconstruct the project into smaller chunks that we could prioritise and plan around other commitments, so that it didn’t feel like building the Titanic but rather something more manageable.

Creating a wishlist

Initially, it’s good to have an idea of what each person considers important for the project.

The simplest way to capture everything you want to do in a project is to write all ideas on separate sticky notes. This approach helps to identify common themes and priority tasks.

This is a good opportunity to get the entire team together in a room and give everyone space to say what they hope to achieve in the near, and not so near, future.

At the end, though, it’s only natural that you’ll be left with a huge amount of ideas, so it’s necessary to organise them into groups, like projects or topics.

Dividing work into phases

Following the wishlist exercise, and based on the resources and time available, fixed deadlines, and business goals, the list was trimmed down into four time frames:

  • To be completed before the Ubuntu 14.04 LTS release
  • To be completed for the release
  • To be completed soon after release
  • To be completed later

If you have hard deadlines for other projects that are on your team’s plate, start adding those into a calendar overview (we used four columns of sticky notes) and discuss what can be done within the time that is left available.

For example, we knew that, in preparation for Ubuntu’s presence at the Mobile World Congress at the end of February, the content of the tablet and phone sections of the website would have to be updated ahead of any responsive work going live.

Defining high priority tasks

In terms of the responsive project itself, we defined the priority tasks:

  • Update our CSS with the components that had been created for the two initial responsive projects and that would be useful across our sites
  • Find an initial solution for main, second and third level navigation, and multiple footers
  • Create updated image assets where necessary

And we also defined what we wouldn’t do at this stage:

  • Rewrite copy
  • Restructure and reorder content
  • Change the information architecture of the site
  • Update the site’s visual style

It was important to have these restrictions in place in order to keep the scope of the project as small and as feasible as possible. Most of the time, it’s impossible to do everything you want in the first pass of moving to responsive, so deciding what the biggest wins are for the amount of time you have available is an important step in planning the work.

At the end of the workshop, we all felt more comfortable with the amount of work ahead: following a few simple exercises, we had identified pain points, set realistic goals and expectations, and established priorities.

Content risk assessment

Matt went through all the pages of the site to assess what, in terms of the design, could become an issue once the site was responsive and once the content had to fit into small screens. These findings were added to a document divided into five different types of content:

  • Images
  • Graphs
  • Tables
  • Layout and behaviour
  • Text

With this document we could see how much work we potentially would have when transitioning all the pages of the site to the updated responsive styles, and which would be the trickier problems to solve.

Estimating time

With the content inventory at hand, Ant estimated the degree of difficulty of converting each page for responsive, using a scale of 1 to 3. He then estimated how many ‘points’ he would be able to get done in one day, which gave us an estimated completion time for the first pass at fixing rendering issues.

Something to bear in mind when estimating times is that, while fixing the rendering issues that came with converting all the pages of the site to the responsive styles proved faster than initially estimated, the testing across different devices and screen sizes that followed was time-consuming for both designers and developers.

The complexity of testing and how long you should allow for it will depend on the site’s design and the CSS being used: for example, when using newer techniques you should allow enough time to create suitable fallbacks for browsers with fewer capabilities. Another thing to keep in mind is that testing across devices should be done as you go, rather than at the very end of the process. Just a quick look at a couple of different devices and browsers (for example, previous Android versions and Opera Mini) before you start the estimation process will give you a clearer idea of the amount of work that lies ahead.

Even though our time estimates were a little off, creating those spreadsheets and dividing the work into very small blocks made us feel more in control, and, as we ticked pages off, it made us feel motivated.


When you’re working on a large living and breathing website, you know that all the updates and changes that come along with it don’t stop just because you want to make your site responsive. It’s important that everyone involved understands that you should be putting your website first, and that responsive is not necessarily the top priority. That’s why it’s important to be smart about the way you plan the project and give yourself some parameters to work within — the transition isn’t going to happen overnight.

Read the next post in this series: “Making ubuntu.com responsive: approach to content”

Read more
Colin Ian King

Back in April I wrote smemstat to dump out the per-process shared memory usage based on the memory mapping data from /proc/$pid/smaps. I tried to make smemstat as compact as possible, with minimised changes in heap size, to reduce the impact on the total system shared memory statistics.

smemstat reports the memory utilised per process, taking into consideration pages that are shared with other processes. So, if two processes share 64K of memory, each process will report that memory as 32K.

smemstat has two modes of operation: "dump all current memory stats" and "dump periodic changes in memory stats". The former is useful for a single snapshot view of memory utilisation, whereas the latter is useful for observing memory size changes over time. Exiting or new processes that share memory with other processes will cause a change in the reported amount of memory used by the processes, because this memory is shared amongst a changed number of processes.

There are various options to smemstat, so consult the man page for more details. One useful option is -o, which collects the smemstat statistics into a JSON-formatted output file. This can be useful for memory-based tests, as the structured JSON data can be easily parsed and analysed.
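A couple of hedged invocation examples (the exact argument syntax is my assumption from the description above, so verify it against the man page):

smemstat                   # one-off dump of the current per-process shared memory stats
smemstat 10                # report changes in memory stats every 10 seconds
smemstat -o stats.json 10  # additionally write the statistics as JSON to stats.json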

smemstat has landed in Ubuntu Utopic 14.10 and is also available in Debian. Examples of smemstat can be found here. I hope it proves to be a useful utility.

Read more
David Owen

There's a simple process you can use to hit your big goals, reliably.

Perhaps you've tried and failed in the past. Maybe you can hit smaller goals, but have trouble maintaining focus on bigger goals, or connecting them to your day-to-day life.

I used to struggle with this, too, until I figured out the right process. Then, I hit every single big goal I had for the next several years—and all of them ahead of schedule!

And, the process is pretty simple.

Continue reading "How to hit your goals"

Read more

Now that all the responsible disclosure processes have been followed through, I’d like to tell everyone a story of my very bad week last week. Don’t worry, it has a happy ending.


Part 1: Exposition

On May 5th we got a support request from a user who observed confusing behaviour in one of our systems. Our support staff immediately escalated it to me, and my team sprang into action for what ended up being a 48-hour rollercoaster ride that ended with us reporting a security bug upstream to Django.

The bug, in a nutshell, is that when the following conditions line up, a system could end up serving a request to one user that was meant for another:

- You are authenticating requests with cookies, OAuth or other authentication mechanisms
- The user is using any version of Internet Explorer or Chromeframe (to be more precise, anything with “MSIE” in the request user agent)
- You (or an ISP in the middle) are caching requests between Django and the internet (except Varnish’s default configuration, for reasons we’ll get to)
- You are serving the same URL with different content to different users

We rarely saw this combination of conditions because users of services provided by Canonical generally have a bias towards not using Internet Explorer, as you’d expect from a company that develops the world’s most used Linux distribution.


Part 2: Rising Action

Now, one may think that the bug is obvious, and wonder how it went unnoticed since 2008, but this really was one of those elusive “ninja-bugs” you hear about on the Internet, and it took us quite a bit of effort to track it down.

In debugging situations such as this, the first step is generally to figure out how to reproduce the bug. In fact, figuring out how to reproduce it is often the lion’s share of the effort of fixing it.  However, no matter how much we tried we could not reproduce it. No matter what we changed, we always got back the right request. This was good, because it ruled out a widespread problem in our systems, but did not get us closer to figuring out the problem.

Putting aside reproducing it for a while, we then moved on to combing very carefully through our code, trying to find any hints of what could be causing this. Several of us looked at it with fresh eyes so we wouldn’t be tainted by having developed or reviewed the code, but we all still came up empty each and every time. Our code seemed perfectly correct.

We then went on to a close examination of all related requests to get new clues as to where the problem was hiding. But we had a big challenge with this. As developers we don’t get access to any production information that could identify people. This is good for user privacy, of course, but made it hard to produce useful logs. We invested some effort in working around this while maintaining user privacy, by creating a way to anonymise the logs that would still let us find patterns in them. This effort turned up the first real clue.

We use Squid to cache data for each user, so that when they re-request the same data, it’s queued up right in memory and can be quickly served to them without having to recreate the data from the databases and other services. In those anonymised Squid logs, we saw cookie-authenticated requests that didn’t contain an HTTP Vary header at all, where we expected them to have at the very least “Vary: Cookie” to ensure Squid would only ever serve the correct content. So we then knew what was happening, but not why. We immediately pulled Squid out of the middle to stop this from happening.

Why was Squid not logging Vary headers? There were many possible culprits for this, so a *lot* of people got involved in searching for the problem. We combed through everything in our frontend stack (Apache, Haproxy and Squid) that could sometimes remove Vary headers.

This was made all the harder because we had not yet fully Juju-charmed every service, so we could not easily access all configurations and test theories locally. Sometimes technical debt really gets expensive!

After this exhaustive search, we determined that nothing in our code removed headers. So we started following the code up into the Django middlewares, and went as far as logging the exact headers Django was sending out at the last middleware layer. Still nothing.


Part 3: The Climax

Until we got a break. Logs were still being generated, and eventually a pattern emerged: all the initial requests that had no Vary headers seemed, for the most part, to be from Internet Explorer. It didn’t make sense that a browser could remove headers that were returned from a server, but knowing this took us to the right place in the Django code, and because Django is open source, there was no friction in inspecting it deeply. That’s when we saw it.

In a function called fix_IE_for_vary, we saw the offending line of code.

del response['Vary']

We finally found the cause.

It turns out IE 6 and 7 didn’t have the HTTP Vary header implemented fully, so there’s a workaround in Django to remove it for any content that isn’t HTML or plain text. In hindsight, if Django had implemented this as a middleware instead, even one enabled by default, it would have been more likely that this would have been revised earlier. Hindsight is always 20/20 though, and it is easy to sit back and theorise on how things should have been done.

So if you’ve been serving any data that isn’t HTML or plain text, with a caching layer in the middle that implements Vary header management to spec (Varnish doesn’t trust it by default, and checks the cookie in the request anyway), you may have served a response to the wrong user.

Newer versions of Internet Explorer have since fixed this, but who knew in 2008 that IE 9 would come three years later?


Part 4: Falling Action

We immediately applied a temporary fix to all our running Django instances in Canonical and involved our security team to follow standard responsible disclosure processes. The Canonical security team was now in the driving seat and worked to assign a CVE number and email the Django security contact with details on the bug, how to reproduce it and links to the specific code in the Django tree.

The Django team immediately and professionally acknowledged the bug and began researching possible solutions, as well as any other parts of the code where this scenario could occur. There was continuous communication among our teams for the next few days while we agreed on lead times for distributions to receive and prepare the security fix.


Part 5: Resolution

I can’t highlight enough how important it is to follow these well-established processes to make sure we keep the Internet at large a generally safe place.
To summarise, if you’re running Django, please update to the latest security release as quickly as possible, and disable any internal caching until then to minimise the chances of hitting this bug.

If you’re running Squid and want to check whether you could be affected, we put together a small Python script to run against your logs that you can use as a base; you may need to tweak it based on your log format. Be sure to run it only against cookie-authenticated URLs, otherwise you will hit a lot of false positives.

Read more
Giorgio Venturi

With the unstoppable rise of mobile apps, some pundits within the tech industry have hastily demoted the mobile web to a second-class citizen, or even dismissed it as ‘dead’. Who cares about websites and webapps when you can deliver a superior user experience with a native app?

Well, we care, because the reality is a bit different. New apps are hard to discover; their content is locked, with no way to access it from the outside. People browse the web more than ever on their mobile phones. The browser is the most used app on the phone, both as a starting point and as a destination in the user journey.

Source: xkcd

At Ubuntu, we decided to focus on improving the user experience of browsing and searching the web. Our approach is underpinned by our design principles, namely:

  1. Content is king: the UI should recede into the background once the user starts interacting with content
  2. Leverage natural interaction by using gestures and spatial metaphors.

In designing the browser, there’s one more principle we took into account. If content is our king, then recency should be our queen.

Recency is queen

People forget about things. That’s why tasks such as finding a page you visited yesterday or last week can be very hard: UIs are not designed to support the long-term memory of the user. For example, when browsing tabs on a smartphone touchscreen, it is hard to recognise what’s on screen because we have forgotten what that page is and why we arrived there.

Similarly, bookmarks are often a meaningless list of webpages, as their value was linked to the specific time when they were taken. For example, let’s imagine we are planning our next holiday and we start bookmarking a few interesting places. We may even create a new ‘holidays’ folder and add the bookmarks to it. However, once the holiday is over, the bookmarks are still there; they don’t expire once they have lost their value. This happens pretty much every time; old bookmarks and folders will eventually start cluttering our screen and make it difficult to find the information we need.

Therefore we redesigned tabs, history and bookmarks to display the most recent information first. Consequently, the display and the retrieval of information is simplified.

Browser tabs

In our browser, most recent tabs come first. Here is how it works:

Browser tabs

In this way, users don’t have to painstakingly browse an endless list of tabs that may have been opened weeks or days ago, like in Mobile Safari or Chrome.


Browser history has not changed much since Netscape Navigator; modern browsers still display a chronological log of all the web pages we visited, starting from today. Finding a website or a page is hard because of the sheer amount of information. In our browser we employ a clustered model where we display the last visited websites, not every single page. On tap, we then display all webpages for that website, starting from the most recent. In this way, scanning the history log is much easier and less painful.

Browser history

Loving the bottom edge

We believe the bottom edge is the most pleasurable edge to use. It is easily accessible at any time and ergonomically friendly to the typical one-hand phone hold. Once discovered, it will slowly build into our muscle memory and become a natural and intuitive way of interacting with the application.

Bottom edge

This is why we combined tabs and history and made them accessible through the bottom edge. As a team, we spent months building and refining a sleek, intuitive and fluid user experience.

Here’s a sneak preview of how it will look:

Video: Browser interactions

Bottom edge gesture will have three stages:

  1. Dragging from the bottom edge will hint and reveal the most recently viewed tab
  2. Continue dragging and the full tab spread is revealed
  3. Keep on dragging and browser history will be fully revealed.

All elements will support gestural interaction: users can swipe to delete a tab or a website from history.

That’s all for now. In the next blog post, we will talk more about gestural interaction in Browser. Stay tuned!

Read more
David Planella

We’re thrilled to announce a new release of Ubuntu Dual boot, now supporting enhanced Ubuntu upgrades either from the Android or Ubuntu side.

The new Ubuntu Dualboot release, codenamed M9, enables developers to run both Ubuntu and Android on a single device and is packed with new features that make it the power tool to use for those doing development in both platforms.

For developers only

Dual boot is not a feature suitable for regular users. It is recommended to be installed only by developers who are comfortable with flashing devices and with their partition layout. Dual boot rewrites the Android recovery partition and those installing it should be intimately familiar with re-flashing it in case something goes wrong.

Multiple Android flavours are supported (AOSP or stock, CyanogenMod) and installation of Ubuntu can be done for all versions available in the regular distribution channels.

What’s new

The new release fixes a number of bugs, brings under-the-hood enhancements and includes a bunch of exciting features. Here are the highlights:

Enhanced Ubuntu upgrades

The most prominent feature is the addition of support for the upgrades on the Ubuntu side. Now image upgrades can be downloaded using the standard procedure in System Settings › Updates from Ubuntu. To complete the installation, a reboot to Android will have the Dualboot app pick up the downloaded image upgrade, install it in the right location and reboot to the new Ubuntu image.

As an alternative, installations can still be done fully on the Android side. In a nutshell:

  • Download of a new Ubuntu version can happen on either the Ubuntu or Android side
  • Installation of a new Ubuntu version needs to be done from the Android side via the Dualboot app

Learn more about upgrading to a new Ubuntu image ›

Android notifications and background execution improvements

The Dualboot Android app now provides notifications for when new Ubuntu images are available, so no more excuses not to be running the latest Ubuntu! In addition, improvements have been added to download and install Ubuntu in the background, while showing progress also using standard Android notifications.

Sideload support

For those cases in which bandwidth is at a premium, the dual boot installer now supports sideload mode. This enables downloading images on a fast network and saving them for later installation: these can be downloaded on a laptop and then transferred via USB to the device. It also opens the door for easily flashing custom images other than the ones downloaded from the official channels.

Learn more about sideload support ›

Custom servers

A nifty feature for our heroic community of porters of Ubuntu images to devices that are not officially supported, and for users of those ports: dual boot now supports setting a custom server to install new Ubuntu images directly from it.

Learn more about using a custom server ›

Installing dual boot

Installing and running dual boot can be done in a few easy steps. In a nutshell, it requires performing a one-off installation of the dual boot app in Android, which will enable you to both install the version of Ubuntu of your choice, and to reboot into Ubuntu.

Install dual boot on your device

Read more
Prakash Advani

Ubuntu 32-Bit or 64-Bit ?

I often get asked this question: 32 or 64-bit? In the past I have told people to stick to 32-bit because of compatibility issues, but things have changed now. Quoting from Phoronix, which recently did a benchmark test between 32-bit and 64-bit:

Going back years we have run 32-bit vs. 64-bit Linux benchmarks. While the results seldom change, we keep running them as the question of choosing between a 32-bit and 64-bit Linux distribution image is still a popular question… These tests drive in a surprising amount of traffic and I continue to be flabbergasted by the number of people still asking this question when nearly all modern x86 Intel/AMD hardware fully supports x86_64 and it generally means much better performance. Usually the only caveat in not using a 64-bit Linux image is if running a system with less than 2GB of RAM.

In the past there were issues surrounding the Java and Flash support for 64-bit Linux along with an assortment of other possible problems (e.g. with Wine), but all those major issues are a matter of the past. 64-bit Linux is in great shape and as long as you have a decent amount of RAM you really should be running 64-bit Linux.

If you are still in doubt, read the full article for the benchmark results.

Read more
Michael Hall

Ubuntu has always been about breaking new ground. We broke the ground with the desktop back in 2004, we have broken the ground with cloud orchestration across multiple clouds and providers, and we are building a powerful, innovative mobile and desktop platform that is breaking ground with convergence.

The hardest part about breaking new ground and innovating is not having the vision and creating the technology, it is getting people on board to be part of it.

We knew this was going to be a challenge when we first took the wraps off the Ubuntu app developer platform: we had a brand new platform that was still being developed, and when we started, many of the key pieces were not there, such as a solid developer portal, documentation, API references, training and more. Today the story is very different, with a compelling, end-to-end developer story for building powerful convergent apps.

We believed and always have believed in the power of this platform, and every single one of those people who also believed in what we are doing and wrote apps have shared the same spirit of pioneering a new platform that we have.

As such, we want to acknowledge those people.

And with this, I present Ubuntu Pioneers.

The idea is simple, we want to celebrate the first 200 app developers who get their apps in Ubuntu. We are doing this in two ways.

Firstly, we have created a page which displays all of these developers and lists the apps that they have created. This will provide a permanent record of those who were there right at the beginning.

Secondly, we have designed a custom, limited-edition Ubuntu Pioneers t-shirt that we want to send to all of our pioneers. For those of you who are listed on this page, please ensure that your email address is correct in MyApps as we will be getting in touch soon.

Thank you so much to every single person listed on that page. You are an inspiration for me, my team, and the wider Ubuntu project.

If you have that pioneering spirit and wish you were up there, fear not! We still have some space before we hit 200 developers, so go here to get started building an app.

Original by Jono Bacon

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20140513 Meeting Agenda

ARM Status

nothing new to report this week

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

Milestone Targeted Work Items

I have reservations about blueprints (BPs) being the appropriate method for tracking
work items as we move forward. I've tentatively set up a discussion
point for the team sprint at the end of the month to figure out a better way to track them.

Status: Utopic Development Kernel

We are preparing to upload our first v3.15 based kernel to the Utopic
archive. We’re awaiting some DKMS package fixes before doing so. We’ve
currently rebased to the latest v3.15-rc5 upstream kernel.
Additionally, at a minimum, we will converge on the v3.16 kernel for the
Utopic 14.10 release.
Important upcoming dates:
Mon-Wed June 9 – 11, vUDS (~4 weeks away)
Thurs Jun 26 – Alpha 1 (~6 weeks away)

Status: CVEs

The current CVE status can be reviewed at the following link:

Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (May 6):

  • Lucid – Verification and Testing
  • Precise – Verification and Testing
  • Quantal – Verification and Testing
  • Saucy – Verification and Testing
  • Trusty – Verification and Testing

    Current open tracking bug details:


    For SRUs, the SRU report is a good source of information:



    • Current cycle: 27-Apr through 17-May
    • 25-Apr: Last day for kernel commits for this cycle
    • 27-Apr – 03-May: Kernel prep week
    • 04-May – 10-May: Bug verification & regression testing
    • 11-May – 17-May: Regression testing & release to -updates

Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Read more
Dustin Kirkland

It was September of 2009.  I answered a couple of gimme trivia questions and dropped my business card into a hat at a Linux conference in Portland, Oregon.  A few hours later, I received an email...I had just "won" a developer edition HTC Dream -- the Android G1.  I was quite anxious to have a hardware platform where I could experiment with Android.  I had, of course, already downloaded the SDK, compiled Android from scratch, and fiddled with it in an emulator.  But that experience fell far short of Android running on real hardware.  Until the G1.  The G1 was the first device to truly showcase the power and potential of the Android operating system.

And with that context, we are delighted to introduce the Orange Box!

The Orange Box

Conceived by Canonical and custom built by TranquilPC, the Orange Box is a 10-node cluster computer that fits in a suitcase.

Ubuntu, MAAS, Juju, Landscape, OpenStack, Hadoop, CloudFoundry, and more!

The Orange Box provides a spectacular development platform, showcasing in mere minutes the power of hardware provisioning and service orchestration with Ubuntu, MAAS, Juju, and Landscape.  OpenStack, Hadoop, CloudFoundry, and hundreds of other workloads deploy in minutes, to real hardware -- not just instances in AWS!  It also makes one hell of a Steam server -- there's a charm for that ;-)

OpenStack, deployed by Juju, takes merely 6 minutes on an Orange Box

Most developers here certainly recognize the term "SDK", or "Software Development Kit"...  You can think of the Orange Box as an "HDK", or "Hardware Development Kit".  Pair an Orange Box with MAAS and Juju, and you have yourself a compact cloud.  Or a portable big data number cruncher.  Or a lightweight cluster computer.
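
To make that concrete, here is a minimal sketch of the kind of service orchestration the box is built to showcase, driving the Juju command line from Python. It assumes Juju is installed and an environment (for example one backed by MAAS across the box's ten nodes) is already configured; the WordPress and MySQL charms are just an illustrative workload, not a fixed recipe, and OpenStack itself would be deployed the same way from its charm bundle.

    # A sketch of orchestrating services with Juju from Python; assumes a
    # configured Juju environment (e.g. MAAS-backed) and access to the
    # charm store. Not Canonical's demo script, just the general idea.
    import subprocess

    def juju(*args):
        """Run a juju subcommand, raising an error if it fails."""
        subprocess.check_call(["juju"] + list(args))

    if __name__ == "__main__":
        juju("bootstrap")                           # provision the state server on a node
        juju("deploy", "mysql")                     # database service on one machine
        juju("deploy", "wordpress")                 # web front end on another
        juju("add-relation", "wordpress", "mysql")  # wire the two services together
        juju("expose", "wordpress")                 # make the front end reachable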

The underside of an Orange Box, with its cover off

Want to get your hands on one?

Drop us a line, and we'd be delighted to hand-deliver an Orange Box to your office, and conduct 2 full days of technical training, covering MAAS, Juju, Landscape, and OpenStack.  The box is yours for 2 weeks, as you experiment with the industry leading Ubuntu ecosystem of cloud technologies at your own pace and with your own workloads.  We'll show back up, a couple of weeks later, to review what you learned and discuss scaling these tools up, into your own data center, on your own enterprise hardware.  (And if you want your very own Orange Box to keep, you can order one from our friends at TranquilPC.)

Manufacturers of the Orange Box

Gear head like me?  Interested in the technical specs?

Remember those posts late last year about Intel NUCs?  Someone took notice, and we set out to build this ;-)

Each Orange Box chassis contains:
  • 10x Intel NUCs
  • All 10x Intel NUCs contain
    • Intel HD Graphics 4000 GPU
    • 16GB of DDR3 RAM
    • 120GB SSD root disk
    • Intel Gigabit ethernet
  • D-Link DGS-1100-16 managed gigabit switch with 802.1q VLAN support
    • All 10 nodes are internally connected to this gigabit switch
  • 100-240V AC/DC power supply
    • Adapter supplied for US, UK, and EU plug types
    • 19V DC power supplied to each NUC
    • 5V DC power supplied to internal network switch

Intel NUC D53427RKE board

That's basically an Amazon EC2 m3.xlarge ;-)

The first node, node0, additionally contains:
  • A 2TB Western Digital HDD, preloaded with a full Ubuntu archive mirror
  • USB and HDMI ports are wired and accessible from the rear of the box

Most planes fly in clouds...this cloud flies in planes!

In aggregate, this micro cluster effectively fields 40 cores, 160GB of RAM, 1.2TB of solid state storage, and is connected over an internal gigabit network fabric.  A single fan quietly cools the power supply, while all of the nodes are passively cooled by aluminum heat sinks spanning each side of the chassis. All in a chassis the size of a tower PC!
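
For the curious, those aggregate numbers follow directly from the per-node specs listed above; a quick back-of-the-envelope calculation (assuming the i5 in each NUC exposes 4 hardware threads) looks like this:

    # Back-of-the-envelope aggregation of the Orange Box specs listed above.
    NODES = 10
    THREADS_PER_NODE = 4    # dual-core, hyper-threaded i5 per NUC (assumed)
    RAM_GB_PER_NODE = 16
    SSD_GB_PER_NODE = 120

    print("cores  :", NODES * THREADS_PER_NODE)                 # 40
    print("RAM    :", NODES * RAM_GB_PER_NODE, "GB")            # 160 GB
    print("storage:", NODES * SSD_GB_PER_NODE / 1000.0, "TB")   # 1.2 TB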

It fits in a suitcase, and can travel anywhere you go.

Pelican iM2875 Storm Case

How are we using them at Canonical?

If you're here at the OpenStack Summit in Atlanta, GA, you'll see at least a dozen Orange Boxes, in our booth, on stage during Mark Shuttleworth's keynote, and in our breakout conference rooms.

Canonical sales engineer, Ameet Paranjape,
demonstrating OpenStack on the Orange Box in the Ubuntu booth
at the OpenStack Summit in Atlanta, GA
We are also launching an update to our OpenStack Jumpstart program, where we'll deliver an Orange Box and 2 full days of training to your team, and leave you the box while you experiment with OpenStack, MAAS, Juju, Hadoop, and more for 2 weeks.  Without disrupting your core network or production data center workloads, you can prototype your OpenStack experience within a private sandbox environment. You can experiment with various storage alternatives, practice scaling services, and destroy and rebuild the environment repeatedly. Safe. Risk free.

This is Cloud, for the Free Man.


Read more
Prakash Advani

If your usage is light, use LibreOffice and save on your licensing costs.

If the new data are true, Microsoft has a big problem.

According to a study done by SoftWatch, seven out of 10 employees don’t use Microsoft Office to any extent.

The three-month study involved 148,500 employees at 51 global firms. SoftWatch found that employees who did use the applications were mostly using them for viewing documents or very light editing.

Read More.

Read more

I hope that someone gets my message in a bottle

– Message in a Bottle, The Police

So, there was some minor confusion about the wording in the LibreOffice 4.2.4 release notes.

This needs some background first: compared with LibreOffice 4.1, LibreOffice 4.2 slightly modified the UNO API used to pop up a message box. This was properly announced in our LibreOffice 4.2 release notes many moons ago:

The following UNO interfaces and services were changed [...],

Luckily, LibreOffice extensions can specify a minimal version, so extensions using the new MessageBox API can explicitly request LibreOffice 4.2 or newer. This change in our sdk-examples shows how an extension can be updated to use the new API and explicitly require LibreOffice 4.2 or higher. All of this already happened when LibreOffice 4.2.0 was released and has nothing to do with the change in LibreOffice 4.2.4.
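
Conceptually, a minimal-version requirement is just a version comparison at installation time. The sketch below is not LibreOffice's actual code, only an illustration of the idea: an extension declaring LibreOffice-minimal-version "4.2" is accepted on 4.2 and newer and refused on anything older.

    # Illustration only, not LibreOffice's implementation: how a
    # LibreOffice-minimal-version declaration of "4.2" gates an extension.
    def version_tuple(version):
        return tuple(int(part) for part in version.split("."))

    def satisfies_minimal_version(installed, required):
        """True if the installed LibreOffice meets the extension's requirement."""
        return version_tuple(installed) >= version_tuple(required)

    print(satisfies_minimal_version("4.1.6", "4.2"))  # False: extension refused
    print(satisfies_minimal_version("4.2.4", "4.2"))  # True: new API is available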

So what was changed in LibreOffice 4.2.4? Well, in addition to the LibreOffice version, old extensions sometimes just ask for an “OpenOffice.org version”. Most LibreOffice versions answered that this version was “3.4”, so this old backwards-compatible check was not very helpful anyway. So in LibreOffice 4.2.4 this value was changed to “4.1”, which might make some old extensions aware of the incompatible API change. That’s all.

Note that:

So, the short answer to the question “what changed in LibreOffice 4.2.4?” is: nothing, if your extension uses LibreOffice-minimal-version as recommended.

Read more