Canonical Voices

Posts tagged with 'ubuntu'

Daniel Holbach

We are in the second week of the Snappy Playpen and it’s simply beautiful to see how new folks are coming in and collaborating on getting snaps done, improving existing ones, answering questions and working together. The team spirit is strong and we’re all learning loads.

Keep up the good work everyone!

Read more


Take your time, hurry up
The choice is yours, don’t be late
Take a rest as a friend
— Nirvana, Come As You Are

I have just updated the LibreOffice snap package. The size of the package available for download created some confusion. As LibreOffice 5.2 is still in beta, I built and packed it with full debug symbols to allow analysis of possible problems. Comparing this to the size of e.g. the default install from Ubuntu *.deb packages is misleading:

  • The Ubuntu default install misses LibreOffice Base and Java unless you explicitly install them
  • The Ubuntu default install misses debug symbols unless you install the package libreoffice-dbg too

As many people are just curious about running LibreOffice 5.2 without wanting to debug it right now, I replaced the snap package. The download and install instructions are still the same as noted here — but it is now 287MB instead of 1015MB (and it still contains Base, but no debug symbols).

The package file including full debug symbols — in case you are interested in that — has been renamed to libreoffice-debug.

(Note that if you downloaded the file while I moved files around, you might need to redo your download.)
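As a sketch of how such a download can be checked against the published checksum before installing (sha512sum is part of coreutils; the file names follow this post):

```shell
# Verify a downloaded snap against its published .sha512sum file;
# sha512sum -c exits non-zero on a mismatch, so this is safe to chain
# with && before the install step.
verify_download() {
    sha512sum -c "$1"
}

# e.g.: verify_download libreoffice_5.2.0.0.beta2_amd64.snap.sha512sum
```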

Read more

What’s been happening in your world?
What have you been up to?
— Arctic Monkeys, Snap out of it

So — here is what I have been up to:

LibreOffice 5.2.0 beta2 installed as a snap on Ubuntu 16.04

The upcoming LibreOffice 5.2 packaged as a nice new snap package. This:

  • is pretty much a vanilla build of LibreOffice 5.2 beta2, using snapcraft, which is making packaging quite easy
  • contains all the applications: Writer, Calc, Impress, Draw, Math, Base
  • installs easily on the released current LTS version of Ubuntu: 16.04
  • allows you to test and play with the upcoming LibreOffice version to your heart’s delight without having to switch to a development version of Ubuntu

So — how can you “test and play with the upcoming LibreOffice version to your heart’s delight” with this on Ubuntu 16.04? Like this:

sha512sum -c libreoffice_5.2.0.0.beta2_amd64.snap.sha512sum && sudo snap install --devmode libreoffice_5.2.0.0.beta2_amd64.snap

and there you have a version of LibreOffice 5.2 running — for example, you can prepare yourself for the upcoming LibreOffice Bug Hunting Session. And it’s even quite easy to remove again:

sudo snap remove libreoffice

This is one of the things that snap packages will make a lot easier: upgrading or downgrading versions of an application, having multiple versions installed in parallel, and much more. Watch out for more exciting news about this coming up!
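A hedged sketch of that lifecycle: the wrapper below is hypothetical (not part of snapd) and only echoes the commands by default, so the sequence can be previewed on a machine without snapd installed. Note that some subcommands, such as snap revert, may not have been available in every 2016 snapd release.

```shell
# Hypothetical dry-run wrapper: with DRY_RUN=1 (the default) it prints the
# snap command instead of executing it; set DRY_RUN=0 to really run it.
snap_do() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "snap $*"
    else
        sudo snap "$@"
    fi
}

snap_do install --devmode libreoffice_5.2.0.0.beta2_amd64.snap  # install
snap_do refresh libreoffice                                     # upgrade
snap_do revert libreoffice                                      # roll back
snap_do remove libreoffice                                      # remove
```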

Update: As this has been asked a few times: Yes, snap packages are available on Ubuntu. No, snap packages are not only available on Ubuntu. This text has more details.

Update 2: The original download included debug symbols and thus was quite big. The download is now 287MB. This post has all the details.

Read more

Historically, the “adt-run” command line has allowed multiple tests; as a consequence, arguments like --binary or --override-control were position dependent, which confused users a lot (#795274, #785068, LP #1453509). On the other hand I don’t know anyone or any CI system which actually makes use of the “multiple tests on a single command line” feature.

The command line also was a bit confusing in other ways, like the explicit --built-tree vs. --unbuilt-tree and the magic / vs. // suffixes, or option vs. positional arguments to specify tests.

The other long-standing confusion is the pervasive “adt” acronym, which is still from the very early times when “autopkgtest” was called “autodebtest” (this was changed one month after autodebtest’s inception, in 2006!).

Thus in some recent night/weekend hack sessions I’ve worked on a new command line interface and consistent naming. This is now available in autopkgtest 4.0 in Debian unstable and Ubuntu Yakkety. You can download and use the deb package on Debian jessie and Ubuntu ≥ 14.04 LTS as well. (I will provide official backports after the first bug-fix release, once this has seen some field testing.)

New “autopkgtest” command

The adt-run program is now superseded by autopkgtest:

  • It accepts exactly one tested source package, and gives a proper error if none or more than one (often unintended) is given. Binaries to be tested, --override-control, etc. can now be specified in any order, making the arguments position independent. So you can now do things like:
    autopkgtest *.dsc *.deb [...]

    Before, *.deb only applied to the following test.

  • The explicit --source, --click-source etc. options are gone; the type of tested source/binary packages, including built vs. unbuilt tree, is detected automatically. Tests are now specified only with positional arguments, without the need (or possibility) to explicitly specify their type. The one exception is --installed-click com.example.myapp, as such names are indistinguishable from apt source package names.
    # Old:
    adt-run --unbuilt-tree pkgs/foo-2 [...]
    # or equivalently:
    adt-run pkgs/foo-2// [...]
    # New:
    autopkgtest pkgs/foo-2
    # Old:
    adt-run --git-source [...]
    # New:
    autopkgtest [...]
  • The virtualization server is now separated with a double instead of a triple dash, as the former is standard Unix syntax.
  • It defaults to the current directory if that is a Debian source package. This makes the command line particularly simple for the common case of wanting to run tests in the package you are just changing:
    autopkgtest -- schroot sid

    Assuming the current directory is an unbuilt Debian package, this will build the package, and run the tests in ./debian/tests against the built binaries.

  • The virtualization server must be specified with its “short” name only, e.g. “ssh” instead of “adt-virt-ssh”. The servers also don’t get installed into $PATH any more, as it’s hardly useful to call them directly.

README.running-tests got updated to the new CLI, as usual you can also read the HTML online.

The old adt-run CLI is still available with unchanged behaviour, so it is safe to upgrade existing CI systems to that version.

Image build tools

All adt-build* tools got renamed to autopkgtest-build*, and got changed to build images prefixed with “autopkgtest” instead of “adt”. For example, adt-build-lxc ubuntu xenial now produces an autopkgtest-xenial container instead of adt-xenial.

In order to not break existing CI systems, the new autopkgtest package contains symlinks to the old adt-build* commands; when called through these, the tools also produce images with the old “adt-” prefix.

Environment variables in tests

Finally, the set of environment variables that autopkgtest exports for use in tests and image customization tools got renamed from ADT_* to AUTOPKGTEST_*.

As these are being used in existing tests and tools, autopkgtest also exports/checks those under their old ADT_* name. So tests can be converted gradually over time (this might take several years).
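For example, a test script can prefer the new name and fall back to the old one during the transition. A small sketch, using the scratch-directory variable AUTOPKGTEST_TMP (formerly ADT_TMP):

```shell
# Use the new variable if set, fall back to the old name, then to /tmp.
scratch="${AUTOPKGTEST_TMP:-${ADT_TMP:-/tmp}}"
echo "scratch dir: $scratch"
```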


As usual, if you find a bug or have a suggestion how to improve the CLI, please file a bug in Debian or in Launchpad. The new CLI is recent enough that we still have some liberty to change it.

Happy testing!

Read more
Dustin Kirkland

A few years ago, I wrote and released a fun little script that would carve up an Ubuntu Byobu terminal into a bunch of splits, running various random command line status utilities.

100% complete technical mumbo jumbo.  The goal was to turn your terminal into something that belongs in a Hollywood hacker film.

I am proud to see it included in this NBCNews piece about "Ransomware".  All of the screenshots demonstrating what a "hacker" is doing with a system are straight from Ubuntu, Byobu, and Hollywood!

Here are a few screenshots, and the video is embedded below...


Read more
Michael Hall

During the Ubuntu Online Summit last week, my colleague Daniel Holbach came up with what he called a “10 day challenge” to some of the engineering managers directing the convergence work in Ubuntu. The idea is simple: try and use only the Unity 8 desktop for 10 working days (two weeks). I thought this was a great way to really identify how close it is to being usable by most Ubuntu users, as well as to find the bugs that cause the most pain in making the switch. So on Friday of last week, with UOS over, I took up the challenge.

Below I will discuss all of the steps that I went through to get it working for my needs. They are not the “official” way of doing it (there isn’t an official way to do all this yet) and they won’t cover every usage scenario, just the ones I faced. If you want to try this challenge yourself, they will help you get started. If at any time you get stuck, you can find help in the #ubuntu-unity channel on Freenode, where the developers behind all of these components are very friendly and helpful.

Getting Unity 8

To get started you first need to be on the latest release of Ubuntu. I am using Ubuntu 16.04 (Xenial Xerus), which is the best release for testing Unity 8. You will also need the stable-phone-overlay PPA. Don’t let the name fool you, it’s not just for phones, but it is where you will find the very latest packages for Mir, Unity 8, Libertine and other components you will need. You can install it with this command:

sudo add-apt-repository ppa:ci-train-ppa-service/stable-phone-overlay

Then you will need to install the Unity 8 session package, so that you can select it from the login screen:

sudo apt install unity8-desktop-session

Note: The package above used to be unity8-desktop-session-mir but was renamed to just unity8-desktop-session.

When I did this there was a bug in the libhybris package that was causing Mir to try and use some Android stuff, which clearly isn’t available on my laptop. The fix wasn’t yet in the PPA, so I had to take the additional step of installing a fix from our continuous integration system (Note: originally the command below used silo 53, but I’ve been told it is now in silo 31). If you get a black screen when trying to start your Unity 8 session, you probably need this too.

sudo apt-get install phablet-tools phablet-tools-citrain
citrain host-upgrade 031

Note: None of the above paragraph is necessary anymore.

This was enough to get Unity 8 to load for me, but all my apps would crash within a half second of being launched. It turned out to be a problem with the cgroups manager, specifically the cgmanager service was disabled for me (I suspect this was leftover configurations from previous attempts at using Unity 8). After re-enabling it, I was able to log back into Unity 8 and start using apps!

sudo systemctl enable cgmanager

Essential Core Apps

The first thing you’ll notice is that you don’t have many apps available in Unity 8. I had probably more than most, having installed some Ubuntu SDK apps natively on my laptop already. If you haven’t installed the webbrowser-app already, you should. It’s in the Xenial archive and the PPA you added above, so just

sudo apt install webbrowser-app

But that will only get you so far. What you really need are a terminal and file manager. Fortunately those have been created as part of the Core Apps project, you just need to install them. Because the Ubuntu Store wasn’t working for me (see bottom of this post) I had to manually download and install them:

sudo click install --user mhall
sudo click install --user mhall

If you want to use these apps in Unity 7 as well, you have to modify their .desktop files located in ~/.local/share/applications/ and add the -x flag after aa-exec-click. This is because, by default, aa-exec-click prevents running these apps under X11, where they won’t have the safety of confinement that they get under Mir.
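A small sketch of that edit with sed; the function and the example file name below are illustrative, not an official tool, and it assumes the Exec line in the .desktop file invokes aa-exec-click:

```shell
# Insert the -x flag after aa-exec-click in a .desktop file so the app can
# also be launched under X11 (Unity 7). Edits the file in place.
add_x_flag() {
    sed -i 's/aa-exec-click/aa-exec-click -x/' "$1"
}

# e.g. (file name is an example):
# add_x_flag ~/.local/share/applications/com.ubuntu.terminal_terminal.desktop
```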

The file manager needed a bit of extra effort to get working. It contains many Samba libraries that allow it to access Windows network shares, but for some reason the app was looking for them in the wrong place. As a quick and dirty hack, I ended up copying whatever libraries it needed from /opt/ to /usr/lib/i386-linux-gnu/samba/. It’s worth the effort, though, because you need the file manager if you want to do things like upload files through the webbrowser.

Using SSH

IRC is a vital communication tool for my job, we all use it every day. In fact, I find it so important that I have a remote client that stays connected 24/7, which I connect to via ssh. Thanks to the Terminal core app, I have quick and easy access to that. But when I first tried to connect to my server, which uses public-key authentication (as they all should), my connection was refused. That is because the Unity 8 session doesn’t run the ssh-agent service on startup. You can start it manually from the terminal:

ssh-agent
This will output some shell commands to set up environment variables; copy those and paste them right back into your terminal to set them. Then you should be able to ssh as normal, and if your key needs a passphrase you will be prompted for it in the terminal rather than in a dialog like you get in Unity 7.
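Instead of copying and pasting the agent's output by hand, you can let the shell apply it directly. A sketch, assuming a POSIX shell with openssh-client installed:

```shell
# Start ssh-agent and evaluate its printed export commands in the current
# shell, so SSH_AUTH_SOCK and SSH_AGENT_PID are set for later ssh calls.
if command -v ssh-agent >/dev/null 2>&1; then
    eval "$(ssh-agent -s)"
    echo "agent running as pid $SSH_AGENT_PID"
    # then add your key, e.g.: ssh-add ~/.ssh/id_rsa
fi
```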

Getting traditional apps

Now that you’ve got some apps running natively on Mir, you probably want to try out support for all of your traditional desktop apps, as you’ve heard advertised. This is done by a project called Libertine, which creates an LXC container and XMir to keep those unconfined apps safely away from your new properly confined setup. The first thing you will need to do is install the libertine packages:

sudo apt-get install libertine libertine-scope

Once you have those, you will see a Libertine app in your Apps scope. This is the app that lets you manage your Libertine containers (yes, you can have more than one) and install apps into them. Creating a new container is simply a matter of pressing the “Install” button. You can give it a name or leave it blank to get the default “Xenial”.

Once your container is set up, you can install as many apps into it as you want, again using the Libertine container manager. You can even use it to search the archives if you don’t know the exact package name. It will also install any dependencies that a package needs into your Libertine container.

Now that you have your container set up and apps installed into it, you are ready to start trying them out. For now you have to access them from a separate scope, since the default Apps scope doesn’t look into Libertine containers. That is why you had to install the libertine-scope package above. You can find this scope by clicking on the Dash’s bottom edge indicator to open the Scopes manager, and selecting the Legacy Applications Scope. There you will see launchers for the apps you have installed.

Libertine uses a special container manager to launch apps. If it isn’t running, as was the case for me, your legacy app windows will remain black. To fix that, open up the terminal and manually start the manager:

initctl --session start libertine-lxc-manager

Theming traditional apps

By default the legacy apps don’t look very nice. They default to the most basic of themes that look like you’ve time-traveled back to the mid-1990s, and nobody wants to do that. The reason is that these apps (or rather, the toolkit they use) expect certain system settings to tell them what theme to use, but those settings aren’t actually a dependency of the application’s package. They are part of a default desktop install, but not part of the default Libertine image.

I found a way to fix this, at least for some apps, by installing the light-themes and ubuntu-settings packages into the Libertine container. Specifically, it should work for any Gtk3-based application, such as GEdit. It does not, however, work for apps that still use the Gtk2 toolkit, such as Geany. I have not dug deeper to try and figure out how to fix Gtk2 themes; if anybody has a suggestion, please leave it in the comments.

What works

It has been a couple of months since I last tried the Unity 8 session, back before I upgraded to Xenial, and at that time there wasn’t much working. I went into this challenge expecting it to be better, but not by much. I honestly didn’t expect to spend even a full day using it. So I was really quite surprised to find that, once I found the workarounds above, I was not only able to spend the full day in it, but I was able to do so quite easily.

Whenever you have a new DE (which Unity 8 effectively is) and the latest UI toolkit (Qt 5), you have to be concerned about performance and resource use, and given the bleeding-edge nature of Unity 8 on the desktop, I was expecting to sacrifice some CPU cycles, battery life and RAM. If anything, the opposite was the case. I get at least as many hours on my battery as I do with Unity 7, and I was using less than half the RAM I typically do.

Moreover, things that I was expecting to cause me problems surprisingly didn’t. I was able to use Google Hangouts for my video conferences, which I knew had just been enabled in the browser. I also fully expected suspend/resume to have trouble with Mir, given the years I spent fighting it in X11 in the past, but it worked nearly flawlessly (see below). The network indicator had all of my VPN configurations waiting to be used, and they worked perfectly. Even PulseAudio was working as well as it did in Unity 7, though this did introduce some problems (again, see below). It even has settings to adjust the mouse speed and disable the trackpad when I’m typing. Most importantly, nearly all of the keyboard shortcuts that have become subconscious to me in Unity 7 are working in Unity 8.

Best of all, I was able to write this blog post from Unity 8. That includes taking all of the screenshots and uploading them to WordPress, switching back and forth between my browser and my notes document to see what I had done over the last few days, and going to the terminal to verify the commands I mentioned above.

What doesn’t

Of course, it wasn’t all unicorns and rainbows; Unity 8 is still very bleeding edge as a desktop shell, and if you want to use it you need to be prepared for some pain. None of it has so far been bad enough to stop me, but your mileage may vary.

One of the first minor pain-points is the fact that middle-click doesn’t paste the active text highlight. I hadn’t realized how much I have become dependent on that until I didn’t have it. You also can’t copy/paste between a Mir and an XMir window, which makes legacy apps somewhat less useful, but that’s on the roadmap to be fixed.

Speaking of windows, Unity 8 is still limited to one per app. This is going to change, but it is the current state of things. This doesn’t matter so much for native apps, which were built under this restriction, and the terminal app having tabs was a saving grace here. But for legacy apps it presents a bigger issue, especially apps like GTG (Getting Things Gnome) where multi-window is a requirement.

Some power-management is missing too, such as dimming the screen after some amount of inactivity, or turning it off altogether. The session also will not lock when you suspend it, so don’t depend on this in a security-critical way (but really, if you’re running bleeding-edge desktops in security-critical environments, you have bigger problems).

I also had a minor problem with my USB headset. It’s actually a problem I have in Unity 7 too: since upgrading to Xenial, the volume and mute controls don’t automatically switch to the headset, even though the audio output and input do. I had a workaround for that in Unity 7: I could open the sound settings and manually change the device to the headset, at which point the controls work on it. But in Unity 8’s sound settings there is no such option, so my workaround isn’t available.

The biggest hurdle, from my perspective, was not being able to install apps from the store. This is due to something in the store scope, online accounts, or Ubuntu One, I haven’t figured out which yet. So to install anything, I had to get the .click package and do it manually. But asking around I seem to be the only one having this problem, so those of you who want to try this yourself may not have to worry about that.

The end?

No, not for me. I’m on day 3 of this 10 day challenge, and so far things are going well enough for me to continue. I have been posting regular small updates on Google+, and will keep doing so. If I have enough for a new blog post, I may write another one here, but for the most part keep an eye on my G+ feed. Add your own experiences there, and again join #ubuntu-unity if you get stuck or need help.

Read more
Dustin Kirkland

Below you can find the audio/video recording of my OpenStack Austin presentation, where I demonstrated Ubuntu OpenStack Mitaka, running on top of Ubuntu 16.04 LTS, entirely within LXD machine containers.  You can also download the PDF of the slides here.  And there are a number of other excellent talks here!


Read more
Dustin Kirkland

I'm delighted to share the slides from our joint IBM and Canonical webinar about Ubuntu on IBM POWER8 and LinuxOne servers.  You can download the PDF here, watch the recording here, or tab through the slides or watch the video embedded below.  Enjoy!


Read more
Daniel Holbach

Ubuntu 16.04 LTS is out of the door and we have already started work on the Yakkety Yak; now it’s time for the Ubuntu Online Summit. It’s all happening 3-5 May, 14:00-20:00 UTC. This is where we discuss upcoming features, get feedback and demo all the good work which happened recently.

If you want to join the event, just head to the registration page and check out the UOS 16.05 schedule afterwards. You can star (☆) sessions and mark them as important to you and thus plan your attendance for the event.

Now let’s take a look at the bits which are in one way or another related to Ubuntu Core at UOS:

  • Snappy Ubuntu 16 – what’s new
    16.04 has landed and with it came big changes in the world of snapd and friends. Some of them are still in the process of landing, so you’re in for more goodness coming down the pipe for Ubuntu 16.04 LTS.
  • The Snapcraft roadmap
    Publishing software through snaps is super easy and snapcraft is the tool to use for this. Let’s take a look at the roadmap together and see which exciting features are going to come up next.
  • Snappy interfaces
    Interfaces in Ubuntu Core allow snaps to communicate or share resources, so it’s important we figure out how interfaces work, which ones we’d like to implement next and which open questions there are.
  • Playpen – Snapping software together
    Some weeks ago the Community team set up a small branch in which we collaborated on snapping software. It was good fun, we worked on things together, learnt from each other and quickly worked out common issues. We’d like to extend the project and get more people involved. Let’s discuss the project and workflow together.
  • How to snap your software
    If you wanted to start snapping software (yours or somebody else’s) and wanted to see a presentation of snapcraft and a few demos, this is exactly the session you’ve been looking for.
  • Snappy docs – next steps
    Snappy and snapcraft docs are luckily being written by the developers as part of the development process, but we should take a look at the docs together again and see what we’re missing, no matter if it’s updates, more coherence, more examples or whatever else.
  • Demo: Snaps on the desktop
    Here’s the demo on how to get yourself set up as a user or developer of snaps on your regular Ubuntu desktop.

I’m looking forward to seeing you in all these sessions!

Read more
Daniel Holbach

Ubuntu 16.04 is out!

Ubuntu 16.04 – yet another LTS?

Ubuntu 16.04 LTS, aka the Xenial Xerus, has just been released. It’s incredible that it’s already the 24th Ubuntu release and the 6th LTS release. If you have been around for a while and need a blast from the past, check out this video:

Click here to view it on YouTube. It’s available in /usr/share/example-content on a default desktop install as well.

You would think that after such a long time releases become somewhat inflationary and less important, and while I’d very likely always say on release day “yes, this one is the best of all so far”, Ubuntu 16.04 is indeed very special to me.

Snappy snappy snappy

Among the many pieces of goodness to come your way is the snapd package. It’s installed by default on many flavours of Ubuntu, including Ubuntu Desktop, and is your snappy connection to myApps.

Snappy Ubuntu Core 2.0 landing just in time for the 16.04 LTS release only happened due to the great and very hard work of many teams and individuals. I also see it as the implementation of lots of feedback we have been getting from third-party app developers, ISVs and upstream projects over the years. In a nutshell, what all of them wanted was: a solid Ubuntu base, flexibility in handling their app and the relevant stack, independence from distro freezes, dead-simple packaging, bullet-proof upgrades and rollbacks, and an app store model like the one established with the rise of smartphones. Snappy Ubuntu Core is exactly that and more. What it also brings to Ubuntu is a clear isolation between apps and a universal trust model.

As most of you know, I’ve been trying to teach people how to do packaging for Ubuntu for years, and while the process has continued to improve and get easier, all in all it is still hard to get right. snapcraft makes this so much easier. It’s just beautiful. If you have been doing some packaging in the past, just take a look at some of the examples.
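To give a flavour of what snapcraft input looks like, here is a minimal hypothetical snapcraft.yaml in the 2.x syntax of the time; the package name, source URL and plugin choice are illustrative, not taken from the examples mentioned above:

```yaml
name: hello
version: "2.10"
summary: GNU hello, the classic "Hello, World" program
description: A snap of GNU hello built from the upstream release tarball.
confinement: devmode

parts:
  hello:
    plugin: autotools
    source:

apps:
  hello:
    command: bin/hello
```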

Landing a well-working and stable snapd with clear-cut and stable set of APIs was the most important aspect, especially considering that almost everyone will be basing their work on 16.04 LTS, which is going to be supported for five years. This includes being able to use snapcraft on the LTS.

Today you can build a snap, upload it to the store using snapcraft upload, have it automatically reviewed and published by the store, and desktop users can install it on their systems. This puts you in a position where you can easily share your software with millions of users, without having to wait for somebody to upload it to the distro for you, without having your users add yet another PPA, etc.

So, what’s still missing? Quite a few things actually. Because you have to bundle your dependencies, packages are still quite big. This will change as soon as the specifics of OS and library snaps are figured out. Apart from that many new interfaces will need to be added to make Ubuntu Core really useful and versatile. There are also still a few bugs which need figuring out.

If you generally like what you’re reading here, come and talk to us. Introduce yourselves, talk to us and we’ll figure out if anything you need is still missing.

If you’re curious you can also check out some blog posts written by people who worked on this relentlessly in the last weeks:

Thanks a lot everyone – I thoroughly enjoyed working with you on this and I’m looking forward to all the great things we are going to deliver together!

Bring your friends, bring your questions!

The Community team moved the weekly Ubuntu Community Q&A to be tomorrow, Friday 2016-04-22 15:00 UTC on as usual. If you have questions, tune in and bring your friends as well!

Read more
Dustin Kirkland

I'm thrilled to introduce Docker 1.10.3, available on every Ubuntu architecture, for Ubuntu 16.04 LTS, and announce the General Availability of Ubuntu Fan Networking!

That's Ubuntu Docker binaries and Ubuntu Docker images for:
  • armhf (rpi2, et al. IoT devices)
  • arm64 (Cavium, et al. servers)
  • i686 (does anyone seriously still run 32-bit intel servers?)
  • amd64 (most servers and clouds under the sun)
  • ppc64el (OpenPower and IBM POWER8 machine learning super servers)
  • s390x (IBM System Z LinuxOne super uptime mainframes)
That's Docker-Docker-Docker-Docker-Docker-Docker, from the smallest Raspberry Pi's to the biggest IBM mainframes in the world today!  Never more than one 'sudo apt install' command away.

Moreover, we now have Docker running inside of LXD!  Containers all the way down.  Application containers (e.g. Docker), inside of Machine containers (e.g. LXD), inside of Virtual Machines (e.g. KVM), inside of a public or private cloud (e.g. Azure, OpenStack), running on bare metal (take your pick).

Let's have a look at launching a Docker application container inside of a LXD machine container:

kirkland@x250:~⟫ lxc launch ubuntu-daily:x -p default -p docker
Creating magical-damion
Starting magical-damion
kirkland@x250:~⟫ lxc list | grep RUNNING
| magical-damion | RUNNING | (eth0) | | PERSISTENT | 0 |
kirkland@x250:~⟫ lxc exec magical-damion bash
root@magical-damion:~# apt update >/dev/null 2>&1 ; apt install -y >/dev/null 2>&1
root@magical-damion:~# docker run -it ubuntu bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
759d6771041e: Pull complete
8836b825667b: Pull complete
c2f5e51744e6: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:b4dbab2d8029edddfe494f42183de20b7e2e871a424ff16ffe7b15a31f102536
Status: Downloaded newer image for ubuntu:latest
root@0577bd7d5db1:/# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:ac:11:00:02
inet addr: Bcast: Mask:
inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1296 (1.2 KB) TX bytes:648 (648.0 B)

Oh, and let's talk about networking...  We're also pleased to announce the general availability of Ubuntu Fan networking -- specially designed to connect all of your Docker containers spread across your network.  Ubuntu's Fan networking feature is an easy way to make every Docker container on your local network easily addressable by every other Docker host and container on the same network.  It's high performance, super simple, utterly deterministic, and we've tested it on every major public cloud as well as OpenStack and our private networks.

Simply installing Ubuntu's Docker package will also install the ubuntu-fan package, which provides an interactive setup script, fanatic, should you choose to join the Fan.  Just run 'sudo fanatic' and answer the questions.  You can trivially revert your Fan networking setup with 'sudo fanatic deconfigure'.

kirkland@x250:~$ sudo fanatic 
Welcome to the fanatic fan networking wizard. This will help you set
up an example fan network and optionally configure docker and/or LXD to
use this network. See fanatic(1) for more details.
Configure fan underlay (hit return to accept, or specify alternative) []:
Configure fan overlay (hit return to accept, or specify alternative) []:
Create LXD networking for underlay: overlay: [Yn]: n
Create docker networking for underlay: overlay: [Yn]: Y
Test docker networking for underlay: overlay:
(NOTE: potentially triggers large image downloads) [Yn]: Y
local docker test: creating test container ...
test master: ping test ( ...
test slave: ping test ( ...
test master: ping test ... PASS
test master: short data test ( -> ...
test slave: ping test ... PASS
test slave: short data test ( -> ...
test master: short data ... PASS
test slave: short data ... PASS
test slave: long data test ( -> ...
test master: long data test ( -> ...
test master: long data ... PASS
test slave: long data ... PASS
local docker test: destroying test container ...
local docker test: test complete PASS (master=0 slave=0)
This host IP address:

I've run 'sudo fanatic' here on a couple of machines on my network -- x250 ( and masterbr (, and now I'm going to launch a Docker container on each of those two machines, obtain each IP address on the Fan (250.x.y.z), install iperf, and test the connectivity and bandwidth between each of them (on my gigabit home network).  You'll see that we'll get 900mbps+ of throughput:

kirkland@x250:~⟫ sudo docker run -it ubuntu bash
root@c22cf0d8e1f7:/# apt update >/dev/null 2>&1 ; apt install -y iperf >/dev/null 2>&1
root@c22cf0d8e1f7:/# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:fa:00:2d:00
inet addr: Bcast: Mask:
inet6 addr: fe80::42:faff:fe00:2d00/64 Scope:Link
RX packets:6423 errors:0 dropped:0 overruns:0 frame:0
TX packets:4120 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:22065202 (22.0 MB) TX bytes:227225 (227.2 KB)

root@c22cf0d8e1f7:/# iperf -c
multicast ttl failed: Invalid argument
Client connecting to, TCP port 5001
TCP window size: 45.0 KByte (default)
[ 3] local port 54274 connected with port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.05 GBytes 902 Mbits/sec

And the second machine:
kirkland@masterbr:~⟫ sudo docker run -it ubuntu bash
root@effc8fe2513d:/# apt update >/dev/null 2>&1 ; apt install -y iperf >/dev/null 2>&1
root@effc8fe2513d:/# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:fa:00:08:00
inet addr: Bcast: Mask:
inet6 addr: fe80::42:faff:fe00:800/64 Scope:Link
RX packets:7659 errors:0 dropped:0 overruns:0 frame:0
TX packets:3433 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:22131852 (22.1 MB) TX bytes:189875 (189.8 KB)

root@effc8fe2513d:/# iperf -s
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
[ 4] local port 5001 connected with port 54274
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.05 GBytes 899 Mbits/sec

Finally, let's have another long hard look at the image from the top of this post.  Download it in full resolution to study very carefully what's happening here, because it's pretty [redacted] amazing!

Here, we have a Byobu session, split into 6 panes (Shift-F2 5 times, Shift-F8 6 times).  In each pane, we have an SSH session to Ubuntu 16.04 LTS servers spread across 6 different architectures -- armhf, arm64, i686, amd64, ppc64el, and s390x.  I used the Shift-F9 key to simultaneously run the same commands in each and every window.  Here are the commands I ran:

lxc launch ubuntu-daily:x -p default -p docker
lxc list | grep RUNNING
uname -a
dpkg -l | grep
sudo docker images | grep -m1 ubuntu
sudo docker run -it ubuntu bash
apt update >/dev/null 2>&1 ; apt install -y net-tools >/dev/null 2>&1
ifconfig eth0

That's right.  We just launched Ubuntu LXD containers, as well as Docker containers against every Ubuntu 16.04 LTS architecture.  How's that for Ubuntu everywhere!?!

Ubuntu 16.04 LTS will be one hell of a release!


Read more
Dustin Kirkland

I happen to have a full mirror of the entire Ubuntu Xenial archive here on a local SSD, and I took the opportunity to run a few numbers...
  • 6: This is our 6th Ubuntu LTS
    • 6.06, 8.04, 10.04, 12.04, 14.04, 16.04
  • 7: With Ubuntu 16.04 LTS, we're supporting 7 CPU architectures
    • armhf, arm64, i386, amd64, powerpc, ppc64el, s390x
  • 25,671: Ubuntu 16.04 LTS is comprised of 25,671 source packages
    • main, universe, restricted, multiverse
  • 150,562+: Over 150,562 (and counting!) cloud instances of Xenial have launched to date
    • and we haven't even officially released yet!
  • 216,475: A complete archive of all binary .deb packages in Ubuntu 16.04 LTS consists of 216,475 debs.
    • 24,803 arch independent
    • 27,159 armhf
    • 26,845 arm64
    • 28,730 i386
    • 28,902 amd64
    • 27,061 powerpc
    • 26,837 ppc64el
    • 26,138 s390x
  • 1,426,792,926: A total line count of all source packages in Ubuntu 16.04 LTS using cloc yields 1,426,792,926 total lines of source code
  • 250,478,341,568: A complete archive of all debs, across all architectures, in Ubuntu 16.04 LTS requires 250GB of disk space
Yes, that's 1.4 billion lines of source code comprising the entire Ubuntu 16.04 LTS archive.  What an amazing achievement of open source development!
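As a quick sanity check, the eight per-architecture counts above really do add up to the stated total:

```shell
# Sum the per-architecture binary deb counts from the list above;
# the result should match the 216,475 total debs.
total=0
for n in 24803 27159 26845 28730 28902 27061 26837 26138; do
  total=$((total + n))
done
echo "$total debs"
```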

Perhaps my fellow nerds here might be interested in a breakdown of all 1.4 billion lines across 25K source packages, and throughout 176 different programming languages, as measured by Al Danial's cloc utility.  Interesting data!

You can see the full list here.  What further insight can you glean?


Read more
Dustin Kirkland

On July 7, 2010, I received the above email.  In hindsight, this note effectively changed the landscape of cloud computing forever.  I was one of 3 Canonical employees in attendance (Nick Barcet, Neil Levine) and among a number of former colleagues (Thierry Carrez, Soren Hansen, Rick Clark) at the first OpenStack Design Summit at the Omni hotel in Austin, Texas, in July of 2010.

These are the only pictures I snapped with my phone (metadata says it was an HTC Hero) of the event, which, almost unbelievably, fit entirely within a single conference room :-)

The "fishbowl" round table discussion format was modeled after Ubuntu Developer Summits.

It was so much fun to see so many unfamiliar, non-Ubuntu people using the fishbowl discussion format.

Also borrowed from Ubuntu Developer Summits was the collaborative, community-sourced note taking in Etherpad-Lite.

Breakfast, in the beautiful Omni lobby.

Lots of natural light, but thankfully, air conditioned.  By the way, does anyone have pictures from the 120°F Whole Foods roof top event?

My, my, my, how far we've come in 6 short years!

This month's OpenStack Summit returns to Austin, Texas, and fills the entire Austin Convention Center, and overflows into at least two nearby hotels, with 5,000+ OpenStack developers, users, and enthusiasts!

In fact, if you're reading this post on, you're being served by Wordpress and MySQL hosted on a production Ubuntu OpenStack at Canonical.

Welcome back home, OpenStack!


Read more
Dustin Kirkland

As announced last week, Microsoft and Canonical have worked together to bring Ubuntu's userspace natively into Windows 10.

As of today, Windows 10 Insiders can now take Ubuntu on Windows for a test drive!  Here's how...

1) You need to have a system running today's 64-bit build of Windows 10 (Build 14316).

2) To do so, you may need to enroll into the Windows Insider program here,

3) You need to notify your Windows desktop that you're a Windows Insider, under "System Settings --> Advanced Windows Update options"

4) You need to set your update ambition to the far right, also known as "the fast ring".

5) You need to enable "developer mode", as this new feature is very pointedly directed specifically at developers.

6) You need to check for updates, apply all updates, and restart.

7) You need to turn on the new Windows feature, "Windows Subsystem for Linux (Beta)".  Note (again) that you need a 64-bit version of Windows!  Without that, you won't see the new option.

8) You need to reboot again.  (Windows sure has a fetish for rebooting!)

9) You press the start button and type "bash".

10) The first time you run "bash.exe", you'll accept the terms of service, download Ubuntu, and then you're off and running!

If you screw something up, and you want to start over, simply open a Windows command shell, and run: lxrun /uninstall /full and then just run bash again.

For bonus points, you might also like to enable the Ubuntu monospace font in your console.  Here's how!

a) Download the Ubuntu monospace font, from

b) Install the Ubuntu monospace font, by opening the zip file you downloaded, finding UbuntuMono-R.ttf, double clicking on it, and then clicking Install.

c) Enable the Ubuntu monospace font for the command console in the Windows registry.  Open regedit and find this key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Console\TrueTypeFont and add a new string value name "000" with value data "Ubuntu Mono"
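Equivalently, the same value can be imported as a .reg file instead of clicking through regedit (a sketch; double-check the key path against your build before importing):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Console\TrueTypeFont]
"000"="Ubuntu Mono"
```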

d) Edit your command console preferences to enable the Ubuntu monospace font.


Read more
Dustin Kirkland

This makes me so incredibly happy!

Here's how...

First, start with a fully up-to-date Ubuntu 16.04 LTS desktop.

sudo apt update
sudo apt dist-upgrade -y

Then, install dconf-editor.

sudo apt install -y dconf-editor

Launch dconf-editor and find the "launcher" key and change it to "bottom".
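If you prefer the command line, the same change can be made in one shot with gsettings (I'm assuming Unity 7's schema name here; you can verify it first with gsettings list-keys):

```shell
# One-line equivalent of the dconf-editor change (schema name assumed)
gsettings set com.canonical.unity.launcher launcher-position Bottom
```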


For good measure, I triggered a reboot, to make sure my changes stuck.  And voila!  Beauty!


Read more
Michael Hall

Somehow I missed the fact that I never wrote the Community Donations report for Q3 2015. I only realized it because it’s time for me to start working on Q4. Sorry for the oversight, but that report is now published.

The next report should be out soon, in the meantime you can look at all of the past reports to see the great things we’ve been able to do with and for the Ubuntu community through this program. Everybody who has received these funds has used them to contribute to the project in one way or another, and we appreciate all of their work.

As you may notice, we’ve been regularly paying out more than we’ve been getting in donations. While we’ve had a carry-over balance ever since we started this program, that balance is running down. If you like the things we’ve been able to support with this program, please consider sending it a contribution and helping us spread the word about it.


Read more
Dustin Kirkland

Update: Here's how to get started using Ubuntu on Windows

See also Scott Hanselman's blog here
I'm in San Francisco this week, attending Microsoft's Build developer conference, as a sponsored guest of Microsoft.

That's perhaps a bit odd for me, as I hadn't used Windows in nearly 16 years.  But that changed a few months ago, as I embarked on a super secret (and totally mind boggling!) project between Microsoft and Canonical, as unveiled today in a demo during Kevin Gallo's opening keynote of the Build conference....

An Ubuntu user space and bash shell, running natively in a Windows 10 cmd.exe console!

Did you get that?!?  Don't worry, it took me a few laps around that track, before I fully comprehended it when I first heard such crazy talk a few months ago :-)

Here, let's break it down slowly...
  1. Windows 10 users
  2. Can open the Windows Start menu
  3. And type "bash" [enter]
  4. Which opens a cmd.exe console
  5. Running Ubuntu's /bin/bash
  6. With full access to all of Ubuntu user space
  7. Yes, that means apt, ssh, rsync, find, grep, awk, sed, sort, xargs, md5sum, gpg, curl, wget, apache, mysql, python, perl, ruby, php, gcc, tar, vim, emacs, diff, patch...
  8. And most of the tens of thousands binary packages available in the Ubuntu archives!
"Right, so just Ubuntu running in a virtual machine?"  Nope!  This isn't a virtual machine at all.  There's no Linux kernel booting in a VM under a hypervisor.  It's just the Ubuntu user space.

"Ah, okay, so this is Ubuntu in a container then?"  Nope!  This isn't a container either.  It's native Ubuntu binaries running directly in Windows.

"Hum, well it's like cygwin perhaps?"  Nope!  Cygwin includes open source utilities recompiled from source to run natively in Windows.  Here, we're talking about bit-for-bit, checksum-for-checksum Ubuntu ELF binaries running directly in Windows.

[long pause]

"So maybe something like a Linux emulator?"  Now you're getting warmer!  A team of sharp developers at Microsoft has been hard at work adapting some Microsoft research technology to basically perform real-time translation of Linux syscalls into Windows OS syscalls.  Linux geeks can think of it as sort of the inverse of "wine" -- Ubuntu binaries running natively in Windows.  Microsoft calls it their "Windows Subsystem for Linux".  (No, it's not open source at this time.)

Oh, and it's totally shit hot!  The sysbench utility is showing nearly equivalent cpu, memory, and io performance.

So as part of the engineering work, I needed to wrap the stock Ubuntu root filesystem into a Windows application package (.appx) file for suitable upload to the Windows Store.  That required me to use Microsoft Visual Studio to clone a sample application, edit a few dozen XML files, create a bunch of icon .png's of various sizes, and so on.

Not being a Windows developer, I struggled and fought with Visual Studio on this Windows desktop for a few hours, until I was about ready to smash my coffee mug through the damn screen!

Instead, I pressed the Windows key, typed "bash", hit enter.  Then I found the sample application directory in /mnt/c/Users/Kirkland/Downloads, and copied it using "cp -a".  I used find | xargs | rename to update a bunch of filenames.  And a quick grep | xargs | sed to comprehensively search and replace s/SampleApp/UbuntuOnWindows/. And Ubuntu's convert utility quickly resized a bunch of icons.   Then I let Visual Studio do its thing, compiling the package and uploading to the Windows Store.  Voila!
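The grep | xargs | sed and rename steps look roughly like this (a self-contained sketch run on throwaway files, not the exact commands I ran, and with a pure-shell loop standing in for the rename utility):

```shell
# Recreate the bulk search-and-replace on a scratch directory.
dir=$(mktemp -d) && cd "$dir"
echo 'Hello from SampleApp' > SampleApp.xml
# Replace inside file contents...
grep -rl 'SampleApp' . | xargs sed -i 's/SampleApp/UbuntuOnWindows/g'
# ...then in the file names themselves
for f in *SampleApp*; do
  mv "$f" "$(echo "$f" | sed 's/SampleApp/UbuntuOnWindows/')"
done
cat UbuntuOnWindows.xml
```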

Did you catch that bit about /mnt/c...  That's pretty cool...  All of your Windows drives, like C: are mounted read/write directly under /mnt.  And, vice versa, you can see all of your Ubuntu filesystem from Windows Explorer in C:\Users\Kirkland\AppData\Local\Lxss\rootfs\

Meanwhile, I also needed to ssh over to some of my other Ubuntu systems to get some work done.  No need for Putty!  Just ssh directly from within the Ubuntu shell.

Of course apt install and upgrade as expected.

Is everything working exactly as expected?  No, not quite.  Not yet, at least.  The vast majority of the LTP passes and works well.  But there are some imperfections still, especially around ttys and the vt100.  My beloved byobu, screen, and tmux don't quite work yet, but they're getting close!

And while the current image is Ubuntu 14.04 LTS, we're expecting to see Ubuntu 16.04 LTS replacing Ubuntu 14.04 in the Windows Store very, very soon.

Finally, I imagine some of you -- long time Windows and Ubuntu users alike -- are still wondering, perhaps, "Why?!?"  Having dedicated most of the past two decades of my career to free and open source software, this is an almost surreal endorsement by Microsoft on the importance of open source to developers.  Indeed, what a fantastic opportunity to bridge the world of free and open source technology directly into any Windows 10 desktop on the planet.  And what a wonderful vector into learning and using more Ubuntu and Linux in public clouds like Azure.  From Microsoft's perspective, a variety of surveys and user studies have pointed to demand for bash and Linux tools -- very specifically, Ubuntu -- to be available in Windows, and without resource-heavy full virtualization.

So if you're a Windows Insider and have access to the early beta of this technology, we certainly hope you'll try it out!  Let us know what you think!

If you want to hear more, hopefully you'll tune into the Channel 9 Panel discussion at 16:30 PDT on March 30, 2016.


Read more
Dustin Kirkland

Still have questions about Ubuntu on Windows?
Watch this Channel 9 session, recorded live at Build this week, hosted by Scott Hanselman, with questions answered by Windows kernel developers Russ Alexander, Ben Hillis, and myself representing Canonical and Ubuntu!

For fun, watch the crowd develop in the background over the 30 minute session!

And here's another recorded session with a demo by Rich Turner and Russ Alexander.  The real light bulb goes off at about 8:01.


Read more
Michael Hall

As most of you know by now, Ubuntu 16.04 will be dropping the old Ubuntu Software Center in favor of the newer Gnome Software as the graphical front-end to both the Ubuntu archives and 3rd party application store.

Gnome Software

Gnome Software provides a lot of the same enhancements over simple package managers that USC did, and it does this using a new metadata format standard called AppStream. While much of the needed AppStream data can be extracted from the existing packages in the archives, sometimes that’s not sufficient, and that’s when we need people to help fill the gaps.

It turns out that the bulk of the missing or incorrect data is caused by the application icons being used by app packages. While most apps already have an icon, it was never strictly enforced before, and the size and format allowed by the desktop specs was more lenient than what’s needed now.  These lower resolution icons might have been fine for a menu item, but they don’t work very well for a nice, beautiful App Store interface like Gnome Software. And that’s where you can help!

Don’t worry, contributing icons isn’t hard, and it doesn’t require any knowledge of programming or packaging to do. Best of all, you’ll not only be helping Ubuntu, but you’ll also be contributing to any other distro that uses the AppStream standard too! In the steps below I will walk you through the process of finding an app in need, getting the correct icon for it, and contributing it to the upstream project and Ubuntu.

1) Pick an App

Because the AppStream data is being automatically extracted from the contents of existing packages, we are able to tell which apps are in need of new icons, and we’ve generated a list of them, sorted by popularity (based on PopCon stats) so you can prioritize your contributions to where they will help the most users. To start working on one, first click the “Create” link to file a new bug report against the package in Ubuntu. Then replace that link in the wiki with a link to your new bug, and put your name in the “Claimed” column so that others know you’ve already started work on it.

Note that a package can contain multiple .desktop files, each of which has its own icon, and your bug report will be specific to just that one metadata file. You will also need to be a member of the ~ubuntu-etherpad team (or a sub-team like ~ubuntumembers) in order to edit the wiki; you will be asked to verify that membership as part of the login process with Ubuntu SSO.

2) Verify that an AppStream icon is needed

While the extraction process is capable of identifying what packages have a missing or unsupported image in them, it’s not always smart enough to know which packages should have this AppStream data in the first place. So before you get started working on icons, it’s best to first make sure that the metadata file you picked should be part of the AppStream index in the first place.

Because AppStream was designed to be application-centric, the metadata extraction process only looks at those with Type=Application in their .desktop file. It will also ignore any .desktop files with NoDisplay=True in them. If you find a file in the list that shouldn’t be indexed by AppStream, chances are one or both of these values are set incorrectly. In that case you should change your bug description to state that, rather than attaching an icon to it.
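That filter is easy to approximate in shell. This sketch fabricates two sample .desktop files (names and contents are purely illustrative) and shows which one the extractor would consider:

```shell
# Approximate the AppStream extractor's filter: keep Type=Application
# entries that are not marked NoDisplay=true.
dir=$(mktemp -d)
printf 'Type=Application\nName=Foo\n' > "$dir/foo.desktop"
printf 'Type=Application\nNoDisplay=true\nName=Bar\n' > "$dir/bar.desktop"
for f in "$dir"/*.desktop; do
  if grep -q '^Type=Application' "$f" && ! grep -qi '^NoDisplay=true' "$f"; then
    echo "indexed: $(basename "$f")"
  fi
done
```

Only foo.desktop survives the filter; bar.desktop is skipped for its NoDisplay value.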

3) Contact Upstream

Since there is nothing Ubuntu-specific about AppStream data or icons, you really should be sending your contribution upstream to the originating project. Not only is this best for Ubuntu (carrying patches wastes resources), but it’s just the right thing to do in the open source community. So after you’ve chosen an app to work on and verified that it does in fact need a new icon for AppStream, the very next thing you should do is start talking to the upstream project developers.

Start by letting them know that you want to contribute to their project so that it integrates better with AppStream enabled stores (you can reference these Guidelines if they’re not familiar with it), and by opening a similar bug report in their bug tracker if they don’t have one already. Finally, be sure to include a link to that upstream bug report in the Ubuntu bug you opened previously so that the Ubuntu developers know the work is also going into upstream too (your contribution might be rejected otherwise).

4) Find or Create an Icon

Chances are the upstream developers already have an icon that meets the AppStream requirements, so ask them about it before trying to find one on your own. If not, look for existing artwork assets that can be used as a logo, and remember that it needs to be at least 64×64 pixels (this is where SVGs are ideal, as they can be exported to any size). Whatever you use, make sure that it matches the application’s current branding, we’re not out to create a new logo for them after all. If you do create a new image file, you will need to make it available under the CC-BY-SA license.

While AppStream only requires a 64×64 pixel image, many desktops (including Unity) will benefit from having even higher resolution icons, and it’s always easier to scale them down than up. So if you have the option, try to provide a 256×256 icon image (or again, just an SVG).
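If you want to double-check a PNG's dimensions without any image tools installed, they can be read straight out of the file's IHDR header (width and height are big-endian 32-bit integers at byte offsets 16 and 20). A sketch, which fabricates just the first 24 header bytes of a 256×256 PNG for illustration rather than using a real icon:

```shell
# Read a PNG's width/height from its IHDR header and check them against
# AppStream's 64x64 minimum. The fabricated bytes below encode 256x256.
f=$(mktemp)
printf '\211PNG\r\n\032\n\0\0\0\015IHDR\0\0\001\0\0\0\001\0' > "$f"
w=$(od -An -tu1 -j16 -N4 "$f" | awk '{print $1*16777216+$2*65536+$3*256+$4}')
h=$(od -An -tu1 -j20 -N4 "$f" | awk '{print $1*16777216+$2*65536+$3*256+$4}')
echo "${w}x${h}"
[ "$w" -ge 64 ] && [ "$h" -ge 64 ] && echo "big enough for AppStream"
```

In practice the file(1) or identify utilities will tell you the same thing more conveniently; this just shows where the numbers live.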

5) Submit your icon

Now that you’ve found (or created) an appropriate icon, it’s time to get it into both the upstream project and Ubuntu. Because each upstream will be different in how they want you to do that, you will need to ask them for guidance (and possibly assistance) in order to do that. Just make sure that you update the upstream bug report with your work, so that the Ubuntu developers can see that it’s been done.

Ubuntu 16.04 has already synced with Debian, so it’s too late for these changes in the upstream project to make their way into this release. In order to get them into 16.04, the Ubuntu packages will have to carry a patch until the changes that land upstream have time to make their way into the Ubuntu archives. That’s why it’s so important to get your contribution accepted into the upstream project first: the Ubuntu developers want to know that the patches to their packages will eventually be replaced by the same change from upstream.

To submit your image to Ubuntu, all you need to do is attach the image file to the bug report you created way back in step #1.

Then, subscribe the “ubuntu-sponsors” team to the bug; these are the Ubuntu developers who will review and apply your icon to the target package, and get it into the Ubuntu archives.

6) Talk about it!

Congratulations, you’ve just made a contribution that is likely to affect millions of people and benefit the entire open source community! That’s something to celebrate, so take to Twitter, Google+, Facebook or your own blog and talk about it. Not only is it good to see people doing these kinds of contributions, it’s also highly motivating to others who might not otherwise get involved. So share your experience, help others who want to do the same, and if you enjoyed it feel free to grab another app from the list and do it again.

Read more
Nicholas Skaggs


The joys of Spring (or Fall for our friends in the Southern Hemisphere) are now upon us. The change of seasons spurs us to implement our own changes, to start anew. It's a time to reflect on the past, appreciate it, and then do a little Spring cleaning.

As I write this post to you, I'm doing my own reflecting. It's been quite a journey we've undertaken within the QA community. It's not always been easy, but I think we are poised for even greater success with Xenial than with the Trusty and Precise LTSes. We have continued ramping up our quality efforts to test new platforms, such as the phone and IoT devices, while also implementing automated testing via things like autopkgtest and autopilot. Nevertheless, the desktop images have continued to release like clockwork. We're testing more things, more often, while still managing to raise our quality bar.

I want to thank all of the volunteers who've helped make each of those releases a reality. Oftentimes quality can be a background job, with thank you's going unsaid, while complaints are easy to find. Truly, it's been wonderful learning and hacking on quality efforts with you. So thank you!

So if this post sounds a bit like a farewell, that's because it is. At least in a way. Moving forward, I'll be transitioning to working on a new challenge. Don't worry, I'm keeping my QA hat on, and staying firmly within the realm of ubuntu! However, the time has come to try my hand at a different side of ubuntu. That's right, it's time to head to the last frontier, juju!

I'll be working on improving the quality story for juju, but I believe juju has real opportunities to enable the testing story within ubuntu too. I'm looking forward to the new challenges, and sharing best practices. We're all working on ubuntu at its heart, no matter our focus.

Moving forward, I'll still be around in my usual haunts. You'll still be able to poke me on IRC, or send me a mail, and I'm certainly still going to be watching what happens within quality with interest. That said, you are much more likely to find me discussing juju, servers and charms in #juju.

As with anything, please feel free to contact me directly if you have any concerns or questions. I plan to wind down my involvement during the next few weeks. I'll be handing off any lingering project roles, and stepping down gracefully. Ubuntu 'Y' will begin anew, with fresh challenges and fresh opportunities. I know there are folks waiting to tackle them!

Read more