Canonical Voices

Posts tagged with 'ubuntu'

Michael Hall

A couple of weeks ago I had the opportunity to attend the thirteenth Southern California Linux Expo, more commonly known as SCaLE 13x. It was my first time back in five years, since I attended 9x, and my first time as a speaker. I had a blast at SCaLE, and a wonderful time with UbuCon. If you couldn’t make it this year, it should definitely be on your list of shows to attend in 2016.


Thanks to the efforts of Richard Gaskin, we had a room all day Friday to hold an UbuCon. For those of you who haven’t attended an UbuCon before, it’s basically a series of presentations by members of the Ubuntu community on how to use it, contribute to it, or become involved in the community around it. SCaLE was one of the pioneering host conferences for these, and this year they provided a double-sized room for us to use, which we still filled to capacity.

I was given the chance to give not one but two talks during UbuCon, one on community and one on the Ubuntu phone. We also had presentations from my former manager and good friend Jono Bacon, current coworkers Jorge Castro and Marco Ceppi, and inspirational community members Philip Ballew and Richard Gaskin.

I’d like to thank Richard for putting this all together, and for taking such good care of those of us speaking (he made sure we always had mints and water). UbuCon was a huge success because of the amount of time and work he put into it. Thanks also to Canonical for providing us, on rather short notice, a box full of Ubuntu t-shirts to give away. And of course thanks to the SCaLE staff and organizers for providing us the room and all of the A/V equipment in it to use.

The room was recorded all day, so each of these sessions can now be watched on YouTube. My own talks are at 4:00:00 and 5:00:00.

Ubuntu Booth

In addition to UbuCon, we also had an Ubuntu booth in the SCaLE expo hall, which was registered and operated by members of the Ubuntu California LoCo team. These folks were amazing: they ran the booth all day over all three days, managed the whole setup and tear-down, and did an excellent job talking to everybody who came by, explaining everything from Ubuntu’s cloud offerings to desktops, and even showing off Ubuntu phones.

Our booth wouldn’t have happened without the efforts of Luis Caballero, Matt Mootz, Jose Antonio Rey, Nathan Haines, Ian Santopietro, George Mulak, and Daniel Gimpelevich, so thank you all so much! We also had great support from Carl Richell at System76, who let us borrow 3 of their incredible laptops running Ubuntu to show off our desktop; from Canonical, who loaned us 2 Nexus 4 phones running Ubuntu as well as one of the Orange Box cloud demonstration boxes; and from Michael Newsham at TierraTek, who sent us a fanless PC and NAS, which we used to display a constantly repeating video (from Canonical’s marketing team) showing the Ubuntu phone’s Scopes on a television monitor provided to us by Eäär Oden at Video Resources. Oh, and of course Stuart Langridge, who gave up his personal, first-edition Bq Ubuntu phone for the entire weekend so we could show it off at the booth.

Like Ubuntu itself, this booth was not the product of just one organization’s work, but the combination of efforts and resources from many different, but connected, individuals and groups. We are what we are, because of who we all are. So thank you all for being a part of making this booth amazing.

Read more

“The problem is all inside your head” she said to me
“The answer is easy if you take it logically”
— Paul Simon, “50 Ways to Leave Your Lover”

So recently I tweaked around with these newfangled C++11 initializer lists and created an EasyHack to use them to initialize property sequences in a readable way. This caused a short exchange on the LibreOffice mailing list, which I assumed had its part in motivating Stephan’s interesting post “On filling a vector”. For all the points being made (also in the quick follow-up on IRC), I wondered how much the theoretical “can use a move constructor” discussion etc. really meant when the C++ is translated to e.g. GENERIC, then GIMPLE, then amd64 assembler, then to the internal RISC instructions of the CPU — with multiple levels of caching in addition.

So I quickly wrote the following (thanks so much for C++11 having the nice std::chrono now).


// data.hxx
#include <vector>
struct Data {
    Data();
    Data(int a);
    int m_a;
};
void DoSomething(std::vector<Data>&);


#include "data.hxx"
// noop in different compilation unit to prevent optimizing out what we want to measure
void DoSomething(std::vector<Data>&) {};
Data::Data() : m_a(4711) {};
Data::Data(int a) : m_a(a+4711) {};


#include "data.hxx"
#include <iostream>
#include <vector>
#include <chrono>
#include <functional>

void A1(long count) {
    while(--count) {
        std::vector<Data> vec { Data(), Data(), Data() };
        DoSomething(vec);
    }
}

void A2(long count) {
    while(--count) {
        std::vector<Data> vec { {}, {}, {} };
        DoSomething(vec);
    }
}

void A3(long count) {
    while(--count) {
        std::vector<Data> vec { 0, 0, 0 };
        DoSomething(vec);
    }
}

// NB: the B*/C* loop bodies were truncated in this copy of the post; the
// versions below are reconstructions following the A-variants' pattern
// (three elements, then hand the vector to DoSomething): B* fill the vector
// element by element, C* reserve() first.
void B1(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.push_back(Data());
        vec.push_back(Data());
        vec.push_back(Data());
        DoSomething(vec);
    }
}

void B2(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.push_back({});
        vec.push_back({});
        vec.push_back({});
        DoSomething(vec);
    }
}

void B3(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.emplace_back(0);
        vec.emplace_back(0);
        vec.emplace_back(0);
        DoSomething(vec);
    }
}

void C1(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.push_back(Data());
        vec.push_back(Data());
        vec.push_back(Data());
        DoSomething(vec);
    }
}

void C3(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.emplace_back(0);
        vec.emplace_back(0);
        vec.emplace_back(0);
        DoSomething(vec);
    }
}

double benchmark(const char* name, std::function<void (long)> testfunc, const long count) {
    const auto start = std::chrono::system_clock::now();
    testfunc(count);
    const auto end = std::chrono::system_clock::now();
    const std::chrono::duration<double> delta = end-start;
    std::cout << count << " " << name << " iterations took " << delta.count() << " seconds." << std::endl;
    return delta.count();
}

int main(int, char**) {
    long count = 10000000;
    while(benchmark("A1", &A1, count) < 60l)
        count <<= 1;
    std::cout << "Going with " << count << " iterations." << std::endl;
    benchmark("A1", &A1, count);
    benchmark("A2", &A2, count);
    benchmark("A3", &A3, count);
    benchmark("B1", &B1, count);
    benchmark("B2", &B2, count);
    benchmark("B3", &B3, count);
    benchmark("C1", &C1, count);
    benchmark("C3", &C3, count);
    return 0;
}


main: main.o data.o
    g++ -o $@ $^

%.o: %.cxx data.hxx
    g++ $(CFLAGS) -std=c++11 -o $@ -c $<

Note the object here is small and trivial to copy, as one would expect from objects passed around as values (expensive-to-copy objects can mostly be passed around with a std::shared_ptr instead). So what did this measure? Here are the results:

Time for 1280000000 iterations on an Intel i5-4200U@1.6GHz (-march=core-avx2) compiled with gcc 4.8.3 without inline constructors:

implementation / CFLAGS      -Os        -O2        -O3        -O3 -march=…
A1                           89.1 s     79.0 s     78.9 s     78.9 s
A2                           89.1 s     78.1 s     78.0 s     80.5 s
A3                           90.0 s     78.9 s     78.8 s     79.3 s
B1                          103.6 s     97.8 s     79.0 s     78.0 s
B2                           99.4 s     95.6 s     78.5 s     78.0 s
B3                          107.4 s     90.9 s     79.7 s     79.9 s
C1                           99.4 s     94.4 s     78.0 s     77.9 s
C3                           98.9 s    100.7 s     78.1 s     81.7 s

creating a three element vector without inlined constructors
And, for comparison, here are the results if one allows the constructors to be inlined.
Time for 1280000000 iterations on an Intel i5-4200U@1.6GHz (-march=core-avx2) compiled with gcc 4.8.3 with inline constructors:

implementation / CFLAGS      -Os        -O2        -O3        -O3 -march=…
A1                           85.6 s     74.7 s     74.6 s     74.6 s
A2                           85.3 s     74.6 s     73.7 s     74.5 s
A3                           91.6 s     73.8 s     74.4 s     74.5 s
B1                           93.4 s     90.2 s     72.8 s     72.0 s
B2                           93.7 s     88.3 s     72.0 s     73.7 s
B3                           97.6 s     88.3 s     72.8 s     72.0 s
C1                           93.4 s     88.3 s     72.0 s     73.7 s
C3                           96.2 s     88.3 s     71.9 s     73.7 s

creating a three element vector with inlined constructors
Some observations on these measurements:

  • -march=... is at best neutral: the measured times do not change much in general; it slightly improves performance in only five out of 16 cases, and the two cases with the most significant change in performance (over 3%) actually hurt it. So for the rest of this post, -march=... will be ignored. Sorry, gentooers. ;)
  • There is no silver bullet with regard to the different implementations: A1, A2 and A3 are the fastest implementations when not inlining constructors and using -Os or -O2 (the quickest A* is ~10% faster than the quickest B*/C*). However, when inlining constructors and using -O3, the same implementations are the slowest (by 2.4%).
  • Most common release builds are still done with -O2 these days. For those, using initializer lists (A1/A2/A3) seems to have a significant edge over the alternatives, whether constructors are inlined or not. This is in contrast to the conclusions drawn from “constructor counting”, which assumed these to be slow because of the additional calls needed.
  • The numbers printed in bold are either the quickest implementation in a build scenario or one that is within 1.5% of the quickest implementation. A1 and A2 are sharing the title here by being in that group five times each.
  • With constructors inlined, everything in the loop except DoSomething() could be inlined. It seems to me that the compiler could — at least in theory — figure out that it is asked the same thing in all cases: namely, reserve space for three ints on the heap, fill them each with 4711, make the ::std::vector<int> data structure on the stack reflect that, and then hand that to the DoSomething() function that it knows nothing about. If the compiler figured that out, all implementations would take the same time. This happens neither on -O2 (quickest to slowest differ by ~18%) nor on -O3 (~3.6%).

One common mantra in applications development is “trust the compiler to optimize”. The above observations show a few cracks in the foundations of that, especially if you take into account that this is all on the same version of the same compiler running on the same platform and hardware with the same STL implementation. For huge objects with expensive constructors, the constructor counting approach might still be valid. Then again, those are rarely statically initialized as a bigger bunch into a vector. For the more common scenario of smaller objects with cheap constructors, my tentative conclusion so far would be to go with A1/A2/A3 — not so much because they are quickest in the most common build scenarios on my platform, but rather because their readability is a value of its own while the performance picture is muddy at best.

And hey, if you want to run the tests above on other platforms or compilers, I would be interested in results!

Note: I did these runs for each scenario only once, thus no standard deviation is given. In general, they seemed to be rather stable, but this being wallclock measurements, one or the other might be outliers. caveat emptor.

Read more
Daniel Holbach

I already blogged a bit about the help app I have been working on lately. I wanted to go into a bit more detail now that we have reached a new milestone.

What’s the idea behind it?

In a conversation in the Community team we noticed that we have gathered a lot of knowledge in the course of using Ubuntu on phones for a long time, and that it might make sense to share tips and tricks, FAQs, suggestions and lots more with new device users in a simple way.

The idea was to share things like “here’s how to use edge swipes to do X” (maybe an animated GIF?) and “if you want to do Y, install the Z app from the store” in an organised and clever fashion. Obviously we would want this to be easily editable (Markdown) and have easy translations (Launchpad), work well on the phone (Ubuntu HTML5 UI toolkit) and work well on the web (Ubuntu Design Web guidelines) too.

What’s the state of things now?

There’s not much content yet and it doesn’t look perfect, but we have all the infrastructure set up. You can now start contributing! :-)

(screenshots of the web edition and the phone app edition)


What’s still left to be done?

  • We need HTML/CSS gurus who can help beautifying the themes.
  • We need people to share their tips and tricks and favourite bits of their Ubuntu devices experience.
  • We need hackers who can help in a few places.
  • We need translators.

What do you need to do? For translations: you can do it easily in Launchpad. For everything else:

$ bzr branch lp:ubuntu-devices-help
$ cd ubuntu-devices-help
$ less HACKING

We’ve come a long way in the last week, and with the ease of Markdown text and easy Launchpad translations, we should quickly be in a state where we can offer this in the Ubuntu software store and publish the content on the web as well.

If you want to write some content, translate, beautify or fix a few bugs, your help is going to be appreciated. Just ping myself, Nick Skaggs or David Planella on #ubuntu-app-devel.

Read more
Ben Howard

Back when we announced that the Ubuntu 14.04 LTS Cloud Images on Azure were using the Hardware Enablement Kernel (HWE), the immediate feedback was "what about 12.04?"

Well, the next Ubuntu 12.04 Cloud Images on Microsoft Azure will start using the HWE kernel. We have been working with Microsoft to validate using the 3.13 kernel on 12.04 and are pleased with the results and the stability. We spent a lot of time thinking about and testing this change, and in concert with the Ubuntu Kernel, Foundations and Cloud Image teams, we feel this change will give the best experience on Microsoft Azure.

By default, the HWE kernel is used on official images for Ubuntu 12.04 on VMware Air, Google Compute Engine, and now Microsoft Azure. 

Any 12.04 Image published to Azure with a serial later than 20140225 will default to the new HWE kernel. 

Users who want to upgrade their existing instance can simply run:
  • sudo apt-get update
  • sudo apt-get install linux-image-hwe-generic linux-cloud-tools-generic-lts-trusty
  • reboot

Read more
Victor Palau

Not long ago, Chris Wayne published a post about a scope creator tool. Last week I was visiting bq, and together with Victor Gonzalez we decided that we should have a scope for Canal bq. The folks at bq do an excellent job at creating “how to” and “first steps” videos, and they have started publishing some for the bq Aquaris E4.5 Ubuntu Edition.

Here are a few screenshots of the scope that is now available to download from the store:


The impressive thing is that it took us about 5 minutes to get a working version of the scope. Here is what we needed to do:

  • First, we followed Chris’ instructions to install the scope creator tool.
  • Once we had it set up on my laptop, we ran:
    scopecreator create youtube com.ubuntu.developer.victorbq canalbq
    cd canalbq
  • Next, we configured the scope. The configuration is done in a json file called manifest.json. This file describes the content of what you will publish later to the store. You need to care about: “title”, “description”, “version” and “maintainer”. The rest are values populated by the tool:
    scopecreator edit config
    {
        "name": "com.ubuntu.developer.victorbq.canalbq",
        "description": "Canal bq",
        "framework": "ubuntu-sdk-14.10",
        "architecture": "armhf",
        "title": "Canal bq",
        "hooks": {
            "canalbq": {
                "scope": "canalbq",
                "apparmor": "scope-security.json"
            }
        },
        "version": "0.3",
        "maintainer": "Victor Gonzalez <>"
    }
  • The following step was to set up the branding: easy! Branding is defined in an .ini file. “Display name” will be the name listed in the “manage” window once installed, and will also be the title of your scope if you don’t use a “PageHeader.Logo”. The [Appearance] section describes the colours and logos to use when branding a scope:
    scopecreator edit branding
    DisplayName=Canal bq
    Description=Youtube customized channel
    Author=Canonical Ltd.
  • The final part is to define the departments (drop-down menu) for the scope. This is also a json file, and it is unique to the youtube scope template. You can either use “playlists” or “channels” (or both) as departments. The id PLjQOV_HHlukyNGBFaSVGFVWrbj3vjtMjd corresponds to a playlist from youtube, with url=
    scopecreator edit channels
    {
        "maxResults": "20",
        "playlists": [
            {
                "id": "PLjQOV_HHlukyNGBFaSVGFVWrbj3vjtMjd",
                "reminder": "Aquaris E4,5 Ubuntu Edition"
            },
            {
                "id": "PLjQOV_HHlukzBhuG97XVYsw96F-pd9P2I",
                "reminder": "Tecnópolis"
            },
            {
                "id": "PLC46C98114CA9991F",
                "reminder": "aula bq"
            },
            {
                "id": "PLE7ACC7492AD7D844",
                "reminder": "primeros pasos"
            },
            {
                "id": "PL551D151492F07D63",
                "reminder": "accesorios"
            },
            {
                "id": "PLjQOV_HHlukyIT8Jr3aI1jtoblUTD4mn0",
                "reminder": "3d"
            }
        ]
    }

After this, the only thing left to do is replace the placeholder icon, with the bq logo:
And build, check and publish the scope:
scopecreator build

This last command generates the click file that you need to upload to the store. If you have a device (for example a Nexus 4 or an emulator), it can also install it so you can test it. If you get any issues getting the scope to run, you might want to check your json files with an online JSON validator; it will help you make sure your json doc is ship-shape!

It is super simple to create a scope for a youtube channel! So what are you going to create next?

Read more

Raspberry Pi 2 is here.

  • A 900MHz quad-core ARM Cortex-A7 CPU
  • Compatible with Raspberry Pi 1
  • $35

And now runs Snappy Ubuntu Core.


Read more
Nicholas Skaggs

Unity 8 Desktop Testing

While much of the excitement around unity8 and the next generation of ubuntu has revolved around mobile, I'd like to once again point your attention to the desktop. The unity8 desktop is starting to evolve and gain more "desktopy" features. This includes things like window management and keyboard shortcuts for unity8, and Mir enhancements such as native library support for rendering and support for X11 applications.

I hosted a session with Stephen Webb at UOS last year where we discussed the status of running unity8 on the desktop. During the session I mentioned my own personal goal of having some brave community members running unity8 as their default desktop this cycle. Now, it's still a bit early to realize that goal, but it is getting much closer! To help get there, I would encourage you to have a look at unity8 on your desktop and start running it. The development teams are ready for feedback and anxious to get it in shape on the desktop.

So how do you get it? Check out the unity8 desktop wiki page, which explains how you can run unity8 even if you are on a stable version of ubuntu like the LTS. Install it locally in an lxc container and you can log in to a unity8 desktop on your current PC. Check it out! After you finish playing, please don't forget to file bugs for anything you might find. The wiki page has you covered there as well. Enjoy unity8!

Read more
Daniel Holbach

Did you always want to write an app for Ubuntu and thought that HTML5 might be a good choice? Well picked!


We now have training materials up that will get you started in all things related to Ubuntu devices. The great thing is that you just write this app once and it’ll work on the phone, the desktop and whichever device Ubuntu is going to run on next.

The example used in the materials is an RSS reader written by my friend, Adnane Belmadiaf. If you go through the steps one by one, you’ll notice how easy it is to get stuff done. :-)

This is also a good workshop you could give in your LUG or LoCo or elsewhere. Maybe next weekend at Ubuntu Global Jam too? :-)

Read more
Dustin Kirkland

Gratuitous picture of my pets, the day after we rescued them
The PetName libraries (Shell, Python, Golang) can generate infinite combinations of human readable UUIDs

Some Background

In March 2014, when I first started looking after MAAS as a product manager, I raised a minor feature request in Bug #1287224, noting that the random, 5-character hostnames that MAAS generates are not ideal. You can't read them or pronounce them or remember them easily. I'm talking about hostnames like: sldna, xwknd, hwrdz or wkrpb. From that perspective, they're not very friendly. Certainly not very Ubuntu.

We're not alone in that respect: Amazon, along with most virtual machine and container systems, generates forgettable instance names like i-15a4417c.

Meanwhile, there is a reasonably well-known concept -- Zooko's Triangle -- which says that names should be:
  • Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknames are naming systems that are highly memorable.
  • Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
  • Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
And, of course we know what XKCD has to say on a somewhat similar matter :-)

So I proposed a few different ways of automatically generating those names, modeled mostly after Ubuntu's own beloved code naming scheme -- Adjective Animal. To get the number of combinations high enough to model any reasonable MAAS user, though, we used Adjective Noun instead of Adjective Animal.

I collected an Adjective list and a Noun list from a blog run by moms, in the interest of having a nice, soft, friendly, non-offensive source of words.

For the most part, the feature served its purpose. We now get memorable, pronounceable names. However, we get a few oddballs in there from time to time. Most are humorous. But some combinations would prove, in fact, to be inappropriate, or perhaps even offensive to some people.

Accepting that, I started thinking about other solutions.

In the meantime, I realized that Docker had recently launched something similar, their NamesGenerator, which pairs an Adjective with a Famous Scientist's Last Name (except they have explicitly blacklisted boring_wozniak, because "Steve Wozniak is not boring", of course!).

Similarly, Github itself now also "suggests" random repo names.

I liked one part of the Docker approach better -- the use of proper names, rather than random nouns.

On the other hand, their approach is hard-coded into the Docker Golang source itself, and not easily usable or portable elsewhere.

Moreover, there are only a few dozen Adjectives (57) and Names (76), yielding only about 4K combinations (4,332) -- which is not nearly enough for MAAS's purposes, where we're shooting for 16M+ with minimal collisions (ie, covering a Class A network).

Introducing the PetName Libraries

I decided to scrap the Nouns list, and instead build a Names list. I started with Last Names (like Docker), but instead focused on First Names, and built a list of about 6,000 names from public census data.  I also built a new list of nearly 38,000 Adjectives.

The combination actually works pretty well! While smelly-Susan isn't particularly charming, it's certainly not an ad hominem attack targeted at any particular Susan! That 6,000 x 38,000 gives us well over 228 million unique combinations!

Moreover, I also thought about how I could actually make it infinitely extensible... The simple rules of English allow Adjectives to modify Nouns, while Adverbs can recursively modify other Adverbs or Adjectives.   How convenient!

So I built a word list of Adverbs (13,000) as well, and added support for specifying the "number" of words in a PetName.
  1. If you want 1, you get a random Name 
  2. If you want 2, you get a random Adjective followed by a Name 
  3. If you want 3 or more, you get N-2 Adverbs, an Adjective and a Name 
Oh, and the separator is now optional, and can be any character or string, with a default of a hyphen, "-".
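The three rules plus the separator can be sketched in a few lines of Python (toy word lists here purely for illustration; the real petname packages ship the full lists of roughly 38,000 adjectives, 13,000 adverbs and 6,000 names):

```python
import random

# Toy word lists for illustration only; the real library ships much larger ones.
ADVERBS = ["barely", "sternly", "really", "oddly"]
ADJECTIVES = ["smelly", "happy", "quick", "calm"]
NAMES = ["susan", "alice", "bob", "wendy"]

def generate(words=2, separator="-"):
    """1 word: a Name; 2 words: Adjective + Name; 3+: N-2 Adverbs, an Adjective, a Name."""
    if words < 1:
        raise ValueError("need at least one word")
    parts = [random.choice(ADVERBS) for _ in range(words - 2)]
    if words >= 2:
        parts.append(random.choice(ADJECTIVES))
    parts.append(random.choice(NAMES))
    return separator.join(parts)
```

So generate(3, "_") might yield something like "really_smelly_susan", mirroring the library's own interface.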

In fact:
  • 2 words will generate over 221 million unique combinations, over 2^27 combinations
  • 3 words will generate over 2.8 trillion unique combinations, over 2^41 combinations (more than 32-bit space)
  • 4 words can generate over 2^55 combinations
  • 5 words can generate over 2^68 combinations (more than 64-bit space)
Interestingly, you need 10 words to cover 128-bit space!
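A quick back-of-the-envelope check of those numbers, using the approximate list sizes given above:

```python
import math

# Approximate word-list sizes from the text above.
ADJECTIVES, ADVERBS, NAMES = 38_000, 13_000, 6_000

def combinations(words):
    """Unique petnames for a word count: N-2 adverbs, an adjective, a name."""
    if words == 1:
        return NAMES
    return NAMES * ADJECTIVES * ADVERBS ** (words - 2)

def bits(words):
    """Entropy of a petname of the given word count, in bits."""
    return math.log2(combinations(words))
```

bits(2) comes out around 27.8 and bits(3) around 41.4, and the smallest word count whose combination space exceeds 128 bits is indeed 10, consistent with the figures above.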





So once the algorithm was spec'd out, I built and packaged a simple shell utility and text word lists, called petname, which are published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:petname/ppa
$ sudo apt-get update

$ sudo apt-get install petname
$ petname
$ petname -w 3
$ petname -s ":" -w 5


That's only really useful from the command line, though. In MAAS, we'd want this in a native Python library. So it was really easy to create python-petname, source now published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:python-petname/ppa
$ sudo apt-get update

$ sudo apt-get install python-petname
$ python-petname
$ python-petname -w 4
$ python-petname -s "" -w 2

Using it in your own Python code looks as simple as this:

$ python
>>> import petname
>>> foo = petname.Generate(3, "_")
>>> print(foo)


In the way that NamesGenerator is useful to Docker, I thought a Golang library might be useful for us in LXD (and perhaps even usable by Docker or others too), so I created:
Of course you can use "go get" to fetch the Golang package:

$ export GOPATH=$HOME/go
$ mkdir -p $GOPATH
$ export PATH=$PATH:$GOPATH/bin
$ go get

And also, the packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:golang-petname/ppa
$ sudo apt-get update

$ sudo apt-get install golang-petname
$ golang-petname
$ golang-petname -words=1
$ golang-petname -separator="|" -words=10

Using it in your own Golang code looks as simple as this:

package main

import (
    "fmt"
    // (the petname import path was elided in the original post)
)

func main() {
    fmt.Println(petname.Generate(2, ""))
}

Gratuitous picture of my pets, 7 years later.

Read more

After a few weeks of being coffee-deprived, I decided to disassemble my espresso machine and see if I could figure out why it leaked water while on, and didn't have enough pressure to produce drinkable coffee.
I live a bit on the edge of where other people do, so my water supply is from my own pump, 40 meters into the ground. It's as hard as water gets. That was my main suspicion. I read a bit about it on the interwebz and learned about descaling, which I'd never heard about. I tried some of the home-made potions but nothing seemed to work.
Long story short, I'm enjoying a perfect espresso as I write this.

I wanted to share a bit with the internet people about what was hard to solve, and couldn't find any instructions on. All I really did was disassemble the whole thing completely, part by part, clean them, and make sure to put it back together tightening everything that seemed to need pressure.
I don't have the time and energy to put together a step-by-step walk-through, so here are the 2 tips I can give you:

1) Remove ALL the screws. That'll get you 95% of the way there. You'll need a Phillips head, a Torx head, a flat head and some small-ish pliers.
2) The knob that releases the steam looks unremovable and blocks you from getting the top lid off. It doesn't screw off, you just need to pull upwards with some strength and care. It comes off cleanly and will go back on easily. Here's a picture to prove it:

DeLongi eco310.r

Hope this helps somebody!

Read more
Daniel Holbach

In a recent conversation we thought it’d be a good idea to share tips and tricks, suggestions and ideas with users of Ubuntu devices. Because it’d help to have it available immediately on the phone, an app could be a good idea.

I had a quick look at it and, after some discussion with Rouven in my office space, it looked like hyde could fit the bill nicely. To edit the content, you just write a bit of Markdown, generate the HTML (nice and readable templates – great!) and you're done.

Unfortunately I’m not a CSS or HTML wizard, so if you could help out making it more Ubuntu-y, that’d be great! Also: if you’re interested in adding content, that’d be great.

I pushed the code for it up on Launchpad; the first bugs are open there already too. Let’s make it look pretty and let’s share our knowledge with new Ubuntu device users. :-)

Oh, and let’s see that we translate the content as well! :-)

Read more

Most of this has been discussed on mailing lists, blog entries, etc, while developing Ubuntu Touch, but I wanted to write up something that ties together these conversations for Snappy. This will provide background for the conversations surrounding hardware access for snaps that will be happening soon on the snappy-devel mailing list.


Ubuntu Touch has several goals that all apply to Snappy:

  • we want system-image upgrades
  • we want to replace the distro archive model with an app store model for Snappy systems
  • we want developers to be able to get their apps to users quickly
  • we want a dependable application lifecycle
  • we want the system to be easy to understand and to develop on
  • we want the system to be secure
  • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Snappy adds a few things to the above (that pertain to this conversation):

  • we want the system to be bulletproof (transactional updates with rollbacks)
  • we want the system to be easy to use for system builders
  • we want the system to be easy to use and understand for admins

Let’s look at what all these mean more closely.

system-image upgrades

  • we want system-image upgrades
  • we want the system to be bulletproof (transactional updates with rollbacks)

We want system-image upgrades so updates are fast, reliable and so people (users, admins, snappy developers, system builders, etc) always know what they have and can depend on it being there. In addition, if an upgrade goes bad, we want a mechanism to be able to rollback the system to a known good state. In order to achieve this, apps need to work within the system and live in their own area and not modify the system in unpredictable ways. The Snappy FHS is designed for this and the security policy enforces that apps follow it. This protects us from malware, sure, but at least as importantly, it protects us from programming errors and well-intentioned clever people who might accidentally break the Snappy promise.

app store

  • we want to replace the distro archive model with an app store model
  • we want developers to be able to get their apps to users quickly

Ubuntu is a fantastic distribution and we have a wonderfully rich archive of software that is refreshed on a cadence. However, the traditional distro model has a number of drawbacks, and arguably the most important one is that software developers have an extremely high barrier to overcome to get their software into users’ hands on their own time-frame. The app store model greatly helps developers and users desiring new software because it gives developers the freedom and ability to get their software out there quickly and easily, which is why Ubuntu Touch is doing this now.

In order to enable developers in the Ubuntu app store, we’ve developed a system where a developer can upload software and have it available to users in seconds with no human review, intervention or snags. We also want users to be able to trust what’s in Ubuntu’s store, so we’ve created store policies that understand the Ubuntu snappy system such that apps do not require any manual review so long as the developer follows the rules. However, the Ubuntu Core system itself is completely flexible -- people can install apps that are tightly confined, loosely confined, unconfined, whatever (more on this below). In this manner, people can develop snaps for their own needs and distribute them however they want.

It is the Ubuntu store policy that dictates what is in the store. The existing store policy is in place to improve the situation and is based on our experiences both with the traditional distro model and with attempts to build app store-like experiences on top of it (eg, MyApps).

application lifecycle

  • dependable application lifecycle

This has not been discussed as much with Snappy for Ubuntu Core, but Touch needs to have a good application lifecycle model such that apps cannot run unconstrained and unpredictably in the background. In other words, we want to avoid problems with battery drain and slow systems on Touch. I think we’ve done a good job so far on Touch, and this story is continuing to evolve.

(I mention application lifecycle in this conversation for completeness and because application lifecycle and security work together via the app’s application id)

security

  • we want the system to be secure
  • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Everyone wants a system that they trust and that is secure, and security is one of the core tenets of Snappy systems. For Ubuntu Touch, we’ve created a system that is secure, that is easy to use and understand by users, and that still honors relevant, meaningful Linux traditions. For Snappy, we’ll be adding several additional security features (eg, seccomp, controlled abstract socket communication, firewalling, etc).

Our security story and app store policies give us something that is between Apple and Google. We have a strong security story with a number of similarities to Apple, but a lightweight store policy akin to Google Play. In addition to that, our trust model is that apps not needing manual review are untrusted by the OS and have limited access to the system. On Touch we use tasteful, contextual prompting so the user may trust apps to do things beyond what the OS allows on its own (a simple example: an app needs access to location, so the user is prompted at the time of use, answers, and the decision is remembered for next time).

Snappy for Ubuntu Core is different not only because the primary UI is a CLI, but also because we’ve defined the Snappy for Ubuntu Core user (the one able to run the ‘snappy’ command) as an admin, a system builder, a developer and/or someone otherwise knowledgeable enough to make a more informed trust decision. (This will come up again later, below)

easy to use

  • we want the system to be easy to understand and to develop on
  • we want the system to be easy to use for system builders
  • we want the system to be easy to use and understand for admins

We want a system that is easy to use and understand. It is key that developers are able to develop on it, system builders are able to get their work done, and admins can install and use the apps from the store.

For Ubuntu Touch, we’ve made a system that is easy to understand and to develop on with a simple declarative permissions model. We’ll refine that for Snappy and make it easy to develop on too. Remember, the security policy is there not just so we can be ‘super secure’ but because it is what gives us the assurances needed for system upgrades, a safe app store and an altogether bulletproof system.

As mentioned, the system we have designed is super flexible. Specifically, the underlying system supports:

  1. apps working wholly within the security policy (aka, ‘common’ security policy groups and templates)
  2. apps declaring specific exceptions to the security policy
  3. apps declaring to use restricted security policy
  4. apps declaring to run (effectively) unconfined
  5. apps shipping hand-crafted policy (that can be strict or lenient)

(Keep in mind the Ubuntu App Store policy will auto-accept apps falling under ‘1’ and trigger manual review for the others)
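As a sketch of how a developer expresses these choices, here is a hypothetical snap declaration using the early Snappy package.yaml security fields (the field names follow the 15.04-era packaging format and may change). Options 1, 2 and 5 from the list map roughly to the commented alternatives below; only the first would be auto-accepted by the store.

```yaml
name: hello-snap
version: 0.1
services:
  - name: hello
    start: bin/hello
    # 1. work wholly within common policy groups via 'caps'
    caps:
      - networking
    # 2. or declare specific exceptions to the default policy
    #    (would trigger manual store review):
    # security-override: meta/hello-security.json
    # 5. or ship hand-crafted policy, strict or lenient
    #    (would also trigger manual review):
    # security-policy:
    #   apparmor: meta/hello.apparmor
    #   seccomp: meta/hello.seccomp
```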

The above all works today (though it isn’t always friendly– we’re working on that) and the developer is in control. As such, Snappy developers have a plethora of options and can create snaps with security policy for their needs. When the developer wants to ship the app and make it available to all Snappy users via the Ubuntu App Store, then the developer may choose to work within the system to have automated reviews or choose not to and manage the process via manual reviews/commercial relationship with Canonical.

Moving forward

The above works really well for Ubuntu Touch, but today there is too much friction with regard to hardware access. We will make this experience better without compromising on any of our goals. How do we put this all together, today, so people can get stuff done with snappy without sacrificing our goals, making it harder on ourselves in the future or otherwise opening Pandora’s box? We don’t want to relax our security policy, because then we can’t make the bulletproof assurances we are striving for and it would be hard to tighten the security later. We could instead add some temporary security policy that adds only certain accesses (eg, serial devices) but, while useful, this is too inflexible. We also don’t want to have apps declare the accesses themselves to automatically add the necessary security policy, because this (potentially) privileged access is then hidden from the Snappy for Ubuntu Core user.

The answer is simple when we remember that the Snappy for Ubuntu Core user (ie, the one who is able to run the snappy command) is knowledgeable enough to make the trust decision for giving an app access to hardware. In other words, let the admin/developer/system builder be in control.

immediate term

The first thing we are going to do is unblock people and adjust snappy to give the snappy core user the ability to add specific device access to snap-specific security policy. In essence you’ll install a snap, then run a command to give the snap access to a particular device, then you’re done. This simple feature will unblock developers and snappy users immediately while still supporting our trust-model and goals fully. Plus it will be worth implementing since we will likely always want to support this for maximum flexibility and portability (since people can use traditional Linux APIs).
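Concretely, the flow could look something like the following. The subcommand name here is purely hypothetical, chosen only to illustrate the idea; the actual UX is still to be settled.

```
  # install a snap as usual
  $ sudo snappy install serial-logger.example

  # then explicitly grant it access to one specific device
  # ('hw-assign' is an illustrative, not final, command name)
  $ sudo snappy hw-assign serial-logger.example /dev/ttyS0
```

Because the grant is an explicit action by the Snappy for Ubuntu Core user, the privileged access is visible rather than hidden inside the snap’s own declarations.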

The user experience for this will be discussed and refined on the mailing list in the coming days.

short term

After that, we’ll build on this and explore ways to make the developer and user experience better through integration with the OEM part and ways of interacting with the underlying system so that the user doesn’t have to necessarily know the device name to add, but can instead be given smart choices (this can have tie-ins to the web interface for snappy too). We’ll want to be thinking about hotpluggable devices as well.

Since this all builds on the concept of the immediate term solution, it also supports our trust-model and goals fully and is relatively easy to implement.


Once we have the above in place, we should have a reasonable experience for snaps needing traditional device access. This will give us time to evaluate how people are accessing hardware and see if we can make things even better by using frameworks and/or a hardware abstraction layer. In this manner, snaps can program to an easy to use API and the system can mediate access to the underlying hardware via that API.

Filed under: canonical, security, ubuntu, ubuntu-server, uncategorized

Read more
Daniel Holbach

What do Kinshasa, Omsk, Paris, Mexico City, Eugene, Denver, Tempe, Catonsville, Fairfax, Dania Beach, San Francisco and various places on the internet have in common?

Right, they’re all participating in the Ubuntu Global Jam on the weekend of 6-8 February! See the full list of teams that are part of the event here. (Please add yours if you haven’t already.)

What’s great about the event is that there are just two basic aims:

  1. do something with Ubuntu
  2. get together and have fun!

What I also like a lot is that there’s always something new to do. Here are just 3 quick examples of that:

App Development Schools

We have put quite a bit of work into putting training materials together; now you can take them out to your team and start writing Ubuntu apps easily.


Snappy

As one tech news article said, “Robots embrace Ubuntu as it invades the internet of things”. Ubuntu’s newest foray, making it possible to bring a stable and secure OS to small devices where you can focus on apps and functionality, is attracting a number of folks on the mailing lists (snappy-devel, snappy-app-devel) and elsewhere. Check out the mailing lists and the snappy site to find out more and have a play with it.

Unity8 on Desktop

Convergence is happening and what’s working great on the phone is making its way onto the desktop. You can help make this happen by installing and testing it. Your feedback will be much appreciated.




Read more
Ben Howard

One of the perennial problems in the Cloud is knowing what is the most current image and where to find it. Some Clouds provide a nice GUI console, an API, or some combination. But what has been missing is a "dashboard" showing Ubuntu across multiple Clouds.

In that light, I am pleased to announce that we have a new beta Cloud Image Finder. This page shows where official Ubuntu images are available. As with all betas, we have some kinks to work out, like gathering up links for our Cloud Partners (so clicking an Image ID launches an image). I envision that in the future this locator page will be the default landing page for our Cloud Image Page.

The need for this page became painfully apparent yesterday as I was working through the fallout of the Ghost vulnerability (aka CVE-2015-0235). The Cloud Image team had spent a good amount of time pushing our images to AWS, Azure, GCE, Joyent and then notifying our partners like Brightbox, DreamCompute, CloudSigma and VMware of new builds. I realized that we needed a single place for our users to just look and see where the builds are available. And so I hacked up the EC2 Locator page to display other clouds.

Please note: this new page only shows stable releases. We push a lot of images and did not want to confuse things by showing betas, alphas, dailies or the development builds. Rather, this page will only show images that have been put through the complete QA process and are ready for production workloads.

This new locator page is backed by Simple Streams, which is our machine-formatted data service. Simple Streams provides a way of locating images in a uniform way across clouds. Essentially our new Locator Page is just a viewer of the Simple Streams data.
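To get a feel for what the page consumes, here is a minimal sketch of inspecting a simplestreams-style index by hand. The JSON below is a trimmed, illustrative fragment (the real data lives under cloud-images.ubuntu.com), so only the shape matters, not the contents:

```shell
# Write a trimmed, illustrative simplestreams-style index to disk.
cat > /tmp/ss-index.json <<'EOF'
{
  "format": "index:1.0",
  "index": {
    "com.ubuntu.cloud:released:aws": {
      "datatype": "image-ids",
      "path": "streams/v1/com.ubuntu.cloud:released:aws.json"
    }
  }
}
EOF

# List the product streams the index points at.
python3 -c "import json; d = json.load(open('/tmp/ss-index.json')); print('\n'.join(sorted(d['index'])))"
# prints: com.ubuntu.cloud:released:aws
```

A real index carries one entry per cloud/release combination, which is exactly what a viewer page needs to render a dashboard.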

Hopefully our users will find this new page useful. Feedback is always welcome. Please feel free to drop me a line (utlemming @ ubuntu dot com). 

Read more
Ben Howard

A few years ago when our fine friends on the kernel team introduced the idea of the "hardware enablement" (HWE) kernel, those of us in the Cloud world looked at it as a curiosity. We thought that, by and large, the HWE kernel would not be needed or wanted for virtual Cloud instances.

And we were wrong.

So wrong in fact, that the HWE kernel has found its way into the Vagrant Cloud Images, VMware's vCHS, and Google's Compute engine as the default kernel for the Certified Images. The main reason for these requests is that virtual hardware moves at a fairly quick pace. Unlike traditional hardware, Virtual Hardware can be fixed and patched at the speed that software can be deployed.

The feedback with regard to Azure has been the same: users and Microsoft have asked for the HWE kernel consistently. Microsoft has validated that the HWE kernel (3.16) running Ubuntu 14.04 on Windows Azure passes their validation testing. In our testing, we have validated that the 3.16 kernel works quite well in Azure.

For Azure users, the 3.16 HWE kernel brings SMB 2.1 copy file support and updated LIS drivers.

Therefore, starting with the latest Windows Azure image [1], all the Ubuntu 14.04 images will track the latest hardware enablement kernel. That means that all the goodness in Ubuntu 14.10's kernel will be the default for 14.04 users launching our official images on Windows Azure.

If you want to install the lts-utopic HWE kernel on your existing instance(s), simply run:

  $ sudo apt-get update
  $ sudo apt-get install linux-image-virtual-lts-utopic linux-lts-utopic-cloud-tools-common walinuxagent
  $ sudo reboot

[1] b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB

Read more
Nicholas Skaggs

It's time for a testing jam!

Ubuntu Global Jam, Vivid edition is a few short weeks away. It's time to make your event happen. I can help! Here's my officially unofficial guide to global jam success.


  1. Get your jam pack! Get the request in right away so it gets to you on time. 
  2. Pick a cool location to jam
  3. Tell everyone! (be sure to mention free swag, who can resist!?)
But wait, what are you going to do while jamming? I've got that covered too! Hold a testing jam! All you need to know can be found on the ubuntu global jam wiki. The wiki even has more information for you as a jam host in case you have questions or just like details.

Ohh and just in case you don't like testing (seems crazy, I know), there are other jam ideas available to you. The important thing is you get together with other ubuntu aficionados and celebrate ubuntu! 

P.S. Don't forget to share pictures afterwards. No one will know you had the coolest jam in the world unless you tell them :-)

P.P.S. If I'm invited, bring cupcakes! Yum!

Read more

WhatsApp on Ubuntu ?

Yes, you can install WhatsApp on Ubuntu Desktop!

It currently works with Chrome browser only (no Chromium support yet).

In Chrome go to :

In WhatsApp on your phone, go to Menu (top left) – WhatsApp Web and add your web client.

That’s all; now all your WhatsApp messages will also show up on your Ubuntu Desktop.

Have fun!


Read more
Dustin Kirkland

With the recent introduction of Snappy Ubuntu, there are now several different ways to extend and update (apt-get vs. snappy) multiple flavors of Ubuntu (Core, Desktop, and Server).

We've put together this matrix with a few examples of where we think Traditional Ubuntu (apt-get) and Transactional Ubuntu (snappy) might make sense in your environment.  Note that this is, of course, not a comprehensive list.

                        Ubuntu Core                      Ubuntu Desktop             Ubuntu Server
Traditional apt-get     Minimal Docker and LXC images    Desktop, Laptop,           Bare metal, MAAS, OpenStack,
                                                         Personal Workstations      General Purpose Cloud Images
Transactional snappy    Minimal IoT Devices and          Touch, Phones, Tablets     Comfy, Human Developer
                        Micro-Services Architecture                                 Interaction (over SSH) in an
                        Cloud Images                                                atomically updated environment
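As a rough sketch of the day-to-day difference between the two rows of the matrix (the snappy subcommands shown reflect the early tooling and may change):

```
  # Traditional Ubuntu: per-package updates via apt
  $ sudo apt-get update && sudo apt-get dist-upgrade

  # Transactional Ubuntu: whole-image updates via snappy,
  # with rollback to the previous known-good image if needed
  $ sudo snappy update
  $ sudo snappy rollback ubuntu-core
```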

I've presupposed a few of the questions you might ask, while you're digesting this new landscape...

Q: I'm looking for the smallest possible Ubuntu image that still supports apt-get...
A: You want our Traditional Ubuntu Core. This is often useful in building Docker and LXC containers.

Q: I'm building the next wearable IoT device/drone/robot, and perhaps deploying a fleet of atomically updated micro-services to the cloud...
A: You want Snappy Ubuntu Core.

Q: I want to install the best damn Linux on my laptop, desktop, or personal workstation, with industry best security practices, 30K+ freely available open source packages, and extensive support for hardware devices and proprietary add-ons...
A: You want the same Ubuntu Desktop that we've been shipping for 10+ years, on time, every time ;-)

Q: I want that same converged, tasteful Ubuntu experience on my personal smart devices, like my Phones and Tablets...
A: You want Ubuntu Touch, which is a very graphical human interface focused expression of Snappy Ubuntu.

Q: I'm deploying Linux onto bare metal servers at scale in the data center, perhaps building IaaS clouds using OpenStack or PaaS cloud using CloudFoundry? And I'm launching general purpose Linux server instances in public clouds (like AWS, Azure, or GCE) and private clouds...
A: You want the traditional apt-get Ubuntu Server.

Q: I'm developing and debugging applications, services, or frameworks for Snappy Ubuntu devices or cloud instances?
A: You want Comfy Ubuntu Server, which is a command line human interface extension of Snappy Ubuntu, with a number of conveniences and amenities (ssh, byobu, manpages, editors, etc.) that won't be typically included in the minimal Snappy Ubuntu Core build. [*Note that the Comfy images will be available very soon]


Read more

ROS what?

Robot Operating System (ROS) is a set of libraries, services, protocols, conventions, and tools to write robot software. It’s about seven years old now, is free software, and has a growing community bringing Linux into the interesting field of robotics. They primarily target/support running on Ubuntu (the current Indigo ROS release runs on 14.04 LTS on x86), but they also have some other experimental platforms like Ubuntu ARM and OS X.

ROS, meet Snappy

It appears that their use cases match Ubuntu Snappy’s vision really well: ROS apps usually target single-function devices which require absolutely robust deployments and upgrades, and while they of course require a solid operating system core they mostly implement their own build system and libraries, so they don’t make too many assumptions about the underlying OS layer.

So I went ahead and created a snapp package for the Turtle ROS tutorial, which automates all the setup and building. As this is a relatively complex and big project, it helped to uncover quite a number of bugs, the most important of which have now been fixed. So while building the snapp still needs quite a number of workarounds, installing and running it is now reasonably clean.

Enough talk, how can I get it?

If you are interested in ROS, you can look at the bzr branch lp:~snappy-dev/snappy-hub/ros-tutorials. This contains documentation and a script which builds the snapp package in a clean Ubuntu Vivid environment. I recommend a schroot for this so that you can simply do e.g.

  $ schroot -c vivid ./

This will produce a /tmp/ros/ros-tutorial_0.2_<arch>.snap package. You can download a built amd64 snapp if you don’t want to build it yourself.

Installing and running

Then you can install this on your Snappy QEMU image or other installation and run the tutorial (again, see the branch documentation for details):

  yourhost$ ssh -o UserKnownHostsFile=/dev/null -p 8022 -R 6010:/tmp/.X11-unix/X0 ubuntu@localhost
  snappy$ scp <yourhostuser>@
  snappy$ sudo snappy install ros-tutorial_0.2_amd64.snap

You need to adjust <yourhostuser> accordingly; if you didn’t build yourself but downloaded the image, you might also need to adjust the host path where you put the .snap.

Finally, run it:

  snappy$ ros-tutorial.rossnap roscore &
  snappy$ DISPLAY=localhost:10.0 ros-tutorial.rossnap rosrun turtlesim turtlesim_node &
  snappy$ ros-tutorial.rossnap rosrun turtlesim turtle_teleop_key

You might prefer ssh’ing in three times and running the commands in separate shells. Only turtlesim_node needs $DISPLAY (and is quite an exception — a usual robotics app of course wouldn’t!). Also, note that this requires ssh from at least Ubuntu 14.10 – if you are on 14.04 LTS, see


Read more
Victor Palau

Just a quick note to tell you that I have published a new scope called uBrick that brings the awesomeness of Lego, as a catalogue powered by Brickset, directly to your Ubuntu phone home screen.

I wrote the scope in Go because I find it easier to work with for a quick scope (it took about 8 hours, with interruptions, over 2 days to write). The scope is now available in the store; just search for uBrick.

Here are some pics:

[screenshots: lego1, lego2, lego3, lego4]

Also I have to congratulate the folks at Brickset for a very nice API, even if it is using SOAP :)

Read more