Canonical Voices

Posts tagged with 'canonical'

Hardik Dalwadi

In this edition I will demonstrate how to assemble your own Ubuntu Orange Matchbox: an Ubuntu-branded Pibow for the Raspberry Pi 2 with a PiGlow, as shown below.

Ubuntu Orange Matchbox: Ubuntu Branded Pibow for Raspberry Pi 2 & PiGlow.

By the end of this post you will know how to assemble your own Ubuntu Orange Matchbox and flash it with the latest Snappy Ubuntu Core, which now supports the Raspberry Pi 2’s GPIO & I2C. I will also show how to manage Snap packages from the Ubuntu Phone browser and the Snappy Scope (beta), and share tips & tricks for enabling WiFi on Snappy Ubuntu Core. Finally, I will demonstrate Snappy Ubuntu Core + Raspberry Pi 2 + PiGlow in action: the PiGlow blinks its LEDs according to the CPU utilization of Snappy Ubuntu Core.

We have divided this post into four parts:

  • Make In India: Ubuntu Orange Matchbox: Ubuntu Branded Pibow for Raspberry Pi 2 & PiGlow
  • Install Snappy Ubuntu Core @ Ubuntu Orange Matchbox: Ubuntu Branded Pibow for Raspberry Pi 2 & PiGlow
  • Tips & Tricks to enable WiFi @ Snappy Ubuntu Core
  • CPU Resource Utilization Demonstration @ Snappy Ubuntu Core

Make In India: Ubuntu Orange Matchbox: Ubuntu Branded Pibow for Raspberry Pi 2 & PiGlow:

Here is the list of ingredients I used to cook my own Ubuntu Orange Matchbox. I have also shared links to where you can procure them in India.

  1. Raspberry Pi 2 with USB WiFi Dongle  & 5V, 2A USB Power Adapter
  2. Tangerine Pibow for Raspberry Pi 2
  3. PiGlow, a small add-on board for the Raspberry Pi that provides 18 individually controllable LEDs.
  4. Mixed Ubuntu Stickers







Assemble the Raspberry Pi 2 with the Tangerine Pibow. Remove the white laminated plastic sheet from the Pibow's closing cover; this gives you a transparent Pibow for the Raspberry Pi 2.


Now it’s time for the Ubuntu branding. Just stick the transparent Ubuntu logo sticker on the transparent Pibow closing cover. You can also install the PiGlow before closing the cover. If you know a laser engraving service provider for metal / plastic, consult them and get your Ubuntu branding engraved on the transparent Pibow closing cover. But my trick will give you an Ubuntu Orange logo 😉


Final Version:


Install Snappy Ubuntu Core @ Ubuntu Orange Matchbox: Ubuntu Branded Pibow for Raspberry Pi 2 & PiGlow:

Recently, Snappy Ubuntu Core received a fresh update for the Raspberry Pi 2 with the latest fixes and upgrades. Please follow the page Getting Started with Snappy Ubuntu Core for Raspberry Pi 2; it has the latest information and procedures for playing with Snappy Ubuntu Core on the Raspberry Pi 2. You can even build your own image.

Here is a short summary. Get a 4 GB Class 10 microSDHC card. Format it and flash it with the latest Snappy Ubuntu Core image.

# Note: replace /dev/sdX with the device name of your SD card (e.g. /dev/mmcblk0, /dev/sdg1 ...)
xzcat ubuntu-15.04-snappy-armhf-rpi2.img.xz | sudo dd of=/dev/sdX bs=32M

Now, boot your Ubuntu Orange Matchbox into Snappy Ubuntu Core. The default username / password is ubuntu / ubuntu. You can manage snaps from the Snappy Ubuntu Core web store through webdm, by pointing your browser to the Ubuntu Orange Matchbox's IP address on port 4200, or to webdm.local. Make sure the device is connected to your LAN.

Tips & Tricks to enable WiFi @ Snappy Ubuntu Core:

Snappy Ubuntu Core has built-in support for the Ethernet port, so connecting your Raspberry Pi 2 to the LAN will get it an IP address from your router. Since this is a headless device, you can always SSH in from your host machine if you do not want to connect it to a separate monitor.

It does not have built-in support for WiFi. Here are the steps to enable WiFi and add support for a USB WiFi adapter with the Ralink RA5370 chipset. I got this adapter with my Raspberry Pi 2 kit; please check the chipset of your USB WiFi adapter, as you may need different firmware.

Please follow the solution at Enable WiFi with Snappy Ubuntu Core. You may need to fetch the package below instead of wpasupplicant_0.7.3-6ubuntu2.3_armhf.deb:

wget -c

I would also recommend installing nano, to make it easier to edit the WiFi SSID configuration later. Get it from here.


CPU Resource Utilization Demonstration @ Snappy Ubuntu Core:

Now it’s time to demonstrate CPU resource utilization through the PiGlow, a small add-on board for the Raspberry Pi that provides 18 individually controllable LEDs. We will feed CPU utilization to it in such a way that it glows more when CPU utilization is higher. As I said earlier, GPIO & I2C support recently landed in Snappy Ubuntu Core, which makes this possible, and there is a snap package for exactly this, called PiGlow Top.
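The mapping itself is simple. Here is a tiny sketch of the idea (psutil is assumed for CPU sampling, and the sketch only prints how many LEDs would be lit – it is not the actual PiGlow Top code):

# Illustrative only: map CPU utilization to a number of lit LEDs.
# Assumes the psutil package; the real PiGlow Top snap drives the LEDs over I2C.
import psutil

NUM_LEDS = 18                                  # the PiGlow has 18 LEDs

for _ in range(10):                            # sample for ten seconds
    cpu = psutil.cpu_percent(interval=1)       # 0.0 - 100.0
    lit = round(cpu / 100 * NUM_LEDS)          # more load -> more LEDs
    print(f"CPU {cpu:5.1f}% -> light {lit}/{NUM_LEDS} LEDs")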

Access your Snappy Ubuntu Core device from WebDM or from the CLI over SSH and install the PiGlow Top snap from the Snappy Ubuntu Core web store. After installation, we need to grant it access to the I2C hardware; do this over SSH:

sudo snappy hw-assign piglowtop.kyrofa /dev/i2c-1

Now it’s time for the demonstration: grab your Ubuntu Phone, access the Snappy Ubuntu Core webdm from the Ubuntu Phone browser, and do some activity that increases CPU utilization, for example installing or removing a snap from the Snappy Ubuntu Core web store. I have prepared a small video demonstration for your reference.

Read more
Dustin Kirkland

I delivered a presentation and an exciting live demo in San Francisco this week at the Container Summit (organized by Joyent).

It was professionally recorded by the A/V crew at the conference.  The live demo begins at the 25:21 mark.

You can also find the slide deck embedded below and download the PDFs from here.


Read more
Michael Hall

Photo from Aaron Honeycutt

Nicholas Skaggs presenting at UbuCon@FOSSETCON 2014

Thanks to the generous organizers of FOSSETCON who have given us a room at their venue, we will be having another UbuCon in Orlando this fall!

FOSSETCON 2015 will be held at the Hilton Orlando Lake Buena Vista‎, from November 19th through the 21st. This year they’ve been able to get Richard Stallman to attend and give a keynote, so it’s certainly an event worth attending for anybody who’s interested in free and open source software.

UbuCon itself will be held all day on the 19th in its own dedicated room at the venue. We are currently recruiting presenters to talk to attendees about some aspect of Ubuntu, from the cloud to mobile, community involvement and of course the desktop. If you have a fun or interesting topic that you want to share, please send your proposal to me at

Read more
Daniel Holbach

Tomorrow is a special anniversary: on 2005-09-05 I joined Canonical – that’s right: it’s going to be a decade.

A lot of what we as the Ubuntu community experienced and went through I wrote up some time ago, and it’s well-documented in blog posts, articles, LoCo event reports and pictures from Ubuntu Allhands events, so don’t expect any of that here.

For me personally it’s been a ride I could never have expected like this. A decade in a single company doesn’t seem to happen very often these days and I would also never have dreamed what we are delivering to the world today. I’m happy and proud to have been part of this all.

I still remember the days when I joined. I had just finished my studies, and working next to people who could each easily be described as a wunderkind gave me quite a healthy case of impostor syndrome. It’s easy to underestimate how much I learned here – not just technically or in terms of other abilities, but also as a person. I got to work on things I never imagined I could do and am happy I was involved in so many different projects.

One thing made this whole ride even more special: the people. I made lots of friends along the way – that’s one of the primary reasons I still feel like I work at a very very special place.

Big hugs everyone and thanks for accompanying me this far! :-)

Read more

Now that we've open sourced the code for Ubuntu One filesync, I thought I'd highlight some of the interesting challenges we had while building and scaling the service to several million users.

The teams that built the service were roughly split into two: the foundations team, who was responsible for the lowest levels of the service (storage and retrieval of files, data model, client and server protocol for syncing) and the web team, focused on user-visible services (website to manage files, photos, music streaming, contacts and Android/iOS equivalent clients).
I joined the web team early on and stayed with it until we shut the service down, so that's where most of my stories will focus.

Today I'm going to focus on the challenge we faced when launching the Photos and Music streaming services. Given that by the time we launched them we had a few years of experience serving files at scale, our challenge turned out to be presenting and manipulating each user's metadata quickly, and showing that data in appealing ways (browsing music by artist or genre, and searching, for example). Photos was a similar story: people tended to have many thousands of photos and songs, and we needed to extract metadata, parse it, store it, and then present it back to users quickly in different ways. Easy, right? It is, until a certain scale  :)
Our architecture for storing metadata at the time was about 8 PostgreSQL master databases across which we sharded metadata (essentially your metadata lived on a different DB server depending on your user id), plus at least one read-only slave per shard. These were really beefy servers with a truckload of CPUs, more than 128GB of RAM and very fast disks (when reading this, remember this was 2009-2013; hardware specs seem tiny as time goes by!). However, no matter how big these DB servers got, given how busy they were and how much metadata was stored (for years we didn't delete any metadata, so for every change to every file we duplicated the metadata), after a certain point we couldn't get a simple listing of a user's photos or songs (essentially, some of their files filtered by mimetype) in a reasonable time-frame (less than 5 seconds).

As the service grew we added caches, indexes, optimized queries and code paths, but we quickly hit a performance wall that left us no choice but a much-feared major architectural change. I say much feared because major architectural changes come with a lot of risk to running services that have low tolerance for outages or data loss: whenever you change something that's already running in a significant way, you're basically throwing out most of your previous optimizations. On top of that, as users we expect things to be fast; we take it for granted. A 5-person team spending 6 months to make things as fast as you expect isn't really something you can brag about in the middle of a race with many other companies to capture a growing market.
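To make the sharding scheme concrete, here is a minimal sketch (the shard count of 8 matches the masters mentioned above, but the function and connection-string format are invented for illustration and are not the Ubuntu One code):

# Illustrative sharding by user id: all of a user's metadata lives on one shard.
NUM_SHARDS = 8  # assumption: one logical shard per master database

def shard_for_user(user_id: int) -> int:
    return user_id % NUM_SHARDS

def dsn_for_user(user_id: int) -> str:
    # Hypothetical connection-string naming scheme.
    return f"postgresql://metadata-shard-{shard_for_user(user_id)}/u1"

print(dsn_for_user(1234567))   # -> postgresql://metadata-shard-7/u1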
In the time since we had started the project, NoSQL had taken off and matured enough to be a viable alternative to SQL, and it seemed to fit many of our use cases much better (webscale!). After some research and prototyping, we decided to generate pre-computed views of each user's data in a NoSQL DB (Cassandra), and to do that by extending our existing architecture instead of revamping it completely. Given that our code was built into proper layers of responsibility, we hooked into the lowest layer of our code – database transactions – an async process that would send messages to a queue whenever new data was written or modified. This meant essentially duplicating the metadata we stored for each user, but trading storage for computation is usually a good trade-off to make, both in cost and performance. So now we had a firehose queue of every change that went on in the system, and we could build a separate piece of infrastructure whose only focus would be to provide per-user metadata *fast* for any type of file, so we could build interesting and flexible user interfaces for people to consume their own content. The stated internal goals were: 1) fast responses (under 1 second), 2) less than 10 seconds between user action and UI update, and 3) complete isolation from the existing infrastructure.
Here's a rough diagram of how the information flowed through the system:

U1 Diagram

It's a little bit scary when you look at it like that, but in essence it was pretty simple: write each relevant change that happened in the system to a temporary table in PG in the same transaction in which it's written to the permanent table. That way you get, for free, a transactional guarantee that you won't lose any data on that layer, and you use PG's built-in cache, which keeps recently added records cheaply accessible.
Then we built a bunch of workers that looked through those rows, parsed them, sent them to a persistent queue in RabbitMQ and, once they got confirmation that a row was queued, deleted it from the temporary PG table.
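Here is a rough sketch of that pattern in Python (psycopg2 and pika are assumed, and the files/txlog table names and queue name are invented for illustration – this is not the actual Ubuntu One worker code):

import json
import pika
import psycopg2

def write_change(conn, user_id, payload):
    # Application write path: the permanent row and the txlog row are written
    # in the same transaction, so no change can be lost on this layer.
    with conn:  # commits both inserts atomically
        with conn.cursor() as cur:
            cur.execute("INSERT INTO files (user_id, data) VALUES (%s, %s)",
                        (user_id, json.dumps(payload)))
            cur.execute("INSERT INTO txlog (user_id, data) VALUES (%s, %s)",
                        (user_id, json.dumps(payload)))

def ship_txlog(conn, channel):
    # Worker: move txlog rows to a durable RabbitMQ queue, deleting each row
    # only after the broker has confirmed it (channel.confirm_delivery() is
    # assumed to be enabled, so basic_publish blocks until the confirm).
    # Delivery is at-least-once: a crash here may re-publish a row.
    with conn.cursor() as cur:
        cur.execute("SELECT id, user_id, data FROM txlog ORDER BY id LIMIT 100")
        for row_id, user_id, data in cur.fetchall():
            channel.basic_publish(
                exchange="", routing_key="u1-metadata-changes",
                body=json.dumps({"user_id": user_id, "data": data}),
                properties=pika.BasicProperties(delivery_mode=2))  # persistent
            cur.execute("DELETE FROM txlog WHERE id = %s", (row_id,))
    conn.commit()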
Following that, we took advantage of Rabbit's queue exchange features to build different types of workers that process the data differently depending on what it was (music was stored differently than photos, for example).
Once we completed all of this, accessing someone's photos was a quick and predictable read operation that would give us all their data back in an easy-to-parse format that fits in memory. Eventually we moved all the metadata accessed from the website and REST APIs to these new pre-computed views, and the result was a significant reduction in load on the main DB servers, while now getting predictable sub-second request times for all types of metadata in a horizontally scalable system (just add more workers and Cassandra nodes).

All in all, it took about 6 months end-to-end, which included a prototype phase that used memcache as a key/value store.

You can see the code that wrote to and read from the temporary PG table if you branch the code and look under src/backends/txlog/.
The worker code, as well as the web UI, is not yet available, but it will be once we finish cleaning it up. I decided to write this up and publish it now because I believe the value is more in the architecture than in the code itself   :)

Read more

Over the past few months, James Henstridge, Xavi Garcia Mena, and I have implemented a fast and scalable thumbnailing service for Ubuntu and Ubuntu Touch. This post explains how we did it, and how we achieved our performance and reliability goals.


On a phone as well as the desktop, applications need to display image thumbnails for various media, such as photos, songs, and videos. Creating thumbnails for such media is CPU-intensive and can be costly in bandwidth if images are retrieved over the network. In addition, different types of media require the use of different APIs that are non-trivial to learn. It makes sense to provide thumbnail creation as a platform API that hides this complexity from application developers and, to improve performance, to cache thumbnails on disk.

This article explains the requirements we had and how we implemented a thumbnailer service that is extremely fast and scalable, and robust in the face of power loss or crashes.


We had a number of requirements we wanted to meet in our implementation.

  • Robustness
    In the event of a crash, the implementation must guarantee the integrity of on-disk data structures. This is particularly important on a phone, where we cannot expect the user to perform manual recovery (such as cleaning up damaged files). Because batteries can run out at any time, integrity must be guaranteed even in the face of power loss.
  • Scalability
    It is common for people to store many thousands of songs and photos on a device, so the cache must scale to at least tens of thousands of records. Thumbnails can range in size from a few kilobytes to well over a megabyte (for “thumbnails” at full-screen resolution), so the cache must deal efficiently with large records.
  • Re-usability
    Persistent and reliable on-disk storage of arbitrary records (ranging in size from a few bytes to potentially megabytes) is a common application requirement, so we did not want to create a cache implementation that is specific to thumbnails. Instead, the disk cache is provided as a stand-alone C++ API that can be used for any number of other purposes, such as a browser or HTTP cache, or to build an object file cache similar to ccache.
  • High performance
    The performance of the thumbnailer directly affects the user experience: it is not nice for the customer to look at “please wait a while” icons in, say, an image gallery while thumbnails are being loaded one by one. We therefore had to have a high-performance implementation that delivers cached thumbnails quickly (on the order of a millisecond per thumbnail on an Arm CPU). An efficient implementation also helps to conserve battery life.
  • Location independence and extensibility
    Canonical runs an image server that provides album and artist artwork for many musicians and bands. Images from this server are used to display artwork in the music player for media that contains ID3 tags, but does not embed artwork in the media file. The thumbnailer must work with embedded images as well as remote images, and it must be possible to extend it for new types of media without unduly disturbing the existing code.
  • Low bandwidth consumption
    Mobile phones typically come with data caps, so the cache has to be frugal with network bandwidth.
  • Concurrency and isolation
    The implementation has to allow concurrent access by multiple applications, as well as concurrent access from a single application. Besides needing to be thread-safe, this means that a request for a thumbnail that is slow (such as downloading an image over the network) must not delay other requests.
  • Fault tolerance
    Mobile devices lose network access without warning, and users can add corrupt media files to their device. The implementation must be resilient to partial failures, such as incomplete network replies, dropped connections, and bad image data. Moreover, the recovery strategy for such failures must conserve battery and avoid repeated futile attempts to create thumbnails from media that cannot be retrieved or contains malformed data.
  • Security
    The implementation must ensure that applications cannot see (or, worse, overwrite) each other’s thumbnails or coerce the thumbnailer into delivering images from files that an application is not allowed to read.
  • Asynchronous API
    The customers of the thumbnailer are applications that are written in QML or Qt, which cannot block in the UI thread. The thumbnailer therefore must provide a non-blocking API. Moreover, the application developer should be able to get the best possible performance without having to use threads. Instead, concurrency must be internal to the implementation (which is able to put threads to use intelligently where they make sense), instead of the application throwing threads at the problem in the hope that it might make things faster when, in fact, it might just add overhead.
  • Monitoring
    The effectiveness of a cache cannot be assessed without statistics to show hit and miss rates, evictions, and other basic performance data, so it must provide a way to extract this information.
  • Error reporting
    When something goes wrong with a system service, typically the only way to learn about the problem is to look at log messages. In case of a failure, the implementation must leave enough footprints behind to allow someone to diagnose a failure after the fact with some chance of success.
  • Backward compatibility
    This project was a rewrite of an earlier implementation. Rather than delivering a “big bang” piece of software and potentially upsetting existing clients, we incrementally changed the implementation such that existing applications continued to work. (The only pre-existing interface was a QML interface that required no change.)

System architecture

Here is a high-level overview of the main system components.

External API

To the outside world, the thumbnailer provides two APIs.

One API is a QML plugin that registers itself as an image provider for QQuickAsyncImageProvider. This allows the caller to pass a URI that encodes a query for a local or remote thumbnail at a particular size; if the URI matches the registered provider, QML transfers control to the entry points in our plugin.

The second API is a Qt API that provides three methods:

QSharedPointer<Request> getThumbnail(QString const& filePath,
                                     QSize const& requestedSize);
QSharedPointer<Request> getAlbumArt(QString const& artist,
                                    QString const& album,
                                    QSize const& requestedSize);
QSharedPointer<Request> getArtistArt(QString const& artist,
                                     QString const& album,
                                     QSize const& requestedSize);

The getThumbnail() method extracts thumbnails from local media files, whereas getAlbumArt() and getArtistArt() retrieve artwork from the remote image server. The returned Request object provides a finished signal, and methods to test for success or failure of the request and to extract a thumbnail as a QImage. The request also provides a waitForFinished() method, so the API can be used synchronously.

Thumbnails are delivered to the caller in the size they are requested, subject to a (configurable) 1920-pixel limit. As an escape hatch, requests with width and height of zero deliver artwork at its original size, even if it exceeds the 1920-pixel limit. The scaling algorithm preserves the original aspect ratio and never scales up from the original, so the returned thumbnails may be smaller than their requested size.
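That sizing rule is easy to state in code. The following is just an illustration of the behaviour described above (Python, not the thumbnailer's actual C++):

def target_size(orig_w, orig_h, req_w, req_h, limit=1920):
    # A request of 0x0 is the escape hatch: deliver the original size, uncapped.
    if req_w == 0 and req_h == 0:
        return orig_w, orig_h
    # Cap the request, fit inside the requested box, and never scale up.
    req_w = min(req_w, limit)
    req_h = min(req_h, limit)
    scale = min(req_w / orig_w, req_h / orig_h, 1.0)
    return max(1, round(orig_w * scale)), max(1, round(orig_h * scale))

print(target_size(3264, 1836, 512, 512))   # -> (512, 288)
print(target_size(3264, 1836, 0, 0))       # -> (3264, 1836)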

DBus service

The thumbnailer is implemented as a DBus service with two interfaces. The first interface provides the server-side implementation of the three methods of the external API; the second interface is an administrative interface that can deliver statistics, clear the internal disk caches, and shut down the service. A simple tool, thumbnailer-admin, allows both interfaces to be called from the command line.

To conserve resources, the service is started on demand by DBus and shuts down after 30 seconds of idle time.

Image extraction

Image extraction uses an abstract base class. This interface is independent of media location and type. The actual image extraction is performed by derived implementations that download images from the remote server, extract them from local image files, or extract them from local streaming media files. This keeps knowledge of image location and encoding out of the main caching and error handling logic, and allows us to support new media types (whether local or remote) by simply adding extra derived implementations.

Image extraction is asynchronous, with currently three implementations:

  • Image downloader
    To retrieve artwork from the remote image server, the service talks to an abstract base class with asynchronous download_album() and download_artist() methods. This allows multiple downloads to run concurrently and makes it easy to add new local or remote image providers without disturbing the code for existing ones. A class derived from that abstract base implements a REST API with QNetworkAccessManager to retrieve images from the remote server.
  • Photo extractor
    The photo extractor is responsible for delivering images from local image files, such as JPEG or PNG files. It simply delegates that work to the image converter and scaler.
  • Audio and video thumbnail extractor
    To extract thumbnails from audio and video files, we use GStreamer. Due to reliability problems with some codecs that can hang or crash, we delegate the task to a separate vs-thumb executable. This shields the service from failures and also allows us to run several GStreamer pipelines concurrently without a crash of one pipeline affecting the others.

Image converter and scaler

We use a simple Image class with a synchronous interface to convert and scale different image formats to JPEG. The implementation uses Gdk-Pixbuf, which can handle many different input formats and is very efficient.

For JPEG source images, the code checks for the presence of EXIF data using libexif and, if it contains a thumbnail that is at least as large as the requested size, scales the thumbnail from the EXIF data. (For images taken with the camera on a Nexus 4, the original image size is 3264×1836, with an embedded EXIF thumbnail of 512×288. Scaling from the EXIF thumbnail is around one hundred times faster than scaling from the full-size image.)

Disk cache

The thumbnailer service optimizes performance and conserves bandwidth and battery by adopting a layered caching strategy.

Two-level caching with failure lookup

Internally, the service uses three separate on-disk caches:

  • Full-size cache
    This cache stores images that are expensive to retrieve (images that are remote or are embedded in audio and video files) at original resolution (scaled down to a 1920-pixel bounding box if the original image is larger). The default size of this cache is 50 MB, which is sufficient to hold around 400 images at 1920×1080 resolution. Images are stored in JPEG format (at a 90% quality setting).
  • Thumbnail cache
    This cache stores thumbnails at the size that was requested by the caller, such as 512×288. The default size of this cache is 100 MB, which is sufficient to store around 11,000 thumbnails at 512×288, or around 25,000 thumbnails at 256×144.
  • Failure cache
    The failure cache stores the keys for images that could not be extracted because of a failure. For remote images, this means that the server returned an authoritative answer “no such image exists”, or that we encountered an unexpected (non-authoritative) failure, such as the server not responding or a DNS lookup timing out. For local images, it means either that the image data could not be processed because it is damaged, or that an audio file does not contain embedded artwork.

The full-size cache exists because it is likely that an application will request thumbnails at different sizes for the same image. For example, when scrolling through a list of songs that shows a small thumbnail of the album cover beside each song, the user is likely to select one of the songs to play, at which point the media player will display the same cover in a larger size. By keeping full-size images in a separate (smallish) cache, we avoid performing an expensive extraction or download a second time. Instead, we create additional thumbnails by scaling them from the full-size cache (which uses an LRU eviction policy).

The thumbnail cache stores thumbnails that were previously retrieved, also using LRU eviction. Thumbnails are stored as JPEG at the default quality setting of 75%, at the actual size that was requested by the caller. Storing JPEG images (rather than, say, PNG) saves space and increases cache effectiveness. (The minimal quality loss from compression is irrelevant for thumbnails). Because we store thumbnails at the size they are actually needed, we may have several thumbnails for the same image in the cache (each thumbnail at a different size). But applications typically ask for thumbnails in only a small number of sizes, and ask for different sizes for the same image only rarely. So, the slight increase in disk space is minor and amply repaid by applications not having to scale thumbnails after they receive them from the cache, which saves battery and achieves better performance overall.

Finally, the failure cache is used to stop futile attempts to repeatedly extract a thumbnail when we know that the attempt will fail. It uses LRU eviction with an expiry time for each entry.

Cache lookup algorithm

When asked for a thumbnail at a particular size, the lookup and thumbnail generation proceed as follows:

  1. Check if a thumbnail exists in the requested size in the thumbnail cache. If so, return it.
  2. Check if a full-size image for the thumbnail exists in the full-size cache. If so, scale the new thumbnail from the full-size image, add the thumbnail to the thumbnail cache, and return it.
  3. Check if there is an entry for the thumbnail in the failure cache. If so, return an error.
  4. Attempt to download or extract the original image for the thumbnail. If the attempt fails, add an entry to the failure cache and return an error.
  5. If the original image was delivered by the remote server or was extracted locally from streaming media, add it to the full-size cache.
  6. Scale the thumbnail to the desired size, add it to the thumbnail cache, and return it.

Note that these steps represent only the logical flow of control for a particular thumbnail. The implementation executes these steps concurrently for different thumbnails.
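For illustration, here is the same lookup logic as a small Python sketch. Plain dicts stand in for the on-disk caches, and scale() and extract_original() are hypothetical stand-ins; only the control flow mirrors the real service:

thumb_cache, fullsize_cache, failure_cache = {}, {}, {}

def scale(image, size):
    return f"{image}@{size[0]}x{size[1]}"        # stand-in for real scaling

def extract_original(key):
    return f"original({key})", True              # (image, was_expensive)

def lookup(key, size):
    thumb = thumb_cache.get((key, size))
    if thumb is not None:                        # 1. thumbnail cache hit
        return thumb
    full = fullsize_cache.get(key)
    if full is not None:                         # 2. scale from full-size image
        thumb_cache[(key, size)] = thumb = scale(full, size)
        return thumb
    if key in failure_cache:                     # 3. known failure
        raise LookupError(f"no thumbnail for {key}")
    try:
        full, expensive = extract_original(key)  # 4. download or extract
    except Exception:
        failure_cache[key] = "failed"
        raise
    if expensive:                                # 5. cache the full-size image
        fullsize_cache[key] = full
    thumb_cache[(key, size)] = thumb = scale(full, size)  # 6. scale and cache
    return thumb

print(lookup("album-art:some-artist/some-album", (256, 256)))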

Designing for performance

Apart from fast on-disk caches (see below), the thumbnailer must make efficient use of I/O bandwidth and threads. This not only means making things fast, but also not needlessly wasting resources such as threads, memory, network connections, or file descriptors. Provided that enough requests are made to keep the service busy, we do not want it to ever wait for a download or image extraction to complete while there is something else that could be done in the meantime, and we want it to keep all CPU cores busy. In addition, requests that are slow (because they require a download or a CPU-intensive image extraction) must not block requests that are queued up behind them if those requests would result in cache hits that could be returned immediately.

To achieve a high degree of concurrency without blocking on long-running operations while holding precious resources, the thumbnailer uses a three-phase lookup algorithm:

  1. In phase 1, we look at the caches to determine if we have a hit or an authoritative miss. Phase 1 is very fast. (It takes around a millisecond to return a thumbnail from the cache on a Nexus 4.) However, cache lookup can briefly stall on disk I/O or require a lot of CPU to extract and scale an image. To get good performance, phase 1 requests are passed to a thread pool with as many threads as there are CPU cores. This allows the maximum number of lookups to proceed concurrently.
  2. Phase 2 is initiated if phase 1 determines that a thumbnail requires download or extraction, either of which can take on the order of seconds. (In case of extraction from local media, the task is CPU intensive; in case of download, most of the time is spent waiting for the reply from the server.) This phase is scheduled asynchronously from an event loop. This minimizes task switching and allows large numbers of requests to be queued while only using a few bytes for each request that is waiting in the queue.
  3. Phase 3 is really a repeat of phase 1: if phase 2 produces a thumbnail, it adds it to the cache; if phase 2 does not produce a thumbnail, it creates an entry in the failure cache. By simply repeating phase 1, the lookup then results in either a thumbnail or an error.

If phase 2 determines that a download or extraction is required, that work is performed concurrently: the service schedules several downloads and extractions in parallel. By default, it will run up to two concurrent downloads, and as many concurrent GStreamer pipelines as there are CPUs. This ensures that we use all of the available CPU cores. Moreover, download and extraction run concurrently with lookups for phase 1 and 3. This means that, even if a cache lookup briefly stalls on I/O, there is a good chance that another thread can make use of the CPU.

Because slow operations do not block lookup, this also ensures that a slow request does not stall requests for thumbnails that are already in the cache. In other words, it does not matter how many slow requests are in progress: requests that can be completed quickly are indeed completed quickly, regardless of what is going on elsewhere.

Overall, this strategy works very well. For example, with sufficient workload, the service achieves around 750% CPU utilization on an 8-core desktop machine, while still delivering cache hits almost instantaneously. (On a Nexus 4, cache hits take a little over 1 ms while concurrent extractions or downloads are in progress.)

A re-usable persistent cache for C++

The three internal caches are implemented by a small and flexible C++ API. This API is available as a separate reusable PersistentStringCache component (see persistent-cache-cpp) that provides a persistent store of arbitrary key–value pairs. Keys and values can be binary, and entries can be large. (Megabyte-sized values do not present a problem.)

The implementation uses leveldb, which provides a very fast NoSQL database that scales to multi-gigabyte sizes and provides integrity guarantees. In particular, if the calling process crashes, all inserts that completed at the API level will be intact after a restart. (In case of a power failure or kernel crash, a few buffered inserts can be lost, but the integrity of the database is still guaranteed.)

To use a cache, the caller instantiates it with a path name, a maximum size, and an eviction policy. The eviction policy can be set to either strict LRU (least-recently-used) or LRU with an expiry time. Once a cache reaches its maximum size, expired entries (if any) are evicted first and, if that does not free enough space for a new entry, entries are discarded in least-recently-used order until enough room is available to insert a new record. (In all other respects, expired entries behave like entries that were never added.)
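The eviction rules can be illustrated with a toy in-memory version (expired entries go first, then least-recently-used entries until the new record fits). This Python sketch only illustrates the policy; the real cache is the on-disk, leveldb-backed implementation described here:

import time
from collections import OrderedDict

class TinyLruTtlCache:
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.entries = OrderedDict()     # key -> (value, expiry or None)

    def get(self, key):
        item = self.entries.get(key)
        if item is None:
            return None
        value, expiry = item
        if expiry is not None and expiry < time.time():
            self._drop(key)              # expired entries act as if never added
            return None
        self.entries.move_to_end(key)    # mark as most recently used
        return value

    def put(self, key, value, ttl=None):
        if key in self.entries:
            self._drop(key)
        now = time.time()
        # Evict expired entries first ...
        for k in [k for k, (_, e) in self.entries.items() if e is not None and e < now]:
            self._drop(k)
        # ... then evict in LRU order until the new value fits.
        while self.used + len(value) > self.max_bytes and self.entries:
            self._drop(next(iter(self.entries)))
        self.entries[key] = (value, now + ttl if ttl is not None else None)
        self.used += len(value)

    def _drop(self, key):
        value, _ = self.entries.pop(key)
        self.used -= len(value)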

A simple get/put API allows records to be retrieved and added, for example:

auto c = core::PersistentStringCache::open(
    "my_cache", 100 * 1024 * 1024, core::CacheDiscardPolicy::lru_only);
// Look for an entry and add it if there is a cache miss.
string key = "Bjarne";
auto value = c->get(key);
if (value) {
    cout << key << ": " << *value << endl;
} else {
    value = "C++ inventor";  // Provide a value for the key.
    c->put(key, *value);     // Insert it.
}

Running this program prints nothing on the first run, and “Bjarne: C++ inventor” on all subsequent runs.

The API also allows application-specific metadata to be added to records, provides detailed statistics, supports dynamic resizing of caches, and offers a simple adapter template that makes it easy to store complex user-defined types without the need to clutter the code with explicit serialization and deserialization calls. (In a pinch, if iteration is not needed, the cache can be used as a persistent map by setting an impossibly large cache size, in which case no records are ever evicted.)


Our benchmarks indicate good performance. (Figures are for an Intel Ivy Bridge i7-3770k 3.5 GHz machine with a 256 GB SSD.) Our test uses 60-byte string keys. Values are binary blobs filled with random data (so they are not compressible), 20 kB in size with a standard deviation of 7,000, so the majority of values are 13–27 kB in size. The cache size is 100 MB, so it contains around 5,000 records.

Filling the cache with 100 MB of records takes around 2.8 seconds. Thereafter, the benchmark does a random lookup with an 80% hit probability. In case of a cache miss, it inserts a new random record, evicting old records in LRU order to make room for the new one. For 100,000 iterations, the cache returns around 4,800 “thumbnails” per second, with an aggregate read/write throughput of around 93 MB/sec. At 90% hit rate, we see a 50% performance increase at around 7,100 records/sec. (Writes are expensive once the cache is full due to the need to evict entries, which requires updating the main cache table as well as an index.)

Repeating the test with a 1 GB cache produces identical timings so (within limits) performance remains constant for large databases.

Overall, performance is restricted largely by the bandwidth to disk. With a 7,200 rpm disk, we measured around one third of the performance with an SSD.

Recovering from errors

The overall design of the thumbnailer delivers good performance when things work. However, our implementation has to deal with the unexpected, such as network requests that do not return responses, GStreamer pipelines that crash, request overload, and so on. What follows is a partial list of steps we took to ensure that things behave sensibly, particularly on a battery-powered device.

Retry strategy

The failure cache provides an effective way to stop the service from endlessly trying to create thumbnails that, in an earlier attempt, returned an error.

For remote images, we know that, if the server has (authoritatively) told us that it has no artwork for a particular artist or album, it is unlikely that artwork will appear any time soon. However, the server may be updated with more artwork periodically. To deal with this, we add an expiry time of one week to the entries in the failure cache. That way, we do not try to retrieve the same image again until at least one week has passed (and only if we receive a request for a thumbnail for that image again later).

As opposed to authoritative answers from the image server (“I do not have artwork for this artist.”), we can also encounter transient failures. For example, the server may currently be down, or there may be some other network-related issue. In this case, we remember the time of the failure and do not try to contact the remote server again for two hours. This conserves bandwidth and battery power.

The device may also be disconnected from the network, in which case any attempt to retrieve a remote image is doomed. Our implementation returns failure immediately on a cache miss for a remote image if no network is present or the device is in flight mode. (We do not add an entry to the failure cache in this case).

For local files, we know that, if an attempt to get a thumbnail for a particular file has failed, future attempts will fail as well. This means that the only way for the problem to get fixed is by modifying or replacing the actual media file. To deal with this, we add the inode number, modification time, and inode modification time to the key for local images. If a user replaces, say, a music file with a new one that contains artwork, we automatically pick up the new version of the file because its key has changed; the old version will eventually fall out of the cache.
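For illustration, a key along those lines could be built like this (a sketch only; the actual key format used by the service is not spelled out in this post):

import os

def local_cache_key(path: str) -> str:
    # The key changes whenever the file is modified or replaced, so thumbnails
    # and failure entries for the old version simply age out of the caches.
    st = os.stat(path)
    return f"{path}:{st.st_ino}:{st.st_mtime_ns}:{st.st_ctime_ns}"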

Download and extraction failures

We monitor downloads and extractions for timely completion. (Timeouts for downloads and extractions can be configured separately.) If the server does not respond within 10 seconds, we abandon the attempt and treat it as a transient network error. Similarly, the vs-thumb processes that extract images from audio and video files can hang. We monitor these processes and kill them if they do not produce a result within 10 seconds.

Database corruption

Assuming an error-free implementation of leveldb, database corruption is impossible. However, in practice, an errant command could scribble over the database files. If leveldb detects that the database is corrupted, the recovery strategy is simple: we delete the on-disk cache and start again from scratch. Because the cache contents are ephemeral anyway, this is fine (other than slower operation until the working set of thumbnails makes it into the cache again).

Dealing with backlog

The asynchronous API provided by the service allows an application to submit an unlimited number of requests. Lots of requests happen if, for example, the user has inserted a flash card with thousands of photos into the device and then requests a gallery view for the collection. If the service’s client-side API blindly forwards requests via DBus, this causes a problem because DBus terminates the connection once there are more than around 400 outstanding requests.

To deal with this, we limit the number of outstanding requests to 20 and send another request via DBus only when an earlier request completes. Additional requests are queued in memory. Because this happens on the client side, the number of outstanding requests is limited only by the amount of memory that is available to the client.
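The client-side throttling can be sketched like this (hypothetical names; the real client library is C++/Qt rather than Python):

from collections import deque

class RequestThrottle:
    # Forward at most `limit` requests at a time; queue the rest in memory.
    def __init__(self, send, limit=20):
        self.send = send           # callable that actually issues the DBus call
        self.limit = limit
        self.in_flight = 0
        self.pending = deque()

    def submit(self, request):
        if self.in_flight < self.limit:
            self.in_flight += 1
            self.send(request)
        else:
            self.pending.append(request)

    def on_finished(self):
        # Called when an outstanding request completes: reuse the slot for a
        # queued request if there is one, otherwise release it.
        if self.pending:
            self.send(self.pending.popleft())
        else:
            self.in_flight -= 1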

A related problem arises if a client submits many requests for a thumbnail for the same image. This happens when, for example, the user looks at a list of tracks: tracks that belong to the same album have the same artwork. If artwork needs to be retrieved from the remote server, naively forwarding cache misses for each thumbnail to the server would end up re-downloading the same image several times.

We deal with this by maintaining an in-memory map of all remote download requests that are currently in progress. If phase 1 reports a cache miss, before initiating a download, we add the key for the remote image to the map and remove it again once the download completes. If more requests for the same image encounter a cache miss while the download for the original request is still in progress, the key for the in-progress download is still in the map, and we hold additional requests for the same image until the download completes. We then schedule the held requests as usual and create their thumbnails from the image that was cached by the first request.
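That de-duplication can be pictured roughly as follows (a threading sketch with concurrent.futures; the real service uses its own event loop and C++ types):

import threading
from concurrent.futures import Future

class DownloadCoalescer:
    # At most one download per remote image key is in progress; later requests
    # for the same key wait on the first download's Future.
    def __init__(self, download):
        self.download = download      # callable: key -> image bytes
        self.in_progress = {}         # key -> Future
        self.lock = threading.Lock()

    def fetch(self, key):
        with self.lock:
            fut = self.in_progress.get(key)
            if fut is not None:
                return fut            # piggyback on the download already running
            fut = Future()
            self.in_progress[key] = fut
        try:
            fut.set_result(self.download(key))
        except Exception as exc:
            fut.set_exception(exc)
        finally:
            with self.lock:
                del self.in_progress[key]
        return fut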


The thumbnailer runs with normal user privileges. We use AppArmor’s aa_query_label() function to verify that the calling client has read access to a file it wants a thumbnail for. This prevents one application from accessing thumbnails produced by a different application, unless both applications can read the original file. In addition, we place the entire service under an AppArmor profile to ensure that it can write only to its own cache directory.


Overall, we are very pleased with the design and performance of the thumbnailer. Each component has a clearly defined role with a clean interface, which made it easy for us to experiment and to refine the design as we went along. The design is extensible, so we can support additional media types or remote data sources without disturbing the existing code.

We used threads sparingly and only where we saw worthwhile concurrency opportunities. Using asynchronous interfaces for long-running operations kept resource usage to a minimum and allowed us to take advantage of I/O interleaving. In turn, this extracts the best possible performance from the hardware.

The thumbnailer now runs on Ubuntu Touch and is used by the gallery, camera, and music apps, as well as for all scopes that display media thumbnails.

Read more
Dustin Kirkland

Canonical is delighted to sponsor ContainerCon 2015, a Linux Foundation event in Seattle next week, August 17-19, 2015. It's quite exciting to see the A-list of sponsors, many of them newcomers to this particular technology, teeming with energy around containers.

From chroots to BSD Jails and Solaris Zones, the concepts behind containers were established decades ago, and in fact traverse the spectrum of server operating systems. At Canonical, we've been working on containers in Ubuntu for more than half a decade, providing a home and resources for stewardship and maintenance of the upstream Linux Containers (LXC) project since 2010.

Last year, we publicly shared our designs for LXD -- a new stratum on top of LXC that endows the advantages of a traditional hypervisor into the faster, more efficient world of containers.

Those designs are now reality, with the open source Golang code readily available on Github, and Ubuntu packages available in a PPA for all supported releases of Ubuntu, and already in the Ubuntu 15.10 beta development tree. With ease, you can launch your first LXD containers in seconds, following this simple guide.

LXD is a persistent daemon that provides a clean RESTful interface to manage (start, stop, clone, migrate, etc.) any of the containers on a given host.

Hosts running LXD are handily federated into clusters of container hypervisors, and can work as Nova Compute nodes in OpenStack, for example, delivering Infrastructure-as-a-Service cloud technology at lower costs and greater speeds.

Here, LXD and Docker are quite complementary technologies. LXD furnishes a dynamic platform for "system containers" -- containers that behave like physical or virtual machines, supplying all of the functionality of a full operating system (minus the kernel, which is shared with the host). Such "machine containers" are the core of IaaS clouds, where users focus on instances with compute, storage, and networking that behave like traditional datacenter hardware.

LXD runs perfectly well along with Docker, which supplies a framework for "application containers" -- containers that enclose individual processes that often relate to one another as pools of micro services and deliver complex web applications.

Moreover, the Zen of LXD is the fact that the underlying container implementation is actually decoupled from the RESTful API that drives LXD functionality. We are most excited to discuss next week at ContainerCon our work with Microsoft around the LXD RESTful API, as a cross-platform container management layer.

Ben Armstrong, a Principal Program Manager Lead at Microsoft on the core virtualization and container technologies, has this to say:
“As Microsoft is working to bring Windows Server Containers to the world – we are excited to see all the innovation happening across the industry, and have been collaborating with many projects to encourage and foster this environment. Canonical’s LXD project is providing a new way for people to look at and interact with container technologies. Utilizing ‘system containers’ to bring the advantages of container technology to the core of your cloud infrastructure is a great concept. We are looking forward to seeing the results of our engagement with Canonical in this space.”
Finally, if you're in Seattle next week, we hope you'll join us for the technical sessions we're leading at ContainerCon 2015, including: "Putting the D in LXD: Migration of Linux Containers", "Container Security - Past, Present, and Future", and "Large Scale Container Management with LXD and OpenStack". Details are below.
Date: Monday, August 17 • 2:20pm - 3:10pm
Title: Large Scale Container Management with LXD and OpenStack
Speaker: Stéphane Graber
Location: Grand Ballroom B
Date: Wednesday, August 19 • 10:25am - 11:15am
Title: Putting the D in LXD: Migration of Linux Containers
Speaker: Tycho Andersen
Location: Willow A
Date: Wednesday, August 19 • 3:00pm - 3:50pm
Title: Container Security - Past, Present and Future
Speaker: Serge Hallyn
Location: Ravenna

Read more
Dustin Kirkland

The Golden Ratio is one of the oldest and most visible irrational numbers known to humanity.  Pi is perhaps more famous, but the Golden Ratio is found in more of our art, architecture, and culture throughout human history.

I think of the Golden Ratio as sort of "Pi in 1 dimension".  Whereas Pi is the ratio of a circle's circumference to its diameter, the Golden Ratio is the ratio of a whole to its larger part, such that this ratio equals the ratio of the larger part to the remainder.

Visually, this diagram from Wikipedia helps explain it:

We find the Golden Ratio in the architecture of antiquity, from the Egyptians to the Greeks to the Romans, right up to the Renaissance and even modern times.

While the base of each pyramid is a square, the Golden Ratio can be observed in the base and the hypotenuse of a basic triangular cross section, like so:

The floor plan of the Parthenon has a width/depth ratio matching the Golden Ratio...

For the first 300 years of printing, nearly all books were printed on pages whose length to width ratio matched that of the Golden Ratio.

Leonardo da Vinci used the Golden Ratio throughout his works.  I'm told that his Vitruvian Man displays the Golden Ratio...

From school, you probably remember that the Golden Ratio is approximately 1.6 (and change).
There's a strong chance that your computer or laptop monitor has a 16:10 aspect ratio.  Does 1280x800 or 1680x1050 sound familiar?

That ~1.6 number is only an approximation, of course.  The Golden Ratio is in fact an irrational number and can be calculated to much greater precision through several different representations, including:
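most famously the closed form

φ = (1 + √5) / 2 ≈ 1.6180339887…

as well as the continued fraction 1 + 1/(1 + 1/(1 + ⋯)) and the nested radical √(1 + √(1 + √(1 + ⋯))).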

You can plug that number into your computer's calculator and crank out a dozen or so significant digits.

However, if you want to go much farther than that, Alexander Yee has created a program called y-cruncher, which has been used to calculate most of the famous constants to world record precision.  (Sorry, free software readers of this blog -- y-cruncher is not open source code...)

I came across y-cruncher a few weeks ago when I was working on the mprime post, demonstrating how you can easily put any workload into a Docker container and then produce both Juju Charms and Ubuntu Snaps that package easily.  While I opted to use mprime in that post, I saved y-cruncher for this one :-)

Also, while doing some network benchmark testing of The Fan Networking among Docker containers, I experimented for the first time with some of Amazon's biggest instances, which have dedicated 10gbps network links.  While I had a couple of those instances up, I did some small scale benchmarking of y-cruncher.

Presently, none of the mathematical constant records are even remotely approachable with CPU and memory alone.  All of them require multiple terabytes of disk, which acts as a sort of swap space for temporary files, as bits are moved in and out of memory while the CPU crunches.  As such, approaching these records is overwhelmingly I/O bound -- not CPU or memory bound, as you might imagine.

After a variety of tests, I settled on the AWS d2.2xlarge instance size as the most affordable instance size to break the previous Golden Ratio record (1 trillion digits, by Alexander Yee on his gaming PC in 2010).  I say "affordable", in that I could have cracked that record "2x faster" with a d2.4xlarge or d2.8xlarge, however, I would have paid much more (4x) for the total instance hours.  This was purely an economic decision :-)

Let's geek out on technical specifications for a second...  So what's in a d2.2xlarge?
  • 8x Intel Xeon CPUs (E5-2676 v3 @ 2.4GHz)
  • 60GB of Memory
  • 6x 2TB HDDs
First, I arranged all 6 of those 2TB disks into a RAID0 with mdadm, and formatted it with xfs (which performed better than ext4 or btrfs in my cursory tests).

$ sudo mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=6 /dev/xvd?
$ sudo mkfs.xfs /dev/md0
$ sudo mount /dev/md0 /mnt
$ df -h /mnt
/dev/md0 11T 34M 11T 1% /mnt

Here's a brief look at raw read performance with hdparm:

$ sudo hdparm -tT /dev/md0
Timing cached reads: 21126 MB in 2.00 seconds = 10576.60 MB/sec
Timing buffered disk reads: 1784 MB in 3.00 seconds = 593.88 MB/sec

The beauty here of RAID0 is that each of the 6 disks can be used to read and/or write simultaneously, perfectly in parallel.  600 MB/sec is pretty quick reads by any measure!  In fact, when I tested the d2.8xlarge, I put all 24x 2TB disks into the same RAID0 and saw nearly 2.4 GB/sec read performance across that 48TB array!

With /dev/md0 mounted on /mnt and writable by my ubuntu user, I kicked off y-cruncher with these parameters:

Program Version:       0.6.8 Build 9461 (Linux - x64 AVX2 ~ Airi)
Constant: Golden Ratio
Algorithm: Newton's Method
Decimal Digits: 2,000,000,000,000
Hexadecimal Digits: 1,660,964,047,444
Threading Mode: Thread Spawn (1 Thread/Task) ? / 8
Computation Mode: Swap Mode
Working Memory: 61,342,174,048 bytes ( 57.1 GiB )
Logical Disk Usage: 8,851,913,469,608 bytes ( 8.05 TiB )

Byobu was very handy here, being able to track in the bottom status bar my CPU load, memory usage, disk usage, and disk I/O, as well as connecting and disconnecting from the running session multiple times over the 4 days of running.

And approximately 79 hours later, it finished successfully!

Start Date:            Thu Jul 16 03:54:11 2015
End Date: Sun Jul 19 11:14:52 2015

Computation Time: 221548.583 seconds
Total Time: 285640.965 seconds

CPU Utilization: 315.469 %
Multi-core Efficiency: 39.434 %

Last Digits:
5027026274 0209627284 1999836114 2950866539 8538613661 : 1,999,999,999,950
2578388470 9290671113 7339871816 2353911433 7831736127 : 2,000,000,000,000

Amazingly, another person (whom I don't know), named Ron Watkins, performed the exact same computation and published his results within 24 hours, on July 22nd/23rd.  As such, Ron and I are "sharing" credit for the Golden Ratio record.

Now, let's talk about the economics here, which I think are the most interesting part of this post.

Look at the above chart of records, which is published on the y-cruncher page; the vast majority of those records have been calculated on physical PCs -- most of them seem to be gaming PCs running Windows.

What's different about my approach is that I used Linux in the Cloud -- specifically Ubuntu in AWS.  I paid hourly (actually, my employer, Canonical, reimbursed me for that expense, thanks!)  It took right at 160 hours to run the initial calculation (79 hours) as well as the verification calculation (81 hours), at the current rate of $1.38/hour for a d2.2xlarge, which is a grand total of $220!

$220 is a small fraction of the cost of 6x 2TB disks, 60 GB of memory, or 8 Xeon cores, not to mention the electricity and cooling required to run a system of this size (~750W) for 160 hours.

If we say the first trillion digits were already known from the previous record, that comes out to approximately 4.5 billion record-digits per dollar, and 12.5 billion record-digits per hour!

Hopefully you find this as fascinating as I do!


Read more

T – 242d!

It’s only 242 days until April 1st, 2016, the month in which another great Ubuntu Long Term Support (LTS) release will be born. Ubuntu 16.04 will be the most sophisticated release of Ubuntu so far. In my old/new role as Canonical’s Shepherd for all things related to Ubuntu Client (meaning Ubuntu, Phones, Tablets and everything related), […]

Read more
Dustin Kirkland

tl;dr:  Your Ubuntu-based container is not a copyright violation.  Nothing to see here.  Carry on.
I am speaking for my employer, Canonical, when I say you are not violating our policies if you use Ubuntu with Docker in sensible, secure ways.  Some have claimed otherwise, but that’s simply sensationalist and untrue.

Canonical publishes Ubuntu images for Docker specifically so that they will be useful to people. You are encouraged to use them! We see no conflict between our policies and the common sense use of Docker.

Going further, we distribute Ubuntu in many different signed formats -- ISOs, root tarballs, VMDKs, AMIs, IMGs, Docker images, among others.  We take great pride in this work, and provide them to the world at large in public clouds like AWS, GCE, and Azure, as well as in OpenStack and on DockerHub.  These images, and their signatures, are mirrored by hundreds of organizations all around the world. We would not publish Ubuntu in the DockerHub if we didn’t hope it would be useful to people using the DockerHub. We’re delighted for you to use them in your public clouds, private clouds, and bare metal deployments.

Any Docker user will recognize these, as the majority of all Dockerfiles start with these two words....

FROM ubuntu

In fact, we gave away hundreds of these t-shirts at DockerCon.

We explicitly encourage distribution and redistribution of Ubuntu images and packages! We also embrace a very wide range of community remixes and modifications. We go further than any other commercially supported Linux vendor to support developers and community members scratching their itches. There are dozens of such derivatives and many more commercial initiatives based on Ubuntu - we are definitely not trying to create friction for people who want to get stuff done with Ubuntu.

Our policy exists to ensure that when you receive something that claims to be Ubuntu, you can trust that it will work to the same standard, regardless of where you got it from. And people everywhere tell us they appreciate that - when they get Ubuntu on a cloud or as a VM, it works, and they can trust it.  That concept is actually hundreds of years old, and we’ll talk more about that in a minute....

So, what do I mean by “sensible use” of Docker? In short - secure use of Docker. If you are using a Docker container then you are effectively giving the producer of that container ‘root’ on your host. We can safely assume that people sharing an Ubuntu docker based container know and trust one another, and their use of Ubuntu is explicitly covered as personal use in our policy. If you trust someone to give you a Docker container and have root on your system, then you can handle the risk that they inadvertently or deliberately compromise the integrity or reliability of your system.

Our policy distinguishes between personal use, which we can generalise to any group of collaborators who share root passwords, and third party redistribution, which is what people do when they exchange OS images with strangers.

Third party redistribution is more complicated because, when things go wrong, there’s a real question as to who is responsible for it. Here’s a real example: a school district buys laptops for all their students with free software. A local supplier takes their preferred Linux distribution and modifies parts of it (like the kernel) to work on their hardware, and sells them all the PCs. A month later, a distro kernel update breaks all the school laptops. In this case, the Linux distro who was not involved gets all the bad headlines, and the free software advocates who promoted the whole idea end up with egg on their faces.

We’ve seen such cases in real hardware, and in public clouds and other, similar environments.  Digital Ocean very famously published some modified and very broken Ubuntu images, outside of Canonical's policies.  That's inherently wrong, and easily avoidable.

So we simply say, if you’re going to redistribute Ubuntu to third parties who are trusting both you and Ubuntu to get it right, come and talk to Canonical and we’ll work out how to ensure everybody gets what they want and need.

Here’s a real exercise I hope you’ll try...

  1. Head over to your local purveyor of fine wines and liquors.
  2. Pick up a nice bottle of Champagne, Single Malt Scotch Whisky, Kentucky Straight Bourbon Whiskey, or my favorite -- a rare bottle of Lambic Oude Gueze.
  3. Carefully check the label, looking for a seal of Appellation d'origine contrôlée.
  4. In doing so, that bottle should earn your confidence that it was produced according to strict quality, format, and geographic standards.
  5. Before you pop the cork, check the seal, to ensure it hasn’t been opened or tampered with.  Now, drink it however you like.
  6. Pour that Champagne over orange juice (if you must).  Toss a couple ice cubes in your Scotch (if that’s really how you like it).  Pour that Bourbon over a Coke (if that’s what you want).
  7. Enjoy however you like -- straight up or mixed to taste -- with your own guests in the privacy of your home.  Just please don’t pour those concoctions back into the bottle, shove a cork in, put them back on the shelf at your local liquor store and try to pass them off as Champagne/Scotch/Bourbon.

Rather, if that’s really what you want to do -- distribute a modified version of Ubuntu -- simply contact us and ask us first (thanks for sharing that link, mjg59).  We have some amazing tools that can help you either avoid that situation entirely, or at least do everyone a service and let us help you do it well.

Believe it or not, we’re really quite reasonable people!  Canonical has a lengthy, public track record, donating infrastructure and resources to many derivative Ubuntu distributions.  Moreover, we’ve successfully contracted mutually beneficial distribution agreements with numerous organizations and enterprises. The result is happy users and happy companies.

FROM ubuntu,

The one and only Champagne region of France

Read more
Dustin Kirkland

As you probably remember from grade school math class, primes are numbers that are only divisible by 1 and themselves.  2, 3, 5, 7, and 11 are the first 5 prime numbers, for example.

Many computer operations, such as public-key cryptography, depend entirely on prime numbers.  In fact, RSA encryption, invented in 1978, uses a modulus that is the product of two very large primes for encryption and decryption.  The security of asymmetric encryption is tightly coupled with the computational difficulty of factoring large numbers.  I actually use prime numbers as the status update intervals in Byobu, in order to improve performance and distribute the update spikes.

Euclid proved that there are infinitely many prime numbers around 300 BC.  But the Prime Number Theorem (proven in the 19th century) says that the probability that a given number is prime is inversely proportional to its number of digits.  That means that larger prime numbers are notoriously harder to find, and it only gets harder as they get bigger!
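
To put some rough numbers on that: a randomly chosen 100-digit number has only about a 1-in-230 chance of being prime (1/ln(10^100) ≈ 1/230), while a randomly chosen number with 17 million digits has odds of roughly 1 in 40 million.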
What's the largest known prime number in the world?

Well, it has 17,425,170 decimal digits!  If you wanted to print it out, size 11 font, it would take 6,543 pages -- or 14 reams of paper!

That number is actually one less than a very large power of 2: 2^57,885,161 - 1.  It was discovered by Curtis Cooper on January 25, 2013, on an Intel Core2 Duo.

Actually, each of the last 14 record largest prime numbers discovered (between 1996 and today) has been of that form, 2^P-1.  Numbers of that form are called Mersenne Prime Numbers, named after Friar Marin Mersenne, a French priest who studied them in the 1600s.
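
You can get a feel for small Mersenne numbers with nothing more than coreutils' factor command; note that not every number of the form 2^P-1 is prime -- 2^11-1 = 2047 = 23 × 89, for example:

$ for p in 2 3 5 7 11 13; do factor $((2**p - 1)); done
3: 3
7: 7
31: 31
127: 127
2047: 23 89
8191: 8191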

Friar Mersenne's work continues today in the form of the Great Internet Mersenne Prime Search, and the mprime program, which has been used to find those 14 huge prime numbers since 1996.

mprime is a massively parallel, CPU-scavenging utility, much like SETI@home or the Protein Folding Project.  It runs in the background, consuming otherwise-idle resources, working on its little piece of the problem.  mprime is open source code, and also distributed as a statically compiled binary.  And it will make a fine example of how to package a service into a Docker container, a Juju charm, and a Snappy snap.

Docker Container

First, let's build the Docker container, which will serve as our fundamental building block.  You'll first need to download the mprime tarball from here.  Extract it, and the directory structure should look a little like this (or you can browse it here):

├── license.txt
├── local.txt
├── mprime
├── prime.log
├── prime.txt
├── readme.txt
├── results.txt
├── stress.txt
├── undoc.txt
├── whatsnew.txt
└── worktodo.txt

And then create a Dockerfile that copies the files we need into the image.  Here's our example.

FROM ubuntu
MAINTAINER Dustin Kirkland
COPY ./mprime /opt/mprime/
COPY ./license.txt /opt/mprime/
COPY ./prime.txt /opt/mprime/
COPY ./readme.txt /opt/mprime/
COPY ./stress.txt /opt/mprime/
COPY ./undoc.txt /opt/mprime/
COPY ./whatsnew.txt /opt/mprime/
CMD ["/opt/mprime/mprime", "-w/opt/mprime/"]
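
As an aside, the -w argument in the CMD line points mprime at /opt/mprime as its working directory, so the configuration and result files it writes stay under the path we populated above.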

Now, build your Docker image with:

$ sudo docker build .
Sending build context to Docker daemon 36.02 MB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
Successfully built de2e817b195f
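
The build above wasn't given a repository name, so before pushing you would typically tag the resulting image ID, or rebuild with -t; kirkland/mprime is just the repository used in this post:

$ sudo docker tag de2e817b195f kirkland/mprime
## or, equivalently, build and tag in one step
$ sudo docker build -t kirkland/mprime .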

Then publish the image to Dockerhub.

$ sudo docker push kirkland/mprime

You can see that image, which I've publicly shared here:

Now you can run this image anywhere you can run Docker.

$ sudo docker run -d kirkland/mprime

And verify that it's running:

$ sudo docker ps
c9233f626c85 kirkland/mprime:latest "/opt/mprime/mprime 24 seconds ago Up 23 seconds furious_pike
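
If you're curious what the container is actually doing, you can also follow its output (substitute the container ID or name that docker ps reports on your system):

$ sudo docker logs -f c9233f626c85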

Juju Charm

So now, let's create a Juju Charm that uses this Docker container.  Actually, we're going to create a subordinate charm.  Subordinate services in Juju are often monitoring and logging services, things that run alongside primary services.  Something like mprime is a good example of something that could be a subordinate service, attached to one or many other services in a Juju model.

Our directory structure for the charm looks like this (or you can browse it here):

└── trusty
    └── mprime
        ├── config.yaml
        ├── copyright
        ├── hooks
        │   ├── config-changed
        │   ├── install
        │   ├── juju-info-relation-changed
        │   ├── juju-info-relation-departed
        │   ├── juju-info-relation-joined
        │   ├── start
        │   ├── stop
        │   └── upgrade-charm
        ├── icon.png
        ├── icon.svg
        ├── metadata.yaml
        └── revision
3 directories, 15 files

The three key files we should look at here are metadata.yaml, hooks/install and hooks/start:

$ cat metadata.yaml
name: mprime
summary: Search for Mersenne Prime numbers
maintainer: Dustin Kirkland
description: |
  A Mersenne prime is a prime of the form 2^P-1.
  The first Mersenne primes are 3, 7, 31, 127
  (corresponding to P = 2, 3, 5, 7).
  There are only 48 known Mersenne primes, and
  the 13 largest known prime numbers in the world
  are all Mersenne primes.
  This charm uses a Docker image that includes the
  statically built, 64-bit Linux binary mprime
  which will consume considerable CPU and Memory,
  searching for the next Mersenne prime number.
  See for more details!
- misc
subordinate: true
interface: juju-info
scope: container


$ cat hooks/install
apt-get install -y
docker pull kirkland/mprime


$ cat hooks/start
service docker restart
docker run -d kirkland/mprime
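
The directory listing above also shows a hooks/stop, whose contents the post doesn't include; presumably it only needs to tear the container back down, along the lines of this (an illustrative guess, not the charm's actual hook):

docker rm -f mprime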

Now, we can add the mprime service to any other running Juju service.  As an example here, I'll bootstrap an environment, deploy the Apache2 charm, and attach mprime to it.

$ juju bootstrap
$ juju deploy apache2
$ juju deploy cs:~kirkland/mprime
$ juju add-relation apache2 mprime

Looking at our services, we can see everything deployed and running here:

$ juju status
charm: cs:trusty/apache2-14
exposed: false
current: unknown
since: 20 Jul 2015 11:55:59-05:00
- mprime
current: unknown
since: 20 Jul 2015 11:55:59-05:00
current: idle
since: 20 Jul 2015 11:56:03-05:00
version: 1.24.2
agent-state: started
agent-version: 1.24.2
machine: "1"
current: unknown
since: 20 Jul 2015 11:58:52-05:00
current: idle
since: 20 Jul 2015 11:58:56-05:00
version: 1.24.2
agent-state: started
agent-version: 1.24.2
upgrading-from: local:trusty/mprime-1
charm: local:trusty/mprime-1
exposed: false
service-status: {}
- apache2
- apache2

Snappy Ubuntu Core Snap

Finally, let's build a Snap.  Snaps are applications that run in Ubuntu's transactional, atomic OS, Snappy Ubuntu Core.

We need the simple directory structure below (or you can browse it here):

├── meta
│   ├── icon.png
│   ├── icon.svg
│   ├── package.yaml
│   └──
1 directory, 5 files

The package.yaml describes what we're actually building, and what capabilities the service needs.  It looks like this:

name: mprime
vendor: Dustin Kirkland 
architecture: [amd64]
icon: meta/icon.png
version: 28.5-11
frameworks:
  - docker
services:
  - name: mprime
    description: "Search for Mersenne Prime Numbers"
    caps:
      - docker_client
      - networking

And the launcher script starts the service via Docker:

docker rm -v -f mprime
docker run --name mprime -d kirkland/mprime
docker wait mprime
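
The final docker wait mprime call blocks for as long as the container is running, which keeps the snap's service process in the foreground instead of exiting immediately after the container starts.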

Now, we can build the snap like so:

$ snappy build .
Generated 'mprime_28.5-11_amd64.snap' snap
$ ls -halF *snap
-rw-rw-r-- 1 kirkland kirkland 9.6K Jul 20 12:38 mprime_28.5-11_amd64.snap

First, let's install the Docker framework, upon which we depend:

$ snappy-remote --url ssh://snappy-nuc install docker
Installing docker from the store
Installing docker
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1

And now, we can install our locally built Snap.
$ snappy-remote --url ssh://snappy-nuc install mprime_28.5-11_amd64.snap
Installing mprime_28.5-11_amd64.snap from local environment
Installing /tmp/mprime_28.5-11_amd64.snap
2015/07/20 17:44:26 Signature check failed, but installing anyway as requested
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20
mprime 2015-07-20 28.5-11 sideload
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1

Alternatively, you can install the snap directly from the Ubuntu Snappy store, where I've already uploaded the mprime snap:

$ snappy-remote --url ssh://snappy-nuc install mprime.kirkland
Installing mprime.kirkland from the store
Installing mprime.kirkland
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20
mprime 2015-07-20 28.5-11 kirkland
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1


How long until this Docker image, Juju charm, or Ubuntu Snap finds a Mersenne Prime?  Almost certainly never :-)  I want to be clear: that was never the point of this exercise!

Rather I hope you learned how easy it is to run a Docker image inside either a Juju charm or an Ubuntu snap.  And maybe learned something about prime numbers along the way ;-)

Join us in #docker, #juju, and #snappy on


Read more
Michael Hall

Picture by Aaron Honeycutt

The next Ubuntu Global Jam is coming up next month, the weekend of August 7th through the 9th. Last cycle we introduced the Ubuntu Global Jam Packs, and they were such a big hit that we’re bringing them back this cycle.

Jam Packs are a miniaturized version of the conference packs that Canonical has long offered to LoCo Teams who show off Ubuntu at events. These smaller packs are designed specifically for LoCo Teams to use during their own Global Jam events, to help promote Ubuntu in their area and encourage participation with the team.

What’s in the Global Jam Pack?

The Global Jam Pack contains a number of give-away items to use during your team’s Global Jam event. This cycle the packs will contain:

  • 20 DVDs
  • 20 sticker sheets
  • 20 pens
  • 20 notebooks

There will also be one XL t-shirt for the person who is organizing the event.

Who can request a Global Jam Pack?

The Global Jam Pack is available to any LoCo team that is running a Global Jam event. It doesn’t matter if your team has verified status or not, if you are hosting a Global Jam event, you can request a Jam Pack for it.

How do I request a Global Jam Pack?

The first thing you need to do is plan a Global Jam event for your LoCo team. Global Jams happen one weekend each cycle, and are a chance for you to meet up with Ubuntu contributors in your area to work together on improving some aspect of Ubuntu. They don’t require a lot of setup, just pick a day, time and location for everybody to show up.

Once you know when and where you will be holding your event, you need to register it in the LoCo Team Portal, making sure it’s listed as being part of the Ubuntu Global Jam parent event. You can use your event page on the portal to advertise your event, and allow people to register their intention to attend.

Next you will need to fill out a community donations request for your Jam Pack. In there you will be asked for your name and shipping address. In the field for describing your request, be sure to include the link to your team’s Global Jam event.

Need help?

If you need help or advice in organizing a Global Jam event, join #ubuntu-locoteams on Freenode IRC to talk to folks from the community who have experience running them. We’ve also documented some great advice to help you with organization on our wiki, including a list of suggested topics for you to work on during your event.

Read more


Once in a while, I get to tackle issues that have little or no documentation other than the official documentation of the product and the product’s source code.  You may know from experience that product documentation is not always sufficient to get a complete configuration working. This article intends to flesh out a solution for customizing disk configurations using Curtin.

This article takes for granted that you are familiar with Maas install mechanisms and that you already know how to customize installations and deploy workloads using Juju.

While my colleagues in the Maas development team have done a tremendous job at keeping the Maas documentation accurate (see Maas documentation), it only covers the basics when it comes to Maas’s preseed customization, especially Curtin’s customization.

Curtin is Maas’s fastpath installer, which is meant to replace Debian’s installer (familiarly known as d-i). It does a complete machine installation much faster than the standard Debian method.  But while d-i is well known and it is easy to find examples of its use on the web, Curtin does not have the same notoriety and, hence, not as much documentation.

Theory of operation

When the fastpath installer is used to install a Maas unit (which is now the default), it will send the content of the files prefixed with curtin_ to the unit being installed.  The curtin_userdata file contains cloud-config type commands that will be applied by cloud-init when the unit is installed. If we want to apply a specific partitioning scheme to all of our units, we can modify this file and every unit will get those commands applied to it when it installs.

But what if we only have one or a few servers with a specific disk layout that requires custom partitioning?  In the following example, I will suppose that we have one server, named curtintest, which has a one terabyte (1 TB) disk, and that we want to partition this disk with the following partition table:

  • Partition #1 has the /boot file system and is bootable
  • Partition #2 has the root (/) file system
  • Partition #3 has a 31 GB file system
  • Partition #4 has 32 GB of swap space
  • Partition #5 has the remaining disk space

Since only one server has such a disk, the partitioning should be specific to that curtintest server only.

Setting up Curtin development environment

To get to a working Maas partitioning setup, it is preferable to use Curtin’s development environment to test the Curtin commands. Using Maas deployment to test each command quickly becomes tedious and time-consuming.  There is a description of how to set it up in the README.txt, but here are some more details.

Aside from putting all the files under one single directory, the steps described here are the same as the ones in the README.txt file:

$ mkdir -p download
$ DLDIR=$(pwd)/download
$ rel="trusty"
$ arch=amd64
$ burl="http://cloud-images.ubuntu.com/$rel/current/"
$ for f in $rel-server-cloudimg-${arch}-root.tar.gz $rel-server-cloudimg-${arch}-disk1.img; do wget "$burl/$f" -O $DLDIR/$f; done
$ ( cd $DLDIR && qemu-img convert -O qcow2 $rel-server-cloudimg-${arch}-disk1.img $rel-server-cloudimg-${arch}-disk1.qcow2)
$ BOOTIMG="$DLDIR/$rel-server-cloudimg-${arch}-disk1.qcow2"
$ ROOTTGZ="$DLDIR/$rel-server-cloudimg-${arch}-root.tar.gz"
$ mkdir src
$ bzr init-repo src/curtin
$ (cd src/curtin && bzr  branch lp:curtin trunk.dist )
$ (cd src/curtin && bzr  branch trunk.dist trunk)
$ cd src/curtin/trunk

You now have an environment you can use with Curtin to automate installations. You can test it by using the following command, which will start a VM and run « curtin install » in it.  Once you get the prompt, log in with:

username : ubuntu
password : passw0rd

$ sudo ./tools/launch $BOOTIMG --publish $ROOTTGZ -- curtin install "PUBURL/${ROOTTGZ##*/}"

Using Curtin in the development environment

To test Curtin in its environment, simply remove the trailing -- curtin install « PUBURL/${ROOTTGZ##*/} » from the end of the statement. Once logged in, you will find the Curtin executable in /curtin/bin:

ubuntu@ubuntu:~$ sudo -s
root@ubuntu:~# /curtin/bin/curtin --help
usage: [-h] [--showtrace] [--verbose] [--log-file LOG_FILE]

positional arguments:

optional arguments:
-h, --help            show this help message and exit
--verbose, -v
--log-file LOG_FILE

Each of Curtin’s commands has its own help:

ubuntu@ubuntu:~$ sudo -s
root@ubuntu:~# /curtin/bin/curtin install --help
usage: install [-h] [-c FILE] [--set key=val] [source [source ...]]

positional arguments:
source what to install

optional arguments:
-h, --help show this help message and exit
-c FILE, --config FILE
read configuration from cfg
--set key=val define a config variable


Creating Maas’s Curtin preseed commands

Now that we have our Curtin development environment available, we can use it to come up with a set of commands that will be fed to Curtin by Maas when a unit is created.

Maas uses preseed files located in /etc/maas/preseeds on the Maas server. The curtin_userdata preseed file is the one that we will use as a reference to build our set of partitioning commands.  During the testing phase, we will use the -c option of curtin install along with a configuration file that will mimic the behavior of curtin_userdata.

We will also need to add a fake 1TB disk to Curtin’s development environment so we can use it as a partitioning target. So in the development environment, issue the following command :

$ qemu-img create -f qcow2 boot.disk 1000G
Formatting 'boot.disk', fmt=qcow2 size=1073741824000 encryption=off cluster_size=65536 lazy_refcounts=off

$ sudo ./tools/launch $BOOTIMG --publish $ROOTTGZ

username: ubuntu
password: passw0rd

ubuntu@ubuntu:~$ sudo -s
root@ubuntu:~# cat /proc/partitions

major minor  #blocks  name

 253        0    2306048 vda
 253        1    2305024 vda1
 253       16        426 vdb
 253       32 1048576000 vdc
  11        0    1048575 sr0

We can see that the 1000G /dev/vdc is indeed present.  Let’s now start to craft the conffile that will receive our partitioning commands. To test the syntax, we will use two simple commands :

root@ubuntu:~# cat << EOF > conffile 
  builtin: []
  01_partition_make_label: ["/sbin/parted", "/dev/vdc", "-s", "'","mklabel","msdos","'"]
  02_partition_make_part: ["/sbin/parted", "/dev/vdc", "-s", "'","mkpart","primary","1049K","538M","'"]
EOF

The sources: statement is only there to avoid having to repeat the SOURCE portion of the curtin command and is not to be used in the final Maas configuration. The URL is the address of the server from which you are running the Curtin development environment.


The builtin: [] statement is VERY important. It is there to override Curtin’s native builtin statement, which is to partition the disk using « block-meta simple ».  If it is removed, Curtin will overwrite the partitioning with its default configuration. This comes straight from Scott Moser, the main developer behind Curtin.

Now let’s run the Curtin command :

root@ubuntu:~# /curtin/bin/curtin install -c conffile

Curtin will run its installation sequence and you will see a display which you should be familiar with if you have installed units with Maas previously.  The command will most probably exit on error, complaining about the fact that install-grub received an argument that was not a block device. We do not need to worry about that at the moment.

Once completed, have a look at the partitioning of the /dev/vdc device :

root@ubuntu:~# parted /dev/vdc print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
1      1049kB  538MB   537MB   primary   ext4

The partitioning commands were successful and we have the /dev/vdc disk properly configured.  Now that we know that the mechanism works, let's try with a complete configuration file. I have found that it is preferable to start with a fresh 1 TB disk:

root@ubuntu:~# poweroff

$ rm -f boot.disk

$ qemu-img create -f qcow2 boot.disk 1000G
Formatting ‘boot.disk’, fmt=qcow2 size=1073741824000 encryption=off cluster_size=65536 lazy_refcounts=off

$ sudo ./tools/launch $BOOTIMG --publish $ROOTTGZ

ubuntu@ubuntu:~$ sudo -s

root@ubuntu:~# cat << EOF > conffile 
  builtin: [] 
  01_partition_announce: ["echo", "'### Partitioning disk ###'"]
  01_partition_make_label: ["/sbin/parted", "/dev/vda", "-s", "'","mklabel","msdos","'"]
  02_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","1049k","538M","'"]
  02_partition_set_flag: ["/sbin/parted", "/dev/vda", "-s", "'","set","1","boot","on","'"]
  04_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","538M","4538M","'"]
  05_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","extended","4538M","1000G","'"]
  06_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","25.5G","57G","'"]
  07_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","57G","89G","'"]
  08_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","89G","1000G","'"]
  09_partition_announce: ["echo", "'### Creating filesystems ###'"]
  10_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda1"]
  11_partition_label_fs: ["/sbin/e2label", "/dev/vda1", "cloudimg-boot"]
  12_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda2"]
  13_partition_label_fs: ["/sbin/e2label", "/dev/vda2", "cloudimg-rootfs"]
  14_partition_mount_fs: ["sh", "-c", "mount /dev/vda2 $TARGET_MOUNT_POINT"]
  15_partition_mkdir: ["sh", "-c", "mkdir $TARGET_MOUNT_POINT/boot"]
  16_partition_mount_fs: ["sh", "-c", "mount /dev/vda1 $TARGET_MOUNT_POINT/boot"]
  17_partition_announce: ["echo", "'### Filling /etc/fstab ###'"]
  18_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-rootfs / ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
  19_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-boot /boot ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
  20_partition_make_swap: ["sh", "-c", "mkswap /dev/vda6"]
  21_partition_make_fstab: ["sh", "-c", "echo '/dev/vda6 none swap sw 0 0' >> $OUTPUT_FSTAB"]
sources:
  01_primary:
EOF

You will note that I have added a few statements like [« echo », « '### Partitioning disk ###' »] that will display some logs during the execution. Those are not necessary.
Now let’s try a second test with the complete configuration file :

root@ubuntu:~# /curtin/bin/curtin install -c conffile

root@ubuntu:~# parted /dev/vdc print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
1      1049kB  538MB   537MB   primary   ext4         boot
2      538MB   4538MB  4000MB  primary   ext4
3      4538MB  1000GB  995GB   extended               lba
5      25.5GB  57.0GB  31.5GB  logical
6      57.0GB  89.0GB  32.0GB  logical
7      89.0GB  1000GB  911GB   logical

We now have a correctly partitioned disk in our development environment. All we need to do now is to carry that over to Maas to see if it works as expected.

Customization of Curtin execution in Maas

The section « How preseeds work in MAAS » gives a good outline of how to select the name of a preseed file to restrict its usage to specific sub-groups of nodes.  In our case, we want our partitioning to apply to only one node: curtintest.  So, following the description in the section « User provided preseeds », we need to use the following template:

{prefix}_{osystem}_{node_subarch}_{release}_{node_name}


The filename that we need to choose needs to end with our hostname, curtintest. The other elements are:

  • prefix : curtin_userdata
  • osystem : amd64
  • node_subarch : generic
  • release : trusty
  • node_name : curtintest

So according to that, our filename must be curtin_userdata_amd64_generic_trusty_curtintest

On the MAAS server, we do the following :

root@maas17:~# cd /etc/maas/preseeds

root@maas17:~# cp curtin_userdata curtin_userdata_amd64_generic_trusty_curtintest

We now edit this newly created file and add our previously crafted Curtin configuration file just after the following block :

{{if third_party_drivers and driver}}
  {{py: key_string = ''.join(['\\x%x' % x for x in map(ord, driver['key_binary'])])}}
  driver_00_get_key: /bin/echo -en '{{key_string}}' > /tmp/maas-{{driver['package']}}.gpg
  driver_01_add_key: ["apt-key", "add", "/tmp/maas-{{driver['package']}}.gpg"]
  driver_02_add: ["add-apt-repository", "-y", "deb {{driver['repository']}} {{node.get_distro_series()}} main"]
  driver_03_update_install: ["sh", "-c", "apt-get update --quiet && apt-get --assume-yes install {{driver['package']}}"]
  driver_04_load: ["sh", "-c", "depmod && modprobe {{driver['module']}}"]

The complete section should look just like this :

{{if third_party_drivers and driver}}
  {{py: key_string = ''.join(['\\x%x' % x for x in map(ord, driver['key_binary'])])}}
   driver_00_get_key: /bin/echo -en '{{key_string}}' > /tmp/maas-{{driver['package']}}.gpg
   driver_01_add_key: ["apt-key", "add", "/tmp/maas-{{driver['package']}}.gpg"]
   driver_02_add: ["add-apt-repository", "-y", "deb {{driver['repository']}} {{node.get_distro_series()}} main"]
   driver_03_update_install: ["sh", "-c", "apt-get update --quiet && apt-get --assume-yes install {{driver['package']}}"]
   driver_04_load: ["sh", "-c", "depmod && modprobe {{driver['module']}}"]
   builtin: []
   01_partition_announce: ["echo", "'### Partitioning disk ###'"]
   01_partition_make_label: ["/sbin/parted", "/dev/vda", "-s", "'","mklabel","msdos","'"]
   02_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","1049k","538M","'"]
   02_partition_set_flag: ["/sbin/parted", "/dev/vda", "-s", "'","set","1","boot","on","'"]
   04_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","538M","4538M","'"]
   05_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","extended","4538M","1000G","'"]
   06_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","25.5G","57G","'"]
   07_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","57G","89G","'"]
   08_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","89G","1000G","'"]
   09_partition_announce: ["echo", "'### Creating filesystems ###'"]
   10_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda1"]
   11_partition_label_fs: ["/sbin/e2label", "/dev/vda1", "cloudimg-boot"]
   12_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda2"]
   13_partition_label_fs: ["/sbin/e2label", "/dev/vda2", "cloudimg-rootfs"]
   14_partition_mount_fs: ["sh", "-c", "mount /dev/vda2 $TARGET_MOUNT_POINT"]
   15_partition_mkdir: ["sh", "-c", "mkdir $TARGET_MOUNT_POINT/boot"]
   16_partition_mount_fs: ["sh", "-c", "mount /dev/vda1 $TARGET_MOUNT_POINT/boot"]
   17_partition_announce: ["echo", "'### Filling /etc/fstab ###'"]
   18_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-rootfs / ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
   19_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-boot /boot ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
   20_partition_make_swap: ["sh", "-c", "mkswap /dev/vda6"]
   21_partition_make_fstab: ["sh", "-c", "echo '/dev/vda6 none swap sw 0 0' >> $OUTPUT_FSTAB"]

Now that Maas is properly configured for curtintest, complete the test by deploying a charm in a Juju environment where curtintest is properly commissioned.  In this example, curtintest is the only available node, so Maas will always pick it up:

caribou@avogadro:~$ juju status
environment: maas17
"0":
agent-state: started
agent-version: 1.24.0
dns-name: state-server.maas
instance-id: /MAAS/api/1.0/nodes/node-2555c398-1bf9-11e5-a7c4-525400214658/
series: trusty
hardware: arch=amd64 cpu-cores=1 mem=1024M
state-server-member-status: has-vote
services: {}
provider-id: maas-eth0

caribou@avogadro:~$ juju deploy mysql
Added charm "cs:trusty/mysql-25" to the environment.

Once the mysql charm has been deployed, connect to the unit to confirm that the partitioning was successful:

caribou@avogadro:~$ juju ssh mysql/0
ubuntu@curtintest:~$ sudo -s
root@curtintest:~# parted /dev/vda print
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End     Size    Type      File system  Flags
1      1049kB  538MB   537MB   primary   ext4         boot
2      538MB   4538MB  4000MB  primary   ext4
3      4538MB  1000GB  995GB   extended               lba
5      25.5GB  57.0GB  31.5GB  logical
6      57.0GB  89.0GB  32.0GB  logical
7      89.0GB  1000GB  911GB   logical
ubuntu@curtintest:~$ swapon -s
Filename Type Size Used Priority
/dev/vda6 partition 31249404 0 -1


Customizing disks and partitions using Curtin is possible but currently not sufficiently documented. I hope that this write-up will be helpful.  Sustained development on Curtin is under way to improve these functionalities, so things will definitely get better.

Read more
Ben Howard

With Ubuntu 12.04.2, the kernel team introduced the idea of the "hardware enablement kernel" (HWE), originally intended to support new hardware for bare metal server and desktop. In fact, the documentation indicates that HWE images are not suitable for Virtual or Cloud Computing environments.  The thought was that cloud and virtual environments provide stable hardware and that the newer kernel features would not be needed.

Time has proven this assumption painfully wrong. Take for example the need for drivers in virtual environments. Several of the Cloud providers that we have engaged with have requested the use of the HWE kernel by default. On GCE, the HWE kernels provide support for their NVME disks or multiqueue NIC support. Azure has benefited from having an updated HyperV driver stack resulting in better performance. When we engaged with VMware Air, the 12.04 kernel lacked the necessary drivers.

Perhaps more germane to our Cloud users is that containers are using kernel features. 12.04 users need to use the HWE kernel in order to make use of Docker. The new Ubuntu Fan project will be enabled for 14.04 via the HWE-V kernel for Ubuntu 14.04.3. If you use Ubuntu as your container host, you will likely consider using an HWE kernel.
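
For reference, on an existing 14.04 installation the corresponding enablement kernels come straight from the archive; the package names below follow the LTS enablement stack naming, and a reboot is needed to switch kernels:

## HWE-U (3.16) kernel on Ubuntu 14.04
$ sudo apt-get install linux-generic-lts-utopic
## HWE-V (3.19) kernel on Ubuntu 14.04
$ sudo apt-get install linux-generic-lts-vivid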

And with that there has been a steady chorus of people requesting that we provide HWE image builds for AWS. The problem has never been the base builds; building the base bits is fairly easy. The hard part is that by adding base builds, each daily and release build goes from 96 images for AWS to 288 (needless to say that is quite a problem). Over the last few weeks -- largely in my spare time -- I've been working out what it would take to deliver HWE images for AWS.

I am happy to announce that as of today, we are now building HWE-U (3.16) and HWE-V (3.19) Ubuntu 14.04 images for AWS. To be clear, we are not making any behavioral changes to the standard Ubuntu 14.04 images. Unless users opt into using an HWE image on AWS they will continue to get the 3.13 kernel. However, for those who want newer kernels, they now have the choice.

For the time being, only amd64 and i386 builds are being published. Over the next few weeks, we expect the HWE images to reach full feature parity, including release promotions and indexing. And I fully expect that the HWE-V version of 14.04 will include our recent Fan project once the SRUs complete.

Check them out at and .

As always, feedback is always welcome.

Read more
Ben Howard

[UPDATE] The image IDs have been updated with the latest builds, which now include Docker 1.6.2, the latest LXD and of course the Ubuntu Fan driver.

This week, Dustin Kirkland announced the Ubuntu Fan Project.  To steal from the description, "The Fan is not a software-defined network, and relies on neither distributed databases nor consensus protocols.  Rather, routes are calculated deterministically and traffic carries no additional overhead beyond routine IP tunneling.  Canonical engineers have already demonstrated The Fan operating at 5Gbps between two Docker containers on separate hosts."

My team at Canonical is responsible for the production of these images. Once the official SRUs land, I anticipate that we will publish an official stream; until then, check back here for images and updates. As always, if you have feedback, please hop into #server on Freenode or send email.

GCE Images

Images for GCE have been published to the "ubuntu-os-cloud-devel" project.

The Images are:
  • daily-ubuntu-docker-lxd-1404-trusty-v20150620
  • daily-ubuntu-docker-lxd-1504-vivid-v20150621
To launch an instance, you might run:
$ gcloud compute instances create \
    --image-project ubuntu-os-cloud-devel \
    --image <IMAGE> <NAME>
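
For example, to boot the trusty image listed above (the instance name and zone here are arbitrary placeholders):
$ gcloud compute instances create fan-test-1 \
    --image-project ubuntu-os-cloud-devel \
    --image daily-ubuntu-docker-lxd-1404-trusty-v20150620 \
    --zone us-central1-a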

You need to make sure that IPIP traffic is enabled:
$ gcloud compute firewall-rules create fan2 --allow 4 --source-ranges

Amazon AWS Images

The AWS images are HVM-only, AMD64 builds. 


It is important to note that these images are only usable inside of a VPC. Newer AWS users are in VPC by default, but older users may need to create and update their VPC. For example:
$ ec2-authorize --cidr <CIDR_RANGE> --protocol 4 <SECURITY_GROUP>

Read more
Dustin Kirkland

652 Linux containers running on a Laptop?  Are you kidding me???

A couple of weeks ago, at the OpenStack Summit in Vancouver, Canonical released the results of some scalability testing of Linux containers (LXC) managed by LXD.

Ryan Harper and James Page presented their results -- some 536 Linux containers on a very modest little Intel server (16GB of RAM), versus 37 KVM virtual machines.

Ryan has published the code he used for the benchmarking, and I've used it to reproduce the test on my dev laptop (Thinkpad x230, 16GB of RAM, Intel i7-3520M).

I managed to pack a whopping 652 Ubuntu 14.04 LTS (Trusty) containers on my Ubuntu 15.04 (Vivid) laptop!

The system load peaked at 1056 (!!!), but I was using merely 56% of 15.4GB of system memory.  Amazingly, my Unity desktop and Byobu command line were still perfectly responsive, as were the containers that I ssh'd into.  (Aside: makes me wonder if the Linux system load average is accounting for container processes correctly...)

Check out the process tree for a few hundred system containers here!

As for KVM, I managed to launch 31 virtual machines without KSM enabled, and 65 virtual machines with KSM enabled and working hard.  So that puts somewhere between 10x - 21x as many containers as virtual machines on the same laptop.
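
If you want to reproduce the with/without-KSM comparison, kernel samepage merging can be toggled through sysfs on a stock Ubuntu kernel (this knob is independent of Ryan's scripts):

## Enable KSM
$ echo 1 | sudo tee /sys/kernel/mm/ksm/run
## See how many pages are currently being merged
$ grep . /sys/kernel/mm/ksm/pages_*
## Disable it again
$ echo 0 | sudo tee /sys/kernel/mm/ksm/run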

You can now repeat these tests, if you like.  Please share your results with #LXD on Google+ or Twitter!

I'd love to see someone try this in AWS, anywhere from an m3.small to an r3.8xlarge, and share your results ;-)

Density test instructions

## Install lxd
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-git-master
$ sudo apt-get update
$ sudo apt-get install -y lxd bzr
$ cd /tmp
## At this point, it's a good idea to logout/login or reboot
## for your new group permissions to get applied
## Grab the tests, disable the tools download
$ bzr branch lp:~raharper/+junk/density-check
$ cd density-check
$ mkdir lxd_tools
## Periodically squeeze your cache
$ sudo bash -x -c 'while true; do sleep 30; \
echo 3 | sudo tee /proc/sys/vm/drop_caches; \
free; done' &
## Run the LXD test
$ ./density-check-lxd --limit=mem:512m --load=idle release=trusty arch=amd64
## Run the KVM test
$ ./density-check-kvm --limit=mem:512m --load=idle release=trusty arch=amd64

As for the speed-of-launch test, I'll cover that in a follow-up post!

Can you contain your excitement?


Read more
Michael Hall

Ubuntu is sponsoring the South East Linux Fest this year in Charlotte, North Carolina, and as part of that event we will have a room to use all day Friday, June 12, for an UbuCon. UbuCon is a mini-conference with presentations centered around Ubuntu the project and its community.

I’m recruiting speakers to fill the last three hour-long slots. If anybody is willing and able to attend the conference and wants to give a presentation to a room full of enthusiastic Ubuntu users, please email me. Topics can be anything Ubuntu related: design, development, client, cloud, using it, community, etc.

Read more
Dustin Kirkland

In November of 2006, Canonical held an "all hands" event, which included a team building exercise.  Several teams recorded "Ubuntu commercials".

On one of the teams, Mark "Borat" Shuttleworth amusingly proffered,
"Ubuntu make wonderful things possible, for example, Linux appliance, with Ubuntu preinstalled, we call this -- the fridge!"

Nine years later, that tongue-in-cheek parody is no longer a joke.  It's a "cold" hard reality!

GE Appliances, FirstBuild, and Ubuntu announced a collaboration around a smart refrigerator, available today for $749, running Snappy Ubuntu Core on a Raspberry Pi 2, with multiple USB ports and available in-fridge accessories.  We had one in our booth at IoT World in San Francisco this week!

While the fridge prediction is indeed pretty amazing, the line that strikes me most is actually "Ubuntu make(s) wonderful things possible!"

With emphasis on "things".  As in, "Internet of Things."  The possibilities are absolutely endless in this brave new world of Snappy Ubuntu.  And that is indeed wonderful.

So what are you making with Ubuntu?!?


Read more
Michael Hall

Ubuntu has been talking a lot about convergence lately, it’s something that we believe is going to be revolutionary and we want to be at the forefront of it. We love the idea of it, but so far we haven’t really had much experience with the reality of it.

I got my first taste of that reality two weeks ago, while at a work sprint in London. While Canonical has an office in London, it had other teams sprinting there, so the Desktop sprint I was at was instead held at a hotel. We planned to visit the office one day that week; it would be my first visit to any Canonical office, as well as my first time working at an actual office in several years. However, we also planned to meet up with the UK loco for release drinks that evening. This meant that we had to decide between leaving our laptops at the hotel, thus not having them to work on at the office, or taking them with us and having to carry them around the pub all evening.

I chose to leave my laptop behind, but I did take my phone (Nexus 4 running Ubuntu) with me. After getting a quick tour of the office, I found a vacant seat at a desk, and pulled out my phone. Most of my day job can be done with the apps on my phone: I have email, I have a browser, I have a terminal with ssh, I can respond to our community everywhere they are active.

I spent the next couple of hours doing work, actual work, on my phone. The only problem I had was that I was doing it on a small screen, and I was burning through my battery. At one point I looked up and realized that the vacant desk I was sitting at was equipped with a laptop docking station. It also had a USB hub and an HDMI monitor cable available. If I had a slimport cable for my phone, I might have been able to plug it into this docking station and both power my phone and get a bigger screen to work with.

If I could have done that, I would have achieved the full reality of convergence, and it would have been just like if I had brought my laptop with me. Only with this I was able to simply slide it into my pocket when it was time to leave for drinks. It was tantalizingly close, I got a little taste of what it’s going to be like, and now I’m craving more of it.

Read more