Canonical Voices

Zsombor Egri

Adaptive page layout made flexible

A few weeks ago Tim posted a nice article about Adaptive page layouts made easy. It is my turn now to continue the series, with the hope that you will all agree on the title.

Ladies and gentlemen, we have good news and (slightly) bad news to announce about the AdaptivePageLayout. If this blog were interactive, I’d ask you which to start with, and most probably you would say the bad news, as it is always good to get the cold shower first and then have a sunbath. Sorry folks, this time I’ll start with the good news.

The good news

We’ve added a column configurability API to the AdaptivePageLayout! From now on you can configure more than two columns in your layout, and for each column you can configure the minimum, maximum and preferred sizes, as well as whether the column should fill the remaining width of the layout. Even better, if the minimum and maximum values of a column configuration differ, the column can be resized with mouse or touch. See the following video demonstrating the feature.

And all this is possible right now, right here, only with Ubuntu UI Toolkit!

You can define any number of column configurations, each with a condition specifying when it should be applied. Single-column mode doesn’t need to be configured: it is applied automatically when none of the specified conditions is met. However, if you wish, you can still configure single-column mode, in case you want to apply a minimum width to the column. Note, however, that the minimum width configuration is not (yet) applied to the application’s minimum resizable width, as you can observe in the video above.

The video above was made based on the sample code from Tim’s post, with the following additions:

AdaptivePageLayout {
    id: layout
    // [...]
    layouts: [
        // configure two columns
        PageColumnsLayout {
            when: layout.width > units.gu(80) // breakpoint value is illustrative
            PageColumn {
                // differing minimum and maximum make this column resizable
                minimumWidth: units.gu(30)
                maximumWidth: units.gu(60)
                preferredWidth: units.gu(40)
            }
            PageColumn {
                fillWidth: true
            }
        },
        // configure minimum size for single column
        PageColumnsLayout {
            when: true
            PageColumn {
                minimumWidth: units.gu(40) // value is illustrative
                fillWidth: true
            }
        }
    ]
}

The full source code is on lp:~zsombi/+junk/AdaptivePageLayoutMadeFlexible.

The bad news

Oh, yes, this is the time you guys start to get mad. But let’s see how bad it is going to be this time.

We started to apply the AdaptivePageLayout in a few core applications, and we realized that the UI was getting blocked when Pages with heavy content were added to the columns. As pages were created synchronously, we would have had to redo each app’s Page content management to load at least partially asynchronously using Loaders. And that seemed to be a really bad omen for the component. So we decided to bring in an API break for the AdaptivePageLayout addPageTo{Current|Next}Column() functions: if the second argument is a file URL or a Component, the functions now return an incubator object which can be used to track the loading completion. In the case of an existing Page instance, the functions will return null, as you already have the instance. More on how to use incubators in QML can be found in the Qt documentation.

A code snippet to catch page completion would then look like this:

var incubator = layout.addPageToNextColumn(thisPage, Qt.resolvedUrl(pageDocument));
if (incubator && incubator.status == Component.Loading) {
    incubator.onStatusChanged = function(status) {
        if (status == Component.Ready) {
            // incubator.object contains the loaded Page instance
            // do whatever you wish with the Page
            incubator.object.title = "Dynamic Page";
        }
    }
}

Of course, if you want to set up the Page properties with some parameters, you can do it in the good old way, by specifying the parameters in the function, i.e.

addPageToNextColumn(thisPage, Qt.resolvedUrl(pageDocument), {title: "Dynamic Page"})

You need the incubator approach if you want to create bindings on the properties of the page, which cannot be done with the creation parameters.


So, the bad news is not so bad after all, is it? That’s why I started with the good news ;)

More “bad” news to come

Oh yes, we have not finished with the bad news yet. From now on, pages added to the columns are loaded asynchronously by default, except for the very first page, which is still loaded synchronously. The good news: not for long ;) We are planning to enable asynchronous loading of the primary page as well, and most probably you will get a signal triggered when the page is loaded. That way you will be able to show something else while the first page is loading: an animation, another splash screen, or the Flying Dutchman, whatever :)

Stay tuned! We’ll be back!


David Planella

Snappy Ubuntu + Mycroft = Love

This is a guest post from Ryan Sipes, CTO of the Mycroft project, explaining how snappy Ubuntu will enable them to deliver a secure and open AI for everyone.

When we first undertook the Mycroft project, dubbed the “AI For Everyone”, we knew we would face interesting challenges. We were creating a voice-controlled platform not only for assisting you in your daily life with weather, news updates, calendar reminders, and answers to your questions - but also a hub which would allow you to control your Internet of Things, specifically in the form of home automation. Managing all these devices through a seamless user experience requires a strong backbone for developers, and this is where snappy Ubuntu Core works wonders.

Since choosing to base our open source, open hardware product, Mycroft, on snappy Ubuntu Core, we have found the platform to be amazing. Being able to build and deliver apps easily through Snappy packages makes for a quick and painless packaging experience, with only a short time required to get up to speed and start creating your own. We’ve taken advantage of this and are planning to use Snappy packages as the main delivery method for apps on our platform. Want to install the Spotify app on Mycroft? Just install the Snappy package, which you’ll be able to do with just a click.

But snappy Ubuntu Core’s usefulness goes beyond creating packages: the ability to do transactional updates of apps makes testing and stability easier. We’ve found the ability to roll back an update to be critical in ensuring that our platform is working when it needs to, but it has also made it possible to test for bugs on versions that we are unsure about, and roll back when there is serious breakage. As we continue to learn more, we are ever more impressed with this feature of Snappy.

We’re going to be leveraging snappy Ubuntu Core and “Snaps” to deliver applications to Mycroft, and when talking about a platform that sits in your home and has the ability to install third-party software, an important conversation about privacy is necessary. We are doing our best to ensure that users’ critical data and interactions with Mycroft are kept private, and Snappy makes our job easier. Having a great deal of control over the security policies of apps, and being able to make applications run in a sandbox, allows us to take measures to ensure the core system isn’t compromised. In a world where you are interacting with lots of IoT devices every day, security is paramount, and snappy Ubuntu Core doesn’t let you down.

In case you couldn’t tell from the paragraphs above, the Mycroft team is ecstatic to be using such an awesome technology on which to build our open source artificial intelligence and home automation platform. But one thing I didn’t talk about is the awesome community surrounding Ubuntu and the passionate people working for Canonical who have poured their time into this amazing project; that, above all, is the best reason for using Snappy.

If you are interested in learning more about Mycroft, please check out our Kickstarter and consider backing the project. We’ve only got a few days left, but we promise that we will continue to keep everyone posted about our experiences with snappy Ubuntu Core as we work on the #AIForEveryone.


Zoltán Balogh

The Next Generation SDK

Up until now, the basic architecture of the SDK IDE and tools packaging has been that we package and distribute the QtCreator IDE and our Ubuntu plugins as separate distro packages, which strongly depend on the Qt available in the same release.

Since 14.04 we have been jumping through hoops to provide the very same developer experience from a single development branch of the SDK projects. Just to give a quick picture of what we have had available in the last few releases (note that the 1.3 UITK is not yet released):

14.04 Trusty: Qt 5.2.1, QtCreator 3.0.1, UI Toolkit 0.1
14.10 Utopic: Qt 5.3, QtCreator 3.1.1, UI Toolkit 1.1
15.04 Vivid: Qt 5.4.1, QtCreator 3.1.1, UI Toolkit 1.2
15.10 Wily: Qt 5.4.2, QtCreator 3.5.0, UI Toolkit 1.3

Life could have been easier if we had stuck to one stable Qt and QtCreator and based our SDK on them. Obviously that was not a realistic option, as phone development needed the most recent Qt, and our friend Kubuntu required a hot new engine under its hood too. So Qt was quickly moving forward and the SDK followed it. Of course it was all beneficial, as new Qt releases brought us bugfixes, new features and improved performance.

But along the way we came to realize that continuously backporting the UITK and the QtCreator plugins to older releases and the LTS was simply not going to be possible. It went fine for some time, but the more API breaks new Qt and QtCreator releases brought, the more problems we had to face. Some people have asked why we don’t backport the latest Qt releases to the LTS or to the stable Ubuntu. It may sound like a good idea, but changing the Qt under an application in the LTS that was built against Qt 5.2.1 to 5.4.2 would certainly break that application. So it is simply not cool to mess around with such fundamental bits of a stable and long-term supported release.

The only option we had was to decouple the SDK from the archive release of Qt and build it as a standalone package without any external Qt dependencies. That way we could provide the exact same experience and tools to all developers, regardless of whether they are playing safe on Trusty/LTS or enjoying the cutting edge on the daily developed release of Wily.

The idea manifested in a really funny project. The source tree of the project is pretty empty: only the cmake files and debian/rules take care of the job. The builder pulls the latest stable Qt, QtCreator and UITK, builds and integrates the libdbusmenu-qt and appmenu-qt5 projects, and deploys the SDK IDE. The package itself is super skinny. In contrast to the old model, where QtCreator pulled in most of the Qt modules as dependencies, this package contains all it needs, and its size is an impressive 36MB. Cheap. Just the way I like it. Plus, this package already contains the 1.3 UITK, as our QtCreator plugin (the Devices Tab) uses it. So in fact we are just one step away from enabling desktop application development on 14.04 LTS with the same UI Toolkit as we use on the commercial phone devices. And that is a super hot idea.

The Ubuntu SDK IDE project lives here:

If you want to check out how it is done:

$ bzr branch lp:ubuntu-sdk-ide

Since we were considering such a big facelift for the SDK, I thought: why not make the change much bigger? Some might remember that there was a discussion on the Ubuntu Phone mailing list about the possibility of improving Kit creation in the IDE. Since then we have been playing with the idea, and I think it is now a good time to unleash the static chroots.

The basic idea is that creating the builder chroots at runtime is a super slow and fragile process. Bootstrapping the click chroot already takes a long time, and installing the SDK API packages (all the libs and dev packages with headers) into the chroot is also time consuming. So why not create these root filesystems in advance and provide them as single installable packages?

This is exactly what we have done. The base of the API packages is the Vivid core image. It is small and contains only the absolutely necessary packages; we install the SDK libs, dev packages and development tools on the core image and configure the Overlay PPA too. So the final image is pretty much equivalent to the image on a freshly updated device out there. It means that the developer can build and test against the same API set as is available on the devices.

These API packages are still huge. Their size is around 500MB, so on a slow connection it still takes ages to download, but it is still way faster than bootstrapping a 1.6GB chroot package by package.

Each API package contains a single tar.gz file, and the package’s post-install script puts the contents of this tar.gz in the right place and wires it in the way it should be. Once the package is installed, the new Kit will be automatically recognized by the IDE.

One important note on these API packages! If you already have an armhf 15.04 Kit (click chroot) on your system when you install the package, your original Kit will not be removed but simply renamed to backup-[timestamp]-[original name]. So do not worry if you have customized Kits; they are safe.

The Ubuntu SDK API project is only a packaging project with a simple script to take care of the dirty details. The project is hosted here:

And if you want to see what is in it just do

$ bzr branch lp:ubuntu-sdk-api-15.04  

The release candidate packages are available from the Tools Development PPA of the SDK team:

How to test these packages?

$ sudo add-apt-repository ppa:ubuntu-sdk-team/tools-development -y

$ sudo apt-get update

$ sudo apt-get install ubuntu-sdk-ide ubuntu-sdk-api-tools

$ sudo apt-get install ubuntu-sdk-api-15.04-armhf ubuntu-sdk-api-15.04-i386

After that look for the Ubuntu SDK IDE in the dash.

Read more
Michi Henning

A Fast Thumbnailer for Ubuntu

Over the past few months, James Henstridge, Xavi Garcia Mena, and I have implemented a fast and scalable thumbnailing service for Ubuntu and Ubuntu Touch. This post explains how we did it, and how we achieved our performance and reliability goals.


On a phone as well as the desktop, applications need to display image thumbnails for various media, such as photos, songs, and videos. Creating thumbnails for such media is CPU-intensive and can be costly in bandwidth if images are retrieved over the network. In addition, different types of media require the use of different APIs that are non-trivial to learn. It makes sense to provide thumbnail creation as a platform API that hides this complexity from application developers and, to improve performance, to cache thumbnails on disk.

This article explains the requirements we had and how we implemented a thumbnailer service that is extremely fast and scalable, and robust in the face of power loss or crashes.


We had a number of requirements we wanted to meet in our implementation.

  • Robustness
    In the event of a crash, the implementation must guarantee the integrity of on-disk data structures. This is particularly important on a phone, where we cannot expect the user to perform manual recovery (such as cleaning up damaged files). Because batteries can run out at any time, integrity must be guaranteed even in the face of power loss.
  • Scalability
    It is common for people to store many thousands of songs and photos on a device, so the cache must scale to at least tens of thousands of records. Thumbnails can range in size from a few kilobytes to well over a megabyte (for “thumbnails” at full-screen resolution), so the cache must deal efficiently with large records.
  • Re-usability
    Persistent and reliable on-disk storage of arbitrary records (ranging in size from a few bytes to potentially megabytes) is a common application requirement, so we did not want to create a cache implementation that is specific to thumbnails. Instead, the disk cache is provided as a stand-alone C++ API that can be used for any number of other purposes, such as a browser or HTTP cache, or to build an object file cache similar to ccache.
  • High performance
    The performance of the thumbnailer directly affects the user experience: it is not nice for the customer to look at “please wait a while” icons in, say, an image gallery while thumbnails are being loaded one by one. We therefore had to have a high-performance implementation that delivers cached thumbnails quickly (on the order of a millisecond per thumbnail on an Arm CPU). An efficient implementation also helps to conserve battery life.
  • Location independence and extensibility
    Canonical runs an image server that provides album and artist artwork for many musicians and bands. Images from this server are used to display artwork in the music player for media that contains ID3 tags but does not embed artwork in the media file. The thumbnailer must work with embedded images as well as remote images, and it must be possible to extend it for new types of media without unduly disturbing the existing code.
  • Low bandwidth consumption
    Mobile phones typically come with data caps, so the cache has to be frugal with network bandwidth.
  • Concurrency and isolation
    The implementation has to allow concurrent access by multiple applications, as well as concurrent requests from a single application. Besides needing to be thread-safe, this means that a request for a thumbnail that is slow (such as downloading an image over the network) must not delay other requests.
  • Fault tolerance
    Mobile devices lose network access without warning, and users can add corrupt media files to their device. The implementation must be resilient to partial failures, such as incomplete network replies, dropped connections, and bad image data. Moreover, the recovery strategy for such failures must conserve battery and avoid repeated futile attempts to create thumbnails from media that cannot be retrieved or contains malformed data.
  • Security
    The implementation must ensure that applications cannot see (or, worse, overwrite) each other’s thumbnails or coerce the thumbnailer into delivering images from files that an application is not allowed to read.
  • Asynchronous API
    The customers of the thumbnailer are applications that are written in QML or Qt, which cannot block in the UI thread. The thumbnailer therefore must provide a non-blocking API. Moreover, the application developer should be able to get the best possible performance without having to use threads. Instead, concurrency must be internal to the implementation (which is able to put threads to use intelligently where they make sense), instead of the application throwing threads at the problem in the hope that it might make things faster when, in fact, it might just add overhead.
  • Monitoring
    The effectiveness of a cache cannot be assessed without statistics to show hit and miss rates, evictions, and other basic performance data, so it must provide a way to extract this information.
  • Error reporting
    When something goes wrong with a system service, typically the only way to learn about the problem is to look at log messages. In case of a failure, the implementation must leave enough footprints behind to allow someone to diagnose a failure after the fact with some chance of success.
  • Backward compatibility
    This project was a rewrite of an earlier implementation. Rather than delivering a “big bang” piece of software and potentially upsetting existing clients, we incrementally changed the implementation such that existing applications continued to work. (The only pre-existing interface was a QML interface that required no change.)

System architecture

Here is a high-level overview of the main system components.

[Figure: high-level system architecture diagram]

External API

To the outside world, the thumbnailer provides two APIs.

One API is a QML plugin that registers itself as an image provider (a QQuickAsyncImageProvider). This allows the caller to pass a URI that encodes a query for a local or remote thumbnail at a particular size; if the URI matches the registered provider, QML transfers control to the entry points in our plugin.

The second API is a Qt API that provides three methods:

QSharedPointer<Request> getThumbnail(QString const& filePath,
                                     QSize const& requestedSize);
QSharedPointer<Request> getAlbumArt(QString const& artist,
                                    QString const& album,
                                    QSize const& requestedSize);
QSharedPointer<Request> getArtistArt(QString const& artist,
                                     QString const& album,
                                     QSize const& requestedSize);

The getThumbnail() method extracts thumbnails from local media files, whereas getAlbumArt() and getArtistArt() retrieve artwork from the remote image server. The returned Request object provides a finished signal, and methods to test for success or failure of the request and to extract a thumbnail as a QImage. The request also provides a waitForFinished() method, so the API can be used synchronously.
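For illustration, here is a minimal sketch of asynchronous use of this API. Only getThumbnail(), the finished signal, and waitForFinished() come from the description above; the Thumbnailer class name and the isValid()/image() accessors are assumed names for this sketch, not the actual interface.

// Sketch only: request a thumbnail and react to its completion.
Thumbnailer thumbnailer;  // assumed entry-point class
QSharedPointer<Request> request = thumbnailer.getThumbnail(
    QStringLiteral("/home/user/Music/track.mp3"), QSize(256, 256));
QObject::connect(request.data(), &Request::finished, [request] {
    if (request->isValid()) {             // assumed success accessor
        QImage thumb = request->image();  // assumed QImage accessor
        // hand the thumbnail to the UI here
    }
});
// For synchronous use, request->waitForFinished() blocks until done.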

Thumbnails are delivered to the caller in the size they are requested, subject to a (configurable) 1920-pixel limit. As an escape hatch, requests with width and height of zero deliver artwork at its original size, even if it exceeds the 1920-pixel limit. The scaling algorithm preserves the original aspect ratio and never scales up from the original, so the returned thumbnails may be smaller than their requested size.

DBus service

The thumbnailer is implemented as a DBus service with two interfaces. The first interface provides the server-side implementation of the three methods of the external API; the second interface is an administrative interface that can deliver statistics, clear the internal disk caches, and shut down the service. A simple tool, thumbnailer-admin, allows both interfaces to be called from the command line.

To conserve resources, the service is started on demand by DBus and shuts down after 30 seconds of idle time.
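On-demand start is standard D-Bus activation. As a sketch, the service activation file looks something like the following; the bus name and executable path here are illustrative, not the actual ones:

[D-BUS Service]
Name=com.example.Thumbnailer
Exec=/usr/lib/thumbnailer/thumbnailer-service

The session bus starts the executable the first time a method is called on the well-known name; the 30-second idle shutdown is implemented by the service itself.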

Image extraction

Image extraction uses an abstract base class. This interface is independent of media location and type. The actual image extraction is performed by derived implementations that download images from the remote server, extract them from local image files, or extract them from local streaming media files. This keeps knowledge of image location and encoding out of the main caching and error handling logic, and allows us to support new media types (whether local or remote) by simply adding extra derived implementations.

Image extraction is asynchronous, with currently three implementations:

  • Image downloader
    To retrieve artwork from the remote image server, the service talks to an abstract base class with asynchronous download_album() and download_artist() methods. This allows multiple downloads to run concurrently and makes it easy to add new local or remote image providers without disturbing the code for existing ones. A class derived from that abstract base implements a REST API with QNetworkAccessManager to retrieve images from the server.
  • Photo extractor
    The photo extractor is responsible for delivering images from local image files, such as JPEG or PNG files. It simply delegates that work to the image converter and scaler.
  • Audio and video thumbnail extractor
    To extract thumbnails from audio and video files, we use GStreamer. Due to reliability problems with some codecs that can hang or crash, we delegate the task to a separate vs-thumb executable. This shields the service from failures and also allows us to run several GStreamer pipelines concurrently without a crash of one pipeline affecting the others.

Image converter and scaler

We use a simple Image class with a synchronous interface to convert and scale different image formats to JPEG. The implementation uses Gdk-Pixbuf, which can handle many different input formats and is very efficient.

For JPEG source images, the code checks for the presence of EXIF data using libexif and, if it contains a thumbnail that is at least as large as the requested size, scales the thumbnail from the EXIF data. (For images taken with the camera on a Nexus 4, the original image size is 3264×1836, with an embedded EXIF thumbnail of 512×288. Scaling from the EXIF thumbnail is around one hundred times faster than scaling from the full-size image.)
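As a sketch of that fast path, extracting the embedded thumbnail bytes with libexif looks roughly like this (the function and its surrounding plumbing are ours for illustration, not the thumbnailer’s actual code):

#include <libexif/exif-data.h>
#include <string>
#include <vector>

// Return the embedded EXIF thumbnail from a JPEG file, if present.
// The returned bytes are themselves a small JPEG image; the caller
// checks whether it is at least as large as the requested size.
std::vector<unsigned char> exif_thumbnail(std::string const& path)
{
    std::vector<unsigned char> bytes;
    ExifData* ed = exif_data_new_from_file(path.c_str());
    if (ed) {
        if (ed->data && ed->size > 0) {
            bytes.assign(ed->data, ed->data + ed->size);
        }
        exif_data_unref(ed);
    }
    return bytes;  // empty if there is no embedded thumbnail
}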

Disk cache

The thumbnailer service optimizes performance and conserves bandwidth and battery by adopting a layered caching strategy.

Two-level caching with failure lookup

Internally, the service uses three separate on-disk caches:

  • Full-size cache
    This cache stores images that are expensive to retrieve (images that are remote or are embedded in audio and video files) at original resolution (scaled down to a 1920-pixel bounding box if the original image is larger). The default size of this cache is 50 MB, which is sufficient to hold around 400 images at 1920×1080 resolution. Images are stored in JPEG format (at a 90% quality setting).
  • Thumbnail cache
    This cache stores thumbnails at the size that was requested by the caller, such as 512×288. The default size of this cache is 100 MB, which is sufficient to store around 11,000 thumbnails at 512×288, or around 25,000 thumbnails at 256×144.
  • Failure cache
    The failure cache stores the keys for images that could not be extracted because of a failure. For remote images, this means that the server returned an authoritative answer “no such image exists”, or that we encountered an unexpected (non-authoritative) failure, such as the server not responding or a DNS lookup timing out. For local images, it means either that the image data could not be processed because it is damaged, or that an audio file does not contain embedded artwork.

The full-size cache exists because it is likely that an application will request thumbnails at different sizes for the same image. For example, when scrolling through a list of songs that shows a small thumbnail of the album cover beside each song, the user is likely to select one of the songs to play, at which point the media player will display the same cover in a larger size. By keeping full-size images in a separate (smallish) cache, we avoid performing an expensive extraction or download a second time. Instead, we create additional thumbnails by scaling them from the full-size cache (which uses an LRU eviction policy).

The thumbnail cache stores thumbnails that were previously retrieved, also using LRU eviction. Thumbnails are stored as JPEG at the default quality setting of 75%, at the actual size that was requested by the caller. Storing JPEG images (rather than, say, PNG) saves space and increases cache effectiveness. (The minimal quality loss from compression is irrelevant for thumbnails). Because we store thumbnails at the size they are actually needed, we may have several thumbnails for the same image in the cache (each thumbnail at a different size). But applications typically ask for thumbnails in only a small number of sizes, and ask for different sizes for the same image only rarely. So, the slight increase in disk space is minor and amply repaid by applications not having to scale thumbnails after they receive them from the cache, which saves battery and achieves better performance overall.

Finally, the failure cache is used to stop futile attempts to repeatedly extract a thumbnail when we know that the attempt will fail. It uses LRU eviction with an expiry time for each entry.

Cache lookup algorithm

When asked for a thumbnail at a particular size, the lookup and thumbnail generation proceed as follows:

  1. Check if a thumbnail exists in the requested size in the thumbnail cache. If so, return it.
  2. Check if a full-size image for the thumbnail exists in the full-size cache. If so, scale the new thumbnail from the full-size image, add the thumbnail to the thumbnail cache, and return it.
  3. Check if there is an entry for the thumbnail in the failure cache. If so, return an error.
  4. Attempt to download or extract the original image for the thumbnail. If the attempt fails, add an entry to the failure cache and return an error.
  5. If the original image was delivered by the remote server or was extracted locally from streaming media, add it to the full-size cache.
  6. Scale the thumbnail to the desired size, add it to the thumbnail cache, and return it.

Note that these steps represent only the logical flow of control for a particular thumbnail. The implementation executes these steps concurrently for different thumbnails.
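To make the flow concrete, here is a self-contained sketch of the six steps, with std::map and std::set standing in for the on-disk caches and std::string standing in for image data; the declared helpers are assumptions for illustration:

#include <map>
#include <optional>
#include <set>
#include <string>

using Image = std::string;  // stand-in for real image data

std::map<std::string, Image> thumbnail_cache;  // key includes requested size
std::map<std::string, Image> fullsize_cache;
std::set<std::string> failure_cache;

// Assumed helpers, not defined here.
Image scale(Image const& src, int size);
std::optional<Image> download_or_extract(std::string const& key);
bool is_expensive(std::string const& key);  // remote, or embedded in A/V media

std::optional<Image> lookup(std::string const& key, int size)
{
    std::string tkey = key + "@" + std::to_string(size);
    if (auto it = thumbnail_cache.find(tkey); it != thumbnail_cache.end())
        return it->second;                             // 1: thumbnail hit
    if (auto it = fullsize_cache.find(key); it != fullsize_cache.end()) {
        Image thumb = scale(it->second, size);         // 2: scale from full size
        thumbnail_cache[tkey] = thumb;
        return thumb;
    }
    if (failure_cache.count(key))
        return std::nullopt;                           // 3: known failure
    auto image = download_or_extract(key);             // 4: expensive path
    if (!image) {
        failure_cache.insert(key);
        return std::nullopt;
    }
    if (is_expensive(key))
        fullsize_cache[key] = *image;                  // 5: keep the original
    Image thumb = scale(*image, size);                 // 6: scale and cache
    thumbnail_cache[tkey] = thumb;
    return thumb;
}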

Designing for performance

Apart from fast on-disk caches (see below), the thumbnailer must make efficient use of I/O bandwidth and threads. This means not only making things fast, but also not unnecessarily wasting resources such as threads, memory, network connections, or file descriptors. Provided that enough requests are made to keep the service busy, we do not want it ever to wait for a download or image extraction to complete while there is something else that could be done in the meantime, and we want it to keep all CPU cores busy. In addition, requests that are slow (because they require a download or a CPU-intensive image extraction) must not block requests that are queued up behind them if those requests would result in cache hits that could be returned immediately.

To achieve a high degree of concurrency without blocking on long-running operations while holding precious resources, the thumbnailer uses a three-phase lookup algorithm:

  1. In phase 1, we look at the caches to determine if we have a hit or an authoritative miss. Phase 1 is very fast. (It takes around a millisecond to return a thumbnail from the cache on a Nexus 4.) However, cache lookup can briefly stall on disk I/O or require a lot of CPU to extract and scale an image. To get good performance, phase 1 requests are passed to a thread pool with as many threads as there are CPU cores. This allows the maximum number of lookups to proceed concurrently.
  2. Phase 2 is initiated if phase 1 determines that a thumbnail requires download or extraction, either of which can take on the order of seconds. (In case of extraction from local media, the task is CPU intensive; in case of download, most of the time is spent waiting for the reply from the server.) This phase is scheduled asynchronously from an event loop. This minimizes task switching and allows large numbers of requests to be queued while only using a few bytes for each request that is waiting in the queue.
  3. Phase 3 is really a repeat of phase 1: if phase 2 produces a thumbnail, it adds it to the cache; if phase 2 does not produce a thumbnail, it creates an entry in the failure cache. By simply repeating phase 1, the lookup then results in either a thumbnail or an error.

If phase 2 determines that a download or extraction is required, that work is performed concurrently: the service schedules several downloads and extractions in parallel. By default, it will run up to two concurrent downloads, and as many concurrent GStreamer pipelines as there are CPUs. This ensures that we use all of the available CPU cores. Moreover, download and extraction run concurrently with lookups for phases 1 and 3. This means that, even if a cache lookup briefly stalls on I/O, there is a good chance that another thread can make use of the CPU.

Because slow operations do not block lookup, this also ensures that a slow request does not stall requests for thumbnails that are already in the cache. In other words, it does not matter how many slow requests are in progress: requests that can be completed quickly are indeed completed quickly, regardless of what is going on elsewhere.

Overall, this strategy works very well. For example, with sufficient workload, the service achieves around 750% CPU utilization on an 8-core desktop machine, while still delivering cache hits almost instantaneously. (On a Nexus 4, cache hits take a little over 1 ms while concurrent extractions or downloads are in progress.)

A re-usable persistent cache for C++

The three internal caches are implemented by a small and flexible C++ API. This API is available as a separate reusable PersistentStringCache component (see persistent-cache-cpp) that provides a persistent store of arbitrary key–value pairs. Keys and values can be binary, and entries can be large. (Megabyte-sized values do not present a problem.)

The implementation uses leveldb, which provides a very fast NoSQL database that scales to multi-gigabyte sizes and provides integrity guarantees. In particular, if the calling process crashes, all inserts that completed at the API level will be intact after a restart. (In case of a power failure or kernel crash, a few buffered inserts can be lost, but the integrity of the database is still guaranteed.)

To use a cache, the caller instantiates it with a path name, a maximum size, and an eviction policy. The eviction policy can be set to either strict LRU (least-recently-used) or LRU with an expiry time. Once a cache reaches its maximum size, expired entries (if any) are evicted first and, if that does not free enough space for a new entry, entries are discarded in least-recently-used order until enough room is available to insert a new record. (In all other respects, expired entries behave like entries that were never added.)

A simple get/put API allows records to be retrieved and added, for example:

auto c = core::PersistentStringCache::open(
    "my_cache", 100 * 1024 * 1024, core::CacheDiscardPolicy::lru_only);
// Look for an entry and add it if there is a cache miss.
string key = "Bjarne";
auto value = c->get(key);
if (value) {
    cout << key << ": " << *value << endl;
} else {
    value = "C++ inventor";  // Provide a value for the key.
    c->put(key, *value);     // Insert it.
}

Running this program prints nothing on the first run, and “Bjarne: C++ inventor” on all subsequent runs.

The API also allows application-specific metadata to be added to records, provides detailed statistics, supports dynamic resizing of caches, and offers a simple adapter template that makes it easy to store complex user-defined types without the need to clutter the code with explicit serialization and deserialization calls. (In a pinch, if iteration is not needed, the cache can be used as a persistent map by setting an impossibly large cache size, in which case no records are ever evicted.)


Our benchmarks indicate good performance. (Figures are for an Intel Ivy Bridge i7-3770k 3.5 GHz machine with a 256 GB SSD.) Our test uses 60-byte string keys. Values are binary blobs filled with random data (so they are not compressible), 20 kB in size with a standard deviation of 7,000, so the majority of values are 13–27 kB in size. The cache size is 100 MB, so it contains around 5,000 records.

Filling the cache with 100 MB of records takes around 2.8 seconds. Thereafter, the benchmark does a random lookup with an 80% hit probability. In case of a cache miss, it inserts a new random record, evicting old records in LRU order to make room for the new one. For 100,000 iterations, the cache returns around 4,800 “thumbnails” per second, with an aggregate read/write throughput of around 93 MB/sec. At 90% hit rate, we see twice the performance at around 7,100 records/sec. (Writes are expensive once the cache is full due to the need to evict entries, which requires updating the main cache table as well as an index.)

Repeating the test with a 1 GB cache produces identical timings so (within limits) performance remains constant for large databases.

Overall, performance is restricted largely by the bandwidth to disk. With a 7,200 rpm disk, we measured around one third of the performance with an SSD.

Recovering from errors

The overall design of the thumbnailer delivers good performance when things work. However, our implementation has to deal with the unexpected, such as network requests that do not return responses, GStreamer pipelines that crash, request overload, and so on. What follows is a partial list of steps we took to ensure that things behave sensibly, particularly on a battery-powered device.

Retry strategy

The failure cache provides an effective way to stop the service from endlessly trying to create thumbnails that, in an earlier attempt, returned an error.

For remote images, we know that, if the server has (authoritatively) told us that it has no artwork for a particular artist or album, it is unlikely that artwork will appear any time soon. However, the server may be updated with more artwork periodically. To deal with this, we add an expiry time of one week to the entries in the failure cache. That way, we do not try to retrieve the same image again until at least one week has passed (and only if we receive a request for a thumbnail for that image again later).
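With the PersistentStringCache API described earlier, recording such an authoritative miss can be sketched like this; the lru_ttl policy name and the put() overload taking an expiry time reflect our reading of persistent-cache-cpp, so treat the details as assumptions:

#include <core/persistent_string_cache.h>
#include <chrono>
#include <string>

// Sketch: record an authoritative "no artwork" answer so no retry
// happens until the entry expires one week later.
void remember_failure(core::PersistentStringCache& failures, std::string const& key)
{
    auto one_week = std::chrono::system_clock::now() + std::chrono::hours(24 * 7);
    // Requires a cache opened with CacheDiscardPolicy::lru_ttl
    // (assumed enumerator name for LRU-with-expiry).
    failures.put(key, "authoritative miss", one_week);
}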

As opposed to authoritative answers from the image server (“I do not have artwork for this artist.”), we can also encounter transient failures. For example, the server may currently be down, or there may be some other network-related issue. In this case, we remember the time of the failure and do not try to contact the remote server again for two hours. This conserves bandwidth and battery power.

The device may also be disconnected from the network, in which case any attempt to retrieve a remote image is doomed. Our implementation returns failure immediately on a cache miss for a remote image if no network is present or the device is in flight mode. (We do not add an entry to the failure cache in this case.)

For local files, we know that, if an attempt to get a thumbnail for a particular file has failed, future attempts will fail as well. This means that the only way for the problem to get fixed is by modifying or replacing the actual media file. To deal with this, we add the inode number, modification time, and inode modification time to the key for local images. If a user replaces, say, a music file with a new one that contains artwork, we automatically pick up the new version of the file because its key has changed; the old version will eventually fall out of the cache.
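As a sketch, building such a key from a stat() call might look like this (the key format is illustrative; the service’s actual format may differ):

#include <sys/stat.h>
#include <string>

// Sketch: a cache key that changes whenever the local file is replaced
// or modified, so entries for the old content stop being hit and
// eventually fall out of the cache through LRU eviction.
std::string local_cache_key(std::string const& path)
{
    struct stat st;
    if (stat(path.c_str(), &st) != 0)
        return "";  // caller treats the file as inaccessible
    return path + ":" + std::to_string(st.st_ino)    // inode number
                + ":" + std::to_string(st.st_mtime)  // modification time
                + ":" + std::to_string(st.st_ctime); // inode change time
}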

Download and extraction failures

We monitor downloads and extractions for timely completion. (Timeouts for downloads and extractions can be configured separately.) If the server does not respond within 10 seconds, we abandon the attempt and treat it as a transient network error. Similarly, the vs-thumb processes that extract images from audio and video files can hang. We monitor these processes and kill them if they do not produce a result within 10 seconds.

Database corruption

Assuming an error-free implementation of leveldb, database corruption is impossible. However, in practice, an errant command could scribble over the database files. If leveldb detects that the database is corrupted, the recovery strategy is simple: we delete the on-disk cache and start again from scratch. Because the cache contents are ephemeral anyway, this is fine (other than slower operation until the working set of thumbnails makes it into the cache again).

Dealing with backlog

The asynchronous API provided by the service allows an application to submit an unlimited number of requests. Lots of requests happen if, for example, the user has inserted a flash card with thousands of photos into the device and then requests a gallery view for the collection. If the service’s client-side API blindly forwards requests via DBus, this causes a problem because DBus terminates the connection once there are more than around 400 outstanding requests.

To deal with this, we limit the number of outstanding requests to 200 and send another request via DBus only when an earlier request completes. Additional requests are queued in memory. Because this happens on the client side, the number of outstanding requests is limited only by the amount of memory that is available to the client.
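In sketch form, the client-side throttle looks like this (names are illustrative, not the actual client code):

#include <functional>
#include <queue>
#include <utility>

// Sketch: keep at most kMaxOutstanding requests on the DBus connection;
// everything else waits in an in-memory queue on the client side.
class RequestThrottle
{
public:
    void submit(std::function<void()> send)
    {
        if (outstanding_ < kMaxOutstanding) {
            ++outstanding_;
            send();  // forward over DBus immediately
        } else {
            pending_.push(std::move(send));  // hold in client memory
        }
    }

    // Call whenever a previously sent request completes.
    void request_finished()
    {
        --outstanding_;
        if (!pending_.empty()) {
            ++outstanding_;
            auto next = std::move(pending_.front());
            pending_.pop();
            next();
        }
    }

private:
    static constexpr int kMaxOutstanding = 200;
    int outstanding_ = 0;
    std::queue<std::function<void()>> pending_;
};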

A related problem arises if a client submits many requests for a thumbnail for the same image. This happens when, for example, the user looks at a list of tracks: tracks that belong to the same album have the same artwork. If artwork needs to be retrieved from the remote server, naively forwarding cache misses for each thumbnail to the server would end up re-downloading the same image several times.

We deal with this by maintaining an in-memory map of all remote download requests that are currently in progress. If phase 1 reports a cache miss, before initiating a download, we add the key for the remote image to the map and remove it again once the download completes. If more requests for the same image encounter a cache miss while the download for the original request is still in progress, the key for the in-progress download is still in the map, and we hold additional requests for the same image until the download completes. We then schedule the held requests as usual and create their thumbnails from the image that was cached by the first request.
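A sketch of that in-flight map (again with illustrative names):

#include <functional>
#include <map>
#include <string>
#include <vector>

// Sketch: coalesce concurrent cache misses for the same remote image
// so that the image is downloaded only once.
class InFlightDownloads
{
public:
    // Returns true if the caller should start the download for key;
    // false if a download is already running and retry was queued to
    // run once it completes.
    bool claim(std::string const& key, std::function<void()> retry)
    {
        auto it = in_progress_.find(key);
        if (it != in_progress_.end()) {
            it->second.push_back(std::move(retry));
            return false;
        }
        in_progress_[key];  // mark the download as in progress
        return true;
    }

    // Called when the download for key completes (its result is now in
    // the cache): re-run the held requests, which repeat phase 1 and
    // find the freshly cached image.
    void complete(std::string const& key)
    {
        auto held = std::move(in_progress_[key]);
        in_progress_.erase(key);
        for (auto& retry : held)
            retry();
    }

private:
    std::map<std::string, std::vector<std::function<void()>>> in_progress_;
};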


The thumbnailer runs with normal user privileges. We use AppArmor’s aa_query_label() function to verify that the calling client has read access to a file it wants a thumbnail for. This prevents one application from accessing thumbnails produced by a different application, unless both applications can read the original file. In addition, we place the entire service under an AppArmor profile to ensure that it can write only to its own cache directory.


Overall, we are very pleased with the design and performance of the thumbnailer. Each component has a clearly defined role with a clean interface, which made it easy for us to experiment and to refine the design as we went along. The design is extensible, so we can support additional media types or remote data sources without disturbing the existing code.

We used threads sparingly and only where we saw worthwhile concurrency opportunities. Using asynchronous interfaces for long-running operations kept resource usage to a minimum and allowed us to take advantage of I/O interleaving. In turn, this extracts the best possible performance from the hardware.

The thumbnailer now runs on Ubuntu Touch and is used by the gallery, camera, and music apps, as well as for all scopes that display media thumbnails.

This article was originally published on Michi Henning's blog.

Tim Peeters

Adaptive page layouts made easy

Convergent applications

We want to make it easy for app developers to write an app that can run on different form factors without changes in the code. This implies that an app should support screens of various sizes, and the layout of the app should be optimal for each screen size. For example, a messaging app running on a desktop PC in a big window could show a list of conversations in a narrow column on the left, and the selected conversation in a wider column on the right side. The same application on a phone would show only the list of conversations, or the selected conversation with a back-button to return to the list. It would also be useful if the app automatically switches between the 1-column and 2-column layouts when the user resizes the window, or attaches a large screen to the phone.

To accomplish this, we introduced the AdaptivePageLayout component in Ubuntu.Components 1.3. This version of  Ubuntu.Components is still under development (expect an official release announcement soon), but if you are running the latest version of the Ubuntu UI Toolkit, you can already try it out by updating your import Ubuntu.Components to version 1.3. Note that you should not mix import versions, so when you update one of your components to 1.3, they should all be updated.


AdaptivePageLayout is an Item with the following properties and functions:

  • property Page primaryPage
  • function addPageToCurrentColumn(sourcePage, newPage)
  • function addPageToNextColumn(sourcePage, newPage)
  • function removePages(page)

To understand how it works, imagine that internally, the AdaptivePageLayout keeps track of an infinite number of virtual columns that may be displayed on your screen. Not all virtual columns are visible on the screen. By default, depending on the width of your AdaptivePageLayout, either one or two columns are visible. When a Page is added to a virtual column that is not visible, it will instead be shown in the right-most visible column.

The Page defined as primaryPage will initially be visible in the first (left-most) column and all the other columns are empty (see figure 1).

Figure 1: Showing only primaryPage in layouts of 100 and 50 grid-units.
Showing only primaryPage at 100 grid units. Showing primaryPage at 50 grid units.

To show another Page in the first column, call addPageToCurrentColumn(), passing the current page (primaryPage) and the new page as parameters. The new page will then show up in the same column, with a back button in the header to close the new page and return to the previous page (see figure 2). So far, AdaptivePageLayout is no different from a PageStack.

Figure 2: Page with back button in the first column.
Page with back button in the first column at 100 grid units. Page with back button in first column at 50 grid units.

The differences with PageStack become evident when you want to keep the first page visible in the first column while adding a new page to the next column. To do this, call addPageToNextColumn() with the same parameters as addPageToCurrentColumn() above. The new page will now show up in the following column on the screen (see figure 3).

Figure 3: Adding a page to the next column.
Added a page to the next column at 100 grid units. Added a page to the next column at 50 grid units.

However, if you resize the window so that it fits only one column, the left column will be hidden, and the page that was in the right column will now have a back button. Resizing back to get the two-column layout will again give you the first page on the left, and the new page on the right. Call removePages(page) to remove page and all pages that were added after page was added. There is one exception: primaryPage is never removed, so removePages(primaryPage) will remove all pages except primaryPage and return your AdaptivePageLayout to its initial state.

AdaptivePageLayout automatically chooses between a one- and a two-column layout depending on the width of the window. It also automatically shows a back button in the correct column when one is needed, and synchronizes the header size between the different columns (see figure 4).

Figure 4: Adding sections to any column increases the height of the header in every column.
Added a page with sections to the right column at 100 grid units. Added a page with sections at 50 grid units.

Future extensions

The version of AdaptivePageLayout that is now in the UI toolkit is only the first version. What works now will keep working, but we will extend the API to support the following:

  • Layouts with more than two columns
  • Use different conditions for switching between layouts
  • User-resizable columns
  • Automatic and manual hiding of the header in single-column layouts
  • Custom proxy objects to support Autopilot tests for applications

Below you can read the full source code that was used to create the screenshots above. The screenshots do not cover all the possible orders in which the pages on the left and right can be added, so I encourage you to run the code yourself and discover its full behavior. We are looking forward to seeing your first applications using the new AdaptivePageLayout component soon :). Of course, if there are any questions you can leave a comment below or ping members of the SDK team (I am t1mp) in #ubuntu-app-devel on Freenode IRC.


import QtQuick 2.4
import Ubuntu.Components 1.3

MainView {

    AdaptivePageLayout {
        id: layout
        anchors.fill: parent
        primaryPage: rootPage

        Page {
            id: rootPage
            title: i18n.tr("Root page")

            Column {
                anchors {
                    left: parent.left
                }

                Button {
                    text: "Add page left"
                    onClicked: layout.addPageToCurrentColumn(rootPage, leftPage)
                }
                Button {
                    text: "Add page right"
                    onClicked: layout.addPageToNextColumn(rootPage, rightPage)
                }
                Button {
                    text: "Add sections page right"
                    onClicked: layout.addPageToNextColumn(rootPage, sectionsPage)
                }
            }
        }

        Page {
            id: leftPage
            title: i18n.tr("First column")

            Rectangle {
                anchors {
                    fill: parent
                }

                Button {
                    anchors.centerIn: parent
                    text: "right"
                    onTriggered: layout.addPageToNextColumn(leftPage, rightPage)
                }
            }
        }

        Page {
            id: rightPage
            title: i18n.tr("Second column")

            Rectangle {
                anchors {
                    fill: parent
                }

                Button {
                    anchors.centerIn: parent
                    text: "Another page!"
                    onTriggered: layout.addPageToCurrentColumn(rightPage, sectionsPage)
                }
            }
        }

        Page {
            id: sectionsPage
            title: i18n.tr("Page with sections")
            head.sections.model: [i18n.tr("one"), i18n.tr("two"), i18n.tr("three")]

            Rectangle {
                anchors {
                    fill: parent
                }
            }
        }
    }
}

Daniel Holbach

Announcing UbuContest 2015

Have you read the news already? Canonical, the Ubucon Germany 2015 team, and the UbuContest 2015 team are happy to announce the first UbuContest! Contestants from all over the world have until September 18, 2015 to build and publish their apps and scopes using the Ubuntu SDK and Ubuntu platform. The competition has already started, so register your competition entry today! You don’t have to create a new project; submit what you have and improve it over the next two months.

But we know it's not all about shiny new apps and scopes! A great platform also needs content, great design, testing, documentation, bug management, developer support, interesting blog posts, technology demonstrations and all of the other incredible things our community does every day. So we give you, our community members, the opportunity to nominate other community members for prizes!

We are proud to present five dedicated categories:

  1. Best Team Entry: A team of up to three developers may register up to two apps/scopes they are developing. The jury will assign points in categories including "Creativity", "Functionality", "Design", "Technical Level" and "Convergence". The top three entries with the most points win.

  2. Best Individual Entry: A lone developer may register up to two apps/scopes he or she is developing. The rest of the rules are identical to the "Best Team Entry" category.

  3. Outstanding Technical Contribution: Members of the general public may nominate candidates who, in their opinion, have done something "exceptional" on a technical level. The nominated candidate with the most jury votes wins.

  4. Outstanding Non-Technical Contribution: Members of the general public may nominate candidates who, in their opinion, have done something exceptional, but non-technical, to bring the Ubuntu platform forward. So, for example, you can nominate a friend who has reported and commented on all those phone-related bugs on Launchpad. Or nominate a member of your local community who did translations for Core Apps. Or nominate someone who has contributed documentation, written awesome blog articles, etc. The nominated candidate with the most jury votes wins.

  5. Convergence Hero: The "Best Team Entry" or "Best Individual Entry" contribution with the highest number of "Convergence" points wins. The winner in this category will probably surprise us in ways we have yet to imagine.

Our community judging panel members Laura Cowen, Carla Sella, Simos Xenitellis, Sujeevan Vijayakumaran and Michael Zanetti will select the winners in each category. Winners will be awarded items from a huge pile of prizes, including travel subsidies for the first-placed winners to attend Ubucon Germany 2015 in Berlin, four Ubuntu Phones sponsored by bq and Meizu, t-shirts, and bundles of items from the official Ubuntu Shop.

We wish all the contestants good luck!

Go to or for more information, including how to register and nominate folks. You can also follow us on Twitter @ubucontest, or contact us via e-mail at


April Wang



To celebrate the Ubuntu phone Developer Edition launch in China, Canonical organized a “celebrate Ubuntu” phone hackathon in Beijing. It was also hosted as part of the ongoing China Mobile & Ubuntu Developer Innovation Contest, and all projects coded during the hackathon could be submitted to the contest afterwards. This 30+ hour hackathon was packed with creativity, excitement and laughter; it was exhausting but amazingly fun.

With the help of media partners (TechCrunch CN, Tech Noda) and local tech partners and communities (GitCafe, MS OpenTech, Ubuntu Kylin, Kaiyuanshe, SegmentFault, CSDN, linuxCN, OSCN, Linuxeden, QTCN), over 120 people signed up online before the hackathon, and 70+ people turned up onsite.



Being the first ever Ubuntu phone hackathon in China, it didn’t have any fixed topic or project requirements; anything was welcome as long as it would run on an Ubuntu phone in the end. The entire hackathon was driven by pure innovation and creativity.

Ideas and solutions to problems that could benefit or entertain phone users were the key to this hackathon. There were 7 different awards to credit the different types of ideas:

  • Avant-garde Award - for the most innovative ideas and projects

  • Geek Award - for hardcore techy geeky projects

  • Foolproof Award - for most user friendly projects

  • TalkDaTalk Award - for best project demonstration

  • Stunning Award - for best design projects

  • Entertaining Award - for most fun and entertaining projects

  • Special Content Award - for certain most needed content providing categories

And every team who stood up and provided a project demo received a final demo prize.

The final judging panel was made up of teams from Canonical and the China Mobile Device Company. Each project was reviewed on its creativity, usability, problem-solving level, technical difficulty, design, and level of completion. A 30-hour straight hackathon is an intense exercise; in the end, 14 teams proved their talent and effort through their live demo sessions. Four Meizu MX4 phones were given out to the top 4 teams, and all final teams received a Qt Core Tutorial book and numerous small gifts.



A live Weibo tweet and hackathon countdown wall, put together by @penk, provided a great interactive platform for onsite participants, online fans and community members.


The 30+ hour hackathon was fueled with energy, determination and of course food, water, cans of Red Bull and sweets! :) Here are a few clips of how the energy and creativity flowed throughout the event.
Some of the teams were fresh learners who spent the first day learning and the second day coding; some of them ended up among the final winners too!


Then of course, all work and no play makes Jack a dull boy. Various gaming sessions and polaroid fun took place naturally to keep things alive and exciting!




Now let’s take a look at some of the finalists and their amazing work from the 30-hour hackathon.

Douban FM

A great QML app with a neat design and smooth user login experience, which also enables multi-device sync under your own account. Coded by the one-man team @DawnDIYSoft (the guy on the right-hand side of the picture), who is also behind the current Youku scope in the Ubuntu store.



A brand new programming language, re-implemented with an interpreter built in JavaScript and ported to the Ubuntu phone. Their project can be found here on GitHub. As you have probably guessed by now, they are of course the winner of the Geek Award, which was presented by Caobin, Project Manager from the China Mobile Device Company.


Memory Dictionary
Memory Dictionary utilizes fragmented time slots in your life, such as when you are travelling on a metro train, to help you memorize new words and phrases (it is an English-language learning app). The app was already built for macOS, iOS and Android, and is very popular on those platforms. During the hackathon, the team ported it to the Ubuntu phone based on Cordova.


Couple Like

An HTML5 app that compares two people’s pictures to conclude what kind of couple the two would make. It’s light-hearted, fun and packed with love, coded by a couple who were on the dance floor not long before the demo. It was also one of the projects running smoothly on the phone by the end of the hackathon.


Dou Dizhu (poker game)

A single-player poker game with its own memory management system and AI. It was implemented in Qt Widgets, so it still needs some work to port it to the Ubuntu phone. But judging from the desktop demo, it already looks addictive and entertaining.


Utu / uPhoto

An app implemented in C++ for image/photo processing. It is still a WIP project, but exciting enough for us to know that soon we will be able to beautify our snapshots on our Ubuntu phones.


uChat (Ubuntu Chat)

A dating/messaging service application dedicated to anyone who finds it difficult to make the first move, or the right move, when it comes to meeting someone. It involves both server-side and client-side technology. Despite their initial plan of using HTML5 based on a webapp or Cordova, the prototype was in the end built with QML by a team of university students in their second and third years.


A few more clips of the hackathon and of course a happy group shot.



Read more
David Callé

Add a C++ backend to your QML UI

Whether you are creating a new app or porting an existing one from another ecosystem, you may need more backend power than the QML + JavaScript duo proposed in the QML app tutorial.

Let's have a peek at how to add a C++ backend to your application, using system libraries or your own, and vastly increase its performance and potential features.

In this tutorial, you will learn how to use and invoke C++ classes from QML and integrate a 3rd party library into your project.
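
To give you a taste before you dive in: the usual pattern is to register a QObject-based class with the QML type system and instantiate it from QML. Here is a minimal sketch of the QML side only; the module name, the Backend type and its message property are illustrative placeholders, assuming the C++ side has called qmlRegisterType accordingly.

import QtQuick 2.4
// Illustrative module: registered from C++ with
// qmlRegisterType<Backend>("MyApp.Backend", 1, 0, "Backend")
import MyApp.Backend 1.0

Item {
    Backend {
        id: backend
    }

    Text {
        // "message" stands in for any Q_PROPERTY the C++ class exposes.
        text: backend.message
    }
}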

Read the tutorial

Read more
Pawel Stolowski

Cleaning up scopes settings

The scopes architectures of Unity 7, which provides the Ubuntu shell and default UX on current desktops, and Unity 8, which powers the phone and soon the convergent desktop, differ to a large degree when it comes to the visibility of data sources. Future Unity 8 builds will obsolete the legacy privacy flag in favour of a clearer way for users to decide where their data is being sent.

Scope searches and preserving privacy in Unity 7

By default, using a regular Dash search in Unity 7 will first contact Canonical's smart scopes server, which recommends the best or most promising scopes for the search term. Then, as a second step, those scopes are queried for actual results, which will finally be presented.

However, this approach means that the user doesn't necessarily know in advance which scopes are queried and that the search term will be hitting the smart scopes server. Although the data sent to the server is anonymized, we understood that some users might still be concerned about data privacy. It was for that reason that the privacy flag was introduced: a setting for scopes that prevents access to the smart scopes server.

Scope searches in Unity 8

The scopes architecture in Unity 8 is quite different: there is no smart scope server involved in the search lifecycle.

Instead, each query is only sent to the currently active scope (that is, the one that is currently visible), so that the user always knows where their search data ends up.

For the case where the current scope is aggregating multiple other scopes, its settings page will list all aggregated scopes, offering the possibility to individually disable each one if desired.

Obsoleting the privacy flag in Unity 8

With this clear visibility of what's being queried, and the possibility to easily disable sources/scopes, the privacy flag becomes redundant in Unity 8. As such, we have decided to remove this legacy setting in one of our next phone/Unity 8 snapshots.

If you have been using this flag under Unity 8, either unfavorite or disable the respective scopes from the aggregator settings to reach the same result. You can also uninstall the individual scopes.

Creating privacy in Unity 8

In the shell you can see two kinds of scopes: normal scopes and aggregator scopes. Normal scopes can access either local or remote data, but never both at the same time. So, if there is a scope called “My Music”, then this scope will only query your phone, while a “BBC News” scope will only query its remote source. If you don’t want to use the “BBC News” scope, then do not invoke it (via manage dash) or favourite the scope (similar to not invoking (web)apps).

Aggregator scopes, in contrast, can aggregate all kinds of scopes, whether they access local or remote data. If you’re concerned about a specific scope, you can disable it via the aggregator scope’s settings page that lists all scopes being aggregated. However, given that most scopes deal with remote data, it will be faster to just unfavourite the respective aggregator via “Manage Dash” and favourite the interesting scopes dealing with local data, like “My Music” or “My Videos”. This also has the benefit of not having (almost) empty dash pages.

Read more
Zoltán Balogh

Sprinting for convergence

Convergence is all around. Our deeply loved UI Toolkit, which primarily targeted touch environments, is converging to a world where users might have keyboards and pointer devices. But that is just one point. The innovative track for Ubuntu is called Snappy, and at the same time the SDK is converging to the desktop. We are moving in the direction where frameworks and applications are packaged and distributed in a new model. It is exciting to see how the different development tracks move in the same direction.

Last week the SDK team spent quality time with the creative folks from the design team and with master ninjas from the QA team to put down the foundations of a converged UI Toolkit and SDK.

We had two major questions when we entered the sprint:

How will UI components look and behave when pointer and keyboard devices become available, even at runtime?

How can we enable scope and application development for literally any kind of Ubuntu device?

Our offering is not only for smartphones. The UI components are as good on a big-screen desktop as on a tablet-sized device or on any small device with a screen. I can totally imagine the UI Toolkit on a car’s infotainment dash or on the control panel of an intelligent house. But before the bold dreams, we focus on bringing the components to the classic desktop environment.

Application convergence

When we talk about convergence we mostly mean application convergence. The “definition of done” is when one can start an application on a touchscreen phone and the application scales and adapts automatically to a bigger screen with keyboard and mouse when plugged into the device.
The driving applications are the Ubuntu browser, the Dekko email client, music player, calendar, document viewer, messaging, address book, snap decisions/alerts and Telegram.

In addition, the toolkit will provide an API to control window and page sizing, and a component to easily transition from a one-column pagestack to a multi-column view supporting 2 or more columns. A detachable header component is also planned, so applications can put headers in different views, not only in a MainView. But more about these below.

Foundations and tools

To make a converged SDK we need a solid and sustainable foundation. Not only does the UITK depend on the Qt stack, but our own IDE needs it too. At the sprint we already made working prototypes of distro-decoupled Qt and IDE packages. In other terms, it means that we can produce the Qt, UI Toolkit and SDK Tools snappy packages pretty much any time when needed. The cool thing about keeping our eyes on snappy is that this new structure motivates us to cut the loose ends from our packages and make the SDK more portable and easier to build.

The promise is that we will have distro-independent (snappable) SDK tools and UITK with Qt 5.4 for anything that is compatible with Ubuntu 14.04.

UI Toolkit 2.0 plans

Improving performance and overall quality are the keywords for UITK 2.0. We will list those components that would perform better if they were implemented in C++, starting with MainView as it is needed for the convergence story.

We want to upstream the UITK to Qt. Living close to our upstream foundation brings great value. Refactoring the source tree to have a single Ubuntu.Components module without submodules is the natural first step towards upstreaming. It will make the UITK more compatible with other Qt modules. Early bits might land in 1.3 depending on the needs. The detailed API planning will start at the end of Ubuntu 15.09 and is planned to land in 16.04.


Scopes

The scopes toolkit will slowly migrate from the Unity8 space to the UI Toolkit. It means that the components used now for scope development will become available for classic application development. The scopes APIs are also under heavy refactoring. According to the present plans, the UITK will be available to scopes, and scopes will become more active aggregators than ever. The key point with scopes is that we will put a lot of effort into scopes development, as they are one of the most visible differentiators from other platforms.


Scrollbars

The scrolling user experience will be the same as in Unity 7, with the exception that the thumb appears inside the window area. An issue with the current version is that the thumb covers actions in the UI, because it cannot be positioned outside the window, and the thumb is revealed when users approach an action on the right-hand side. The thumb follows the mouse cursor position and hides when the mouse does not move. The design team is currently prototyping two different scrollbars, one with a thumb and one without (which visually would look the same as in Qt Creator, for example), and we will evaluate which fits the designs better and release the most appropriate and usable one.


Tooltips

When mouse pointers are available, the tooltip appears when hovering over a component. With a touch interface, a long-press interaction is under investigation, which would invoke a tooltip on a component or action.

The tooltip appears under the mouse cursor after a timeout (1 second), positioned the same way as popovers, and disappears after a timeout (10 seconds) or when the mouse is moved out of the component’s area.

Date and time pickers

This is one of the components that got a heavy design facelift. The components are no longer tumbler-based Pickers, but are composed of an editable component which, when tapped/clicked, opens a popup, in which there can be a calendar component for date picking, or a picker for time picking. The main component is a text input with no text cursor; when activated with the keyboard, the entire content will be selected and can be edited at once, i.e. no cursor positioning will be possible. The popups will be full-screen dialogs on screens smaller than 40x71 GU, and popovers on bigger screens.

Dropdown Menus & popovers

We are considering reusing the Qt Quick Controls Menu components, adapting them to the toolkit’s theming and actions. Keyboard shortcuts and accelerators will only be visible in drop-down menus when a mouse/keyboard is attached. The context menus will be single-level menus in the first iteration, and cascading menus might come later if needed. The individual application menus are not high priority, but we will listen to the app developers and hear what they need.

Expandables, ListItems module

Right now we have OptionSelector and ItemSelector, which is confusing, and neither of them is configurable enough. The old Ubuntu.Components.ListItems has a pile of components which are just not flexible enough, and they are all underperforming. Expansion will be introduced to the new ListItem, and new layouts will be made which will be flexible enough to survive eventual design changes. This is not a high priority for convergence, however it will serve as the ground for phone and desktop layouts, as well as the prerequisite for the application menus. We will keep trying to separate the layout from the ListItem; hopefully we will manage to do that with adequate performance.

Accessing ListItem actions on desktop

At first, a mouse right-click will bring up a `contextual` menu which will contain leading, trailing and default actions, as well as selection/drag modes, without any API change on ListItem. After the application menu is implemented, we will enable the context menu on the ListItem together with other components.

Panels behaviour & MultiColumnView

For the 2-column pagestack we still need to find the best navigation model, more precisely the way we handle the headers of the pages, cascading or not. We gathered the tasks we have to complete in order to provide convergent view handling:

  • PageStack cannot be adapted to the new UX without major API changes, therefore we will introduce a component called MultiColumnView, which can transition from one column to 2 or more columns. The component will put Pages side by side, and will maintain a stack depending on where a page is pushed, above the current page or next to it. Applications using this component must specify the minimum and maximum sizes for the page.

  • Title, or header, handling should be detached from MainView, and there will be a Header component which can then be used in a bottom edge or as a ListView’s header component.

  • The bottom edge will be used on the desktop too, and there will be a component called BottomEdgeHint which provides a clickable component if there’s a mouse. The bottom edge swipe known from the touch environment will simply translate to a new clickable component which appears when the mouse hovers over it. The content of the component (for example a pagestack) depends on what the developer wants.

Focus handling

Focus handling concerns not only TAB/Shift+TAB navigation between components, but also keyboard navigation inside composite components, such as ListView, ComboButton, text inputs, the header, etc. The focus highlight is more or less agreed on, however a little prototyping is ongoing to figure out whether we can do some nice effects on it or not.


Read more
Daniel Holbach

Thanks to the tireless work of Oliver Grawert we now have a handy tool called node-snapper which very easily turns your node.js project into a .snap package. It automatically takes care of bundling required libraries and other related node.js projects, and will make a multi-arch ("fat") package available, so it will immediately work on armhf and amd64 architectures.

Intrigued? Check out our tutorial Turning node.js projects into snap packages.

Read more
David Callé

Have you taken Vision Mobile’s developer survey? The survey covers your development in Mobile, Desktop, Cloud, and IoT this year; there is something all devs can contribute towards to help shape the findings of this survey.

Participating is easy - take the 10-minute Developer Skill Census survey and enter a draw to win prizes such as the BQ Aquaris E4.5 Ubuntu Edition, iPhone 6, Apple Sports Watch, Oculus Rift Dev Kit, and many more. A free chapter from one of VisionMobile’s premium paid reports, taking a close look at app profits & costs, will also be given immediately upon completion.

The survey closes on 5th June - enter the survey now

Thanks to everyone who has already completed the survey!

Read more
Christian Dywan

We all love QML because it allows for fast prototyping, and not only that, it's a very efficient tool for production applications. The complexity of C and C++ is hidden behind a neat and simple API. Many if not most app developers these days take advantage of that without even having to know the implementation details. Most of the Ubuntu UI Toolkit is pure QML, except for performance-critical elements like the new ListItem or the theming engine.

There's a notable flaw, however, in QML as a language when it comes to versioning. Any QML component is made known to the engine in one of two ways. One is qmldir, which essentially is a text file listing type names with version numbers and filenames - unfortunately there's no error handling whatsoever, so qmldir files in production use are anything but flawless, and mistakes including missing files won't be noticed easily, made worse by the fact that QML automagically recognizes files as class names regardless of being registered anywhere. The other way is qmlRegisterType in one of its various incarnations - seemingly with built-in support for minor revisions, which in fact are completely unrelated to QML versions.

Looking further at how classes behave, it's not looking much better either. There's no support for versions in functions, properties or signals. All members will show up in all versions the same QML file is registered to. Additions as well as changes affect all versions - unless you fork the implementation, which is what we do for the Ubuntu UI Toolkit these days to ensure new versions don't break existing code, with the exception of bug fixes. To make matters worse, if the implementation imports another, newer version, the public API will follow suit. Regardless of the policy of a particular project, there's no easy way of ensuring the public API is what you want it to be; it's just too fallible.

Fortunately the Ubuntu UI Toolkit has employed a solution that's now become available for everyone:


Usage: apicheck [-v[v]] [-qml] [-json] IMPORT_URI [...IMPORT_URI]

Generate an API description file of one or multiple components.
Example: apicheck Ubuntu.Components
    apicheck --json Ubuntu.DownloadManager

The following rules apply for inclusion of public API:

 - Types not declared as internal in qmldir
 - C++ types exported from a plugin
 - Properties and functions not prefixed with __ (two underscores)
 - Members of internal base classes become part of public components


It's designed to serialize the public QML API in a way that is human-readable as well as easy to process in a programmatic fashion. Let's try it out, shall we?

/usr/lib/x86_64-linux-gnu/ubuntu-ui-toolkit/apicheck Ubuntu.Components > components.api

This will give you something like the following in the components.api file:

Ubuntu.Components.PageHeadConfiguration 1.1: Object
    readonly property Action actions
    property Action backAction
    property Item contents
    property color foregroundColor
    property string preset
    readonly property PageHeadSections sections
Ubuntu.Components.PageHeadConfiguration 1.3: Object
    readonly property Action actions
    property Action backAction
    property Item contents
    property color foregroundColor
    property bool locked
    property string preset
    readonly property PageHeadSections sections
    property bool visible
Ubuntu.Components.UbuntuShape.HAlignment: Enum
Ubuntu.Components.ViewItems 1.2: QtObject
    property bool dragMode
    signal dragUpdated(ListItemDrag event)
    property bool selectMode
    property QList<int> selectedIndices
Ubuntu.Components.i18n 1.0 0.1: QtObject
    property string domain
    property string language
    function bindtextdomain(string domain_name, string dir_name)
    function string tr(string text)
    function string tr(string singular, string plural, int n)
    function string dtr(string domain, string text)
    function string dtr(string domain, string singular, string plural, int n)
    function string ctr(string context, string text)
    function string dctr(string domain, string context, string text)
    function string tag(string text)
    function string tag(string context, string text)

There are, in order, a QML component, an enum, an attached property and a singleton, all read from the typesystem in the way they will be available to QML applications.

Now, in addition to reviewing this file with the naked eye, you can also use diff:

diff -F '[.0-9]' -u components.api{,.new}

Now let's imagine we're making some changes to some of the classes; running apicheck again will yield this result:

@@ -415,11 +415,11 @@ Ubuntu.Components.PageHeadConfiguration
 Ubuntu.Components.PageHeadConfiguration 1.3: Object
     readonly property Action actions
     property Action backAction
-    property Item contents
+    property var contents
     property color foregroundColor
     property bool locked
     property string preset
-    readonly property PageHeadSections sections
+    property PageHeadSections sections
     property bool visible
 Ubuntu.Components.PageHeadSections 1.1: QtObject
     property bool enabled
@@ -1001,7 +1001,7 @@ Ubuntu.Components.UbuntuShape.FillMode:
 Ubuntu.Components.UbuntuShape.HAlignment: Enum
-    AlignRight
+    AlignTop
 Ubuntu.Components.UbuntuShape.VAlignment: Enum
@@ -1017,7 +1017,6 @@ Ubuntu.Components.UriHandler 1.0 0.1: Qt
 Ubuntu.Components.ViewItems 1.2: QtObject
     property bool dragMode
     signal dragUpdated(ListItemDrag event)
-    property bool selectMode
     property QList<int> selectedIndices
 Ubuntu.Components.i18n 1.0 0.1: QtObject
     property string domain
@@ -1027,7 +1026,7 @@ Ubuntu.Components.i18n 1.0 0.1: QtObject
     function string tr(string singular, string plural, int n)
     function string dtr(string domain, string text)
     function string dtr(string domain, string singular, string plural, int n)
-    function string ctr(string context, string text)
+    function string ctr(string context, string text, bool newArgument)
     function string dctr(string domain, string context, string text)
     function string tag(string text)
     function string tag(string context, string text)

See what happened there? Several changes show up in the diff output, including changed arguments, removed and added members and even the removal of the readonly keyword.

In the case of the Ubuntu UI Toolkit, a components.api file lives in the repository. A qmake target generates the API file from the local branch and prints a diff of the two files. This is run as part of make check, meaning any changes to the API become visible at the time you run unit tests, as well as in CI builds for merge requests made on Launchpad. Any changes will cause make check to fail, so the branch has to include an updated components.api, which shows up in Launchpad reviews and bzr command line tools.
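
If you want a similar safety net in your own project, one possible wiring is a qmake extra target. The following is a hypothetical sketch, not the actual UITK build file, reusing the apicheck path and module name from the example above:

# Sketch of a qmake extra target running the API diff as `make check`.
# diff exits non-zero on any difference, which fails the check.
APICHECK = /usr/lib/x86_64-linux-gnu/ubuntu-ui-toolkit/apicheck
check.commands = $$APICHECK Ubuntu.Components > components.api.new && \
                 diff -F '[.0-9]' -u components.api components.api.new
QMAKE_EXTRA_TARGETS += check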

If any of this got you excited, maybe you wanna add it to your own components and improve QA?

Read more
David Callé

Are you involved in Ubuntu phone, desktop, cloud or IoT development? Voice your opinion on what factors contribute to your choice of developing on Ubuntu by getting involved in the biggest developer survey yet.

Vision Mobile have launched the 9th edition of their developer economics survey today, covering developer sentiment across platforms, revenues, apps, tools, APIs, segments and regions. This ambitious survey covers everything from mobile and desktop to cloud and IoT. Key insights from the survey will be provided as a free download in late July, and a free chapter from one of VisionMobile’s premium paid reports, taking a close look at app profits & costs, will also be given immediately upon completion.

Tell us your thoughts about the latest developer trends and take the 10-minute survey now - some amazing prizes are up for grabs, including the BQ Aquaris E4.5 Ubuntu Edition, Apple Sports Watch, iPhone 6, Oculus Rift Dev Kit + many more gadgets!

Read more
David Callé

Internationalizing your QML app


As a developer, you probably want to see your apps in many hands. One way to make it happen is to enable your application for translation.

With minimal effort, you can mark your application strings for translation, expose them to community translators and integrate these translations into your package. The translations build process is handled by the SDK itself, and if you happen to use Launchpad, translators will quickly see your project and help you; but you still need to mark your user-visible strings as translatable.
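
As a quick taste of what marking strings looks like in practice, here is a minimal sketch; the application name is a placeholder, and it doubles as the translation domain:

import QtQuick 2.4
import Ubuntu.Components 1.1

MainView {
    // Placeholder name; it also serves as the gettext translation domain.
    applicationName: "com.ubuntu.developer.you.yourapp"

    Label {
        // i18n.tr() marks the string for extraction and returns the
        // translated text at runtime.
        text: i18n.tr("Hello, translators!")
    }
}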

Let's get started ›

Read more
Loïc Molinari

A magnifying glass in QML

To create sharp visual components, we need to make sure our renderings look good at the pixel level. This is a common task, and the terms precision and pixel-perfectness have become ubiquitous in discussions among programmers and designers at Canonical. In recent years, the industry started to increase the pixel density of screens again (remember the CRT era), resulting in a higher number of pixels within a given space (see Retina Display, for instance). A consequence is that jaggies are less visible than before, because we are reaching the point where the pixels are small enough that the eye is not able to detect them. In an idealized world of high-density screens, that would completely remove the need for anti-aliasing algorithms to smooth edges, but the fact of the matter is that we are not there yet, and we will still have to thoroughly inspect the quality of anti-aliasing algorithms for a while.

Handheld magnifying glass

At a previous job, a colleague of mine used to keep a handheld magnifying glass on his desk. I was quite amused to see him glued to his screen validating the visual quality of commits with this thing. As the graphics engine programmer, I barely remember the reason why I never proposed the inclusion of a software magnifier; it could be because of the overloaded backlog we had to deal with at the time, but I guess it actually was just out of sheer mischief. Most desktop environments include a software magnifier, but depending on its quality (efficiency and ease of use), it often makes sense to integrate a custom magnifier directly in the application being developed (it makes less sense to ship it in release builds though...). This article explains how to implement an efficient one with QML using offscreen framebuffers and shaders.

Offscreen framebuffers (exposed as FBOs in OpenGL), vertex shaders and fragment shaders are now widely available in mobile and mid-range GPUs, allowing the creation of interesting real-time post-processing effects for most devices on the market. Magnification, or to be more precise zooming & panning (magnification solely being the process of rendering an image at a higher scale), is one of them. In low-level graphics programming terms, all it takes is a first pass that renders the scene to an FBO and a second pass that renders a texture-mapped quad to the default framebuffer, reading the FBO as a texture. Image zooming and panning is a basic 2D scale and translate transformation that can be efficiently implemented by tweaking the texture coordinates used to sample the FBO in the second pass. The vertex shader, executed for the 4 vertices making up our quad, will easily take care of it using a single multiply-add op (transformed_coords = scale * coords + translation), and the hardware-accelerated rasterizer and texture units will make the actual rendering very efficient. In order to clearly distinguish the magnified pixels, it is important to use a simple nearest-neighbour filter. These low-level bits are nicely exposed to QML through the ShaderEffectSource and ShaderEffect items. The former allows rendering a given Item to an FBO, and the latter provides support for quads rendered using custom vertex and fragment shaders.

Here is the QML code of the magnifier:

import QtQuick 2.4

Item {
    // Public properties.
    property Item scene: null
    property MouseArea area: null

    id: root
    visible: scene != null
    property real __scaling: 1.0
    property variant __translation: Qt.point(0.0, 0.0)

    // The FBO abstraction handling our first offscreen pass.
    ShaderEffectSource {
        id: effectSource
        anchors.fill: parent
        sourceItem: scene
        hideSource: scene != null
        visible: false
        smooth: false  // Nearest neighbour texture filtering.
    }

    // The shader abstraction handling our second pass, with the
    // translation and scaling in the vertex shader and the simple
    // texturing from the FBO in the fragment shader.
    ShaderEffect {
        id: effect
        anchors.fill: parent
        property real scaling: __scaling
        property variant translation: __translation
        property variant texture: effectSource

        vertexShader: "
            uniform highp mat4 qt_Matrix;
            uniform mediump float scaling;
            uniform mediump vec2 translation;
            attribute highp vec4 qt_Vertex;
            attribute mediump vec2 qt_MultiTexCoord0;
            varying vec2 texCoord;
            void main() {
                texCoord = qt_MultiTexCoord0 * vec2(scaling) + translation;
                gl_Position = qt_Matrix * qt_Vertex;
            }"

        fragmentShader: "
            uniform sampler2D texture;
            uniform lowp float qt_Opacity;
            varying mediump vec2 texCoord;
            void main() {
                gl_FragColor = texture2D(texture, texCoord) * qt_Opacity;
            }"
    }

    // Mouse handling (body snipped for conciseness, see the repository).
    Connections {
        target: scene != null ? area : null
    }
}

And here is how to use it:

import QtQuick 2.4

Item {
    id: root

    Item {
        id: scene
        anchors.fill: parent
    }

    ZoomPan {
        id: zoomPan
        anchors.fill: parent
        scene: scene
        area: mouseArea
    }

    MouseArea {
        id: mouseArea
        anchors.fill: parent
        enabled: true
        hoverEnabled: true
        acceptedButtons: Qt.AllButtons
    }
}

Mouse handling has been snipped off the code for conciseness, but it can be studied directly in the code repository. One important point to notice is that for zooming to be a pleasant experience, it has to be implemented using a logarithmic scale as opposed to a linear scale. Each scale value at a zooming level is the previous one multiplied by the desired scale factor, so a scale factor of 2 and a zooming level n give a scale value of 2^n. Another point is that to scale an image up, the range of its texture coordinates must be scaled down; this explains why the actual scaling is inverted. So a scale value of 2^n would give an actual scaling of 2^-n. A bit counterintuitive at first…
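
To make the relationship concrete, here is a small sketch of that zoom math, assuming a hypothetical zoomFactor of 2 and an integer zoomLevel driven by mouse wheel events:

import QtQuick 2.4

Item {
    property real zoomFactor: 2.0
    property int zoomLevel: 0  // incremented/decremented on wheel events

    // Logarithmic scale: each level multiplies the scale value by the factor.
    readonly property real scaleValue: Math.pow(zoomFactor, zoomLevel)

    // Texture coordinates must shrink for the image to grow, hence the
    // inverted exponent used as the actual scaling passed to the shader.
    readonly property real textureScaling: Math.pow(zoomFactor, -zoomLevel)
}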

We’re done with the theory. Let’s have a look at the final result:


This technique helped me in the making of several visual elements, and I would be glad if other programmers find it useful too. Zooming and panning is a very common feature in image viewers, and the technique could be adapted for that use case too (with potentially some tweaks to support tiling of big pictures). Maybe that would be a good addition to the Ubuntu UI Toolkit; don’t hesitate to ask if you would like official support for it.

The source code is available on launchpad:

Read more
David Planella

Ubuntu Online Summit
The 15.04 release frenzy is over, but the next big event in the Ubuntu calendar is just around the corner. In about a week, from the 5th to the 7th of May, the next edition of the Ubuntu Online Summit is taking off. Three days of sessions for developers, designers, advocates, users and all members of our diverse community.

Alongside the developer-oriented discussions you’ll find presentations, workshops, lightning talks and much more. It’s a great opportunity for existing and new members to get together and contribute to the talks, watch a workshop to learn something new, or ask your questions to many of the rockstars who make Ubuntu.

While the schedule is being finalized, here’s an overview (and preview) of the content that you should expect in each one of the tracks:

  • App & scope development: the SDK and developer platform roadmaps, phone core apps planning, developer workshops
  • Cloud: Ubuntu Core on clouds, Juju, Cloud DevOps discussions, charm tutorials, the Charm, OpenStack
  • Community: governance discussions, community event planning, Q+As, how to get involved in Ubuntu
  • Convergence: the road to convergence, the Ubuntu desktop roadmap, requirements and use cases to bring the desktop and phone together
  • Core: snappy Ubuntu Core, snappy post-vivid plans, snappy demos and Q+As
  • Show & Tell: presentations, demos, lightning talks (read: things that break and explode) on a varied range of topics

Joining the summit is easy, you’ll just need to follow the instructions and register for free to the Ubuntu Online Summit >

UOS highlights: back to the desktop, snappy and the road to convergence

This is going to be perhaps one of the most important summits in recent times. After a successful launch of the phone, followed by the exciting announcement and delivery of snappy Ubuntu Core, Ubuntu is entering a new era. An era of lean, secure, minimal and modular systems that can run on the cloud, on Internet-enabled devices, on the desktop and virtually anywhere.

While the focus on development in the last few cycles has been on shaping up and implementing the phone, this doesn’t mean other key parts of the project have been left out. The phone has helped create the platform and tools that will ultimately bring all these projects together, into a converged code base and user experience. From desktop to phone, to the cloud, to things, and back to the desktop.

The Ubuntu 15.10 cycle begins, and so does this exciting new era. The Ubuntu Online Summit will be a unique opportunity to pave the road to convergence and discuss how the next generation of the Ubuntu desktop is built. So the desktop is back in the spotlight, and snappy will be taking the lead role in bringing Ubuntu for devices and the desktop together. Expect a week of interesting discussions and of thinking outside the box to get there!

Participating in the Ubuntu Online Summit

Does this whet your appetite? Come and join us at the Summit, learn more and contribute to shaping the future of Ubuntu! There are different ways of taking part in the online event via video hangouts:

  • Participate or watch sessions – everyone is welcome to participate and join a discussion to provide input or offer contributions. If you prefer to take a back seat, that’s fine too. You can either subscribe to sessions, watch them in your browser or directly join a live hangout. Just remember to register first and learn how to join a session.
  • Propose a session – do you want to take a more active role in contributing to Ubuntu? Do you have a topic you’d like to discuss, or an idea you’d like to implement? Then you’ll probably want to propose a session to make it happen. There is still a week for accepting proposals, so why don’t you go ahead and propose a session?

Looking forward to seeing you all at the Summit!


Read more
Zoltán Balogh

14.04 - 1.0 release

The 1.0 release of the UITK was built mostly for demonstrative purposes, but it works well to a certain extent; it is the LTS release after all. It is available from the Trusty archive (0.1.46+14.04.20140408.1-0ubuntu1) and from the SDK PPA (0.1.46+14.10.20140520-0ubuntu1~0trusty2). The “demonstrative purpose” in this context is a pretty serious thing. This release was the ultimate proof of concept that Qt (5.2 by then) and QML technology, with our design and components, provide a framework for a charmingly beautiful and blazingly fast user interface. Obviously there is no commercial touch device with this UITK release, but it is good for making a simple desktop application with the UX of a mobile app. If your desktop PC is running 14.04 LTS Ubuntu and you have installed the Ubuntu SDK, then the IDE is using this release of the UITK.

The available components and features are documented either online, or offline under the file:///usr/share/ubuntu-ui-toolkit/doc/html local directory if the ubuntu-ui-toolkit-doc package is installed.

14.10 - 1.1 release

It was the base of the first real Ubuntu phone. Most mission-critical components and toolkit features were shipped with this edition. The highlights of the goodies you can see in the Utopic edition of the UITK (version 1.1.1279+14.10.20141007-0ubuntu1):

  • Settings API

  • Ubuntu.Web

  • ComboButton

  • Header replaces bottom toolbar

  • PullToRefresh

  • Ubuntu.DownloadManager

  • Ubuntu.Connectivity

The focus of the UITK development was to complete the component set and achieve superb performance. It is important to note that these days you can find this exact version only on very few community-ported Ubuntu Touch devices, and even those early adaptations should be updated to 15.04. The most common place to meet this edition of the UITK is the 14.10 Ubuntu desktop. This UITK can indeed be used to build pretty nice looking desktop applications. The Ubuntu-specific UI extensions of the Qt Creator IDE are built on our very own UITK. So, the UITK has been ported and available for desktop app development, with some limitations, since 14.04.

14.09 - the RTM release

The development of the RTM (Ready To Market) branch of the UITK focused on bug fixes and final polishing of the components. Dozens of functional, visual and performance-related issues were tackled and closed in this release.

A few of the relevant changes in the RTM branch:

  • Internationalization related improvements

  • Polishing the haptics feedback of components

  • Fixes in the ActivityIndicator

  • UX improvements of the TextField/TextArea

  • Dialog component improvements

This extended 1.1 release of the UITK is what is shipped with the bq Aquaris E4.5 devices. This is pretty serious stuff. Providing the very building blocks for the user experience is a big responsibility. During the development of this release, one of the most significant changes happened behind the scenes: the release process of the UITK was renewed, and we have enforced very strict rules for accepting any changes.

To make sure that with the continuous development of the UITK we do not introduce functional problems and do not cause regressions, we not only run about 400 autopilot test cases on the UITK, but an automatic test script also validates all core and system apps with the release candidates. It means running thousands of automatic functional tests before each release.

15.04 - 1.2 release

After the 14.09 aka RTM release was found to be good and the bq devices started to leave the factory lines, the UITK development started to focus on two major areas. First of all, we brought back to the development trunk all the fixes and improvements landed on the RTM branch, and merged the whole RTM branch back to the main line. The second area was to open the 1.2 queue of the toolkit and release the new features:

  • ListItem

  • New UbuntuShape rendering properties

  • New Header

Releasing the 1.2 UITK makes the first big iteration of the toolkit development complete. In the last three cycles the Ubuntu application framework went through three minor Qt upgrades (5.2 - 5.3 - 5.4) and continuously adapted to the improving design and platform.

15.10 - 1.3 release

In the upcoming cycle the focus is on convergence. We have shipped a super cool UI Toolkit for the touch environment; now it is time to make it as complete and as fast a toolkit for other form factors and for devices with other capabilities. The emphasis here is on capability, not only form factor or device mode. The next release (1.3) of the UITK will adapt to the host environment according to its capabilities, like input capabilities, size and others.

The highlights of the upcoming features:

  • Resolution independence

  • Improve visual rendering (pixel perfectness at any device ratio)

  • Improve performance (CPU and GPU wise)

  • Convergence

    • Tooltips

    • Key navigation - Tab/Shift+Tab

    • Date and Time Pickers

    • Menus

      • Application and

      • context menus

  • Support Sub-theming

  • Support of ListItem expansion

  • Text input magnification on selection

  • Simplified Popovers

  • Text input context menu

  • Deprecate Dialer (Ubuntu.Components.Pickers)

  • Deprecate PopupBase (Ubuntu.Components.Popups)

  • Focused component highlight

  • Support for OSK to keep the focus component above the key rectangle

  • Integrate scope toolkit from Unity with the UI Toolkit

The 1.3 version of the UITK will be the first with the promise that application developers can create both fully functional desktop and phone applications. In practice it means that the runtime UITK will be the same as in the build environment.

16.04 - 2.0 release

Looking forward to our next LTS release, our ambition is to polish all the features together and tune the UI Toolkit for the next major release. This edition of the toolkit will serve app developers for a long time. The 2.0 will be “mission completed”. We expect a few features to move from our original 15.10 plans to 16.04:

  • Clean up deprecated components

  • Rename ThemeSettings to Theme

  • Toolbars for convergence

  • Modal Dialogs

  • Device mode (aka capability) detection

  • Complete scopes support

  • Backend for Alarm services

  • Separate service components from UI components

Read more
David Planella

Nearly two years ago, the Ubuntu Community Donations Program was created as an extension to the Ubuntu download donations page, where those individuals who download Ubuntu for free can choose to support the project financially with a voluntary contribution. In doing so, they can use a set of sliders to determine which parts of the project the amount they donate goes to (Ubuntu Desktop, Ubuntu for phone, Ubuntu for tablet, Ubuntu on public clouds, Cloud tools, Ubuntu Server with OpenStack, Community projects, Tip to Canonical).

While donations imply the trust from donors that Canonical is acting as a steward to manage their contributions, the feedback from the community back then was that the Community slider required a deeper level of attention in terms of management and transparency. With community being such an integral part of Ubuntu, and with the new opportunity to financially support new community projects, events or travel, it was just logical to ensure that the funds allocated to them were managed fairly and transparently, with public reporting every six months and a way for Ubuntu members to request funding.

Although the regular reports already provide a clear picture of where the money donated for community projects is spent, today I’d like to give an update on the bigger picture of the Community Donations Program and answer some questions community members have raised.

A successful two years

In a nutshell, we’re proud to say that the program continues to successfully achieve the goals it was set out for. Since its inception, it has made it possible to fund around 70,000 USD worth of community initiatives, conferences, travel and more. The money has always been allocated upon individual requests, the vast majority of which were accepted. Very few were declined, and when they were, we’ve always strived to provide good reasoning for the decision.

This process has given us the opportunity to support a diverse set of teams and projects across the wider Ubuntu family, including flavours, and to sponsor open source projects and conferences that have collaborated with Ubuntu over the years.

Program review and feedback

About two years into the Program, we felt a more thorough review was due: to assess how it has been working, to evaluate the community feedback and to decide if there are any adjustments required. Working with the Community Council on the review, we’ve also tried to address some questions from Ubuntu members that came in recently. Here is a summary of this review.

The feedback in general has been overwhelmingly positive. The Community Donations Program is not only seen as an initiative that hugely benefits the Ubuntu project, but the figures and allocations in the reports are also a testament to this fact.

Criticism is also important to take, and when it has come, we’ve addressed it individually and updated the public policy or FAQ accordingly. Recently, it has arrived in two areas: the uncertainty in some cases where the exact cost is not known in advance (e.g. fluctuating travel costs from the date of the request until approval and booking) and the delay in actioning some of the requests. In the first case, we’ve updated the FAQ to reflect the fact that there is some flexibility allowed in the process to work with a reasonable estimate. In the second, we’ve tried to explain that while some requests are easy to approve and are actioned in a matter of a few days (we review them all once a week), others take longer due to several different factors: back-and-forth communication to clarify aspects of the requests, the amount of pending requests, and in some cases, the complexity of arranging the logistics. In general, we feel that it’s not unreasonable to expect a request to be sent at least a month in advance of whatever is being planned with the funds. We’re also making it clear that requests should be filed in advance as opposed to retroactively, so that community members do not end up in a difficult position should a request not be granted.

One of the questions that came in was regarding the flavour and upstream donation sliders. Originally, there were 3 community-related sliders: 1) Community participation in Ubuntu development, 2) Better coordination with Debian and upstreams, 3) Better support for flavours like Kubuntu, Xubuntu, Lubuntu. At some point during the 14.04 release, sliders 2) and 3) were removed, leaving 1) as Community projects. Overall, this didn’t change the outcome of community allocations: since its beginning, the Community Donations Programme amounts have only come from the first slider, which is what the Canonical Community team manages. From there, money is always allocated upon request fairly, without making a difference, benefiting Ubuntu, its flavours and upstreams equally.

All that said, the lack of communication regarding the removal of the sliders was not intended, and it should have been communicated to the Community Team and the Community Council. It was a mistake for which we need to apologize. For any future changes to sliders that affect the community, we will make sure that the Community Council is included in communications as an important stakeholder in the process.

Questions were also raised about the reporting on community donations during the months in 2012/2013 between the donations page going live and the announcement of the Community Donations Program. As mentioned before, the Program was born out of the desire to provide a higher level of transparency for the funds assigned to community projects. Up until then (and in the same way as they do today for the rest of the donation sliders), donors were trusting Canonical to manage the allocations fairly. Public reports were made retroactively only where it made sense (i.e. to align with fiscal quarters), but not going back all the way to the time before the start of the Program.

All in all, with these small adjustments we’re proud to say we’ll continue to support community projects with donations in the same way we’ve been doing these last two years.

And most especially, we’d like to say a big ‘thank you’ to everyone who has kindly donated and to everyone who has used the funds to help shaping the future of Ubuntu. You rock!


Read more
Benjamin Zeller

Inner workings of the SDK

From time to time app developers ask how to manually build click packages from their QMake or CMake projects. To understand the answer to that question, it helps a lot to know how the SDK does things internally and which tools it uses.

First we have to know about the click command. It is one of the most important tools we are about to use, because it provides ways to:

  • create a build environment
  • maintain the build environment
  • execute commands in the build environment
  • build click packages
  • review click packages
  • query click packages

Issuing click --help will show a complete list of options. The click command is not only used on development machines but also on the device images, as it is also responsible for installing/removing click packages and for providing information about the frameworks a device has to offer.

Assuming that the project source already exists, probably created from a SDK template, and is ready to be packed up in ~/myproject, creating a click package requires the following steps:

  1. Create a build target for the device that should be targeted
    click chroot -a armhf -f ubuntu-sdk-15.04 create
  2. Run qmake/cmake on the project to create the Makefiles
    mkdir ~/myproject-build
    cd ~/myproject-build
    click chroot -a armhf -f ubuntu-sdk-15.04 run cmake ../myproject #for cmake
    click chroot -a armhf -f ubuntu-sdk-15.04 run qt5-qmake-arm-linux-gnueabihf ../myproject #for qmake
  3. Run make to compile the project and run custom build steps
    click chroot -a armhf -f ubuntu-sdk-15.04 run make
  4. Run make install to collect all required files in a deploy directory
    rm -rf /tmp/deploy-myproject #make sure the deploy dir is clean
    click chroot -a armhf -f ubuntu-sdk-15.04 run make DESTDIR=/tmp/deploy-myproject install #for cmake
    click chroot -a armhf -f ubuntu-sdk-15.04 run make INSTALL_ROOT=/tmp/deploy-myproject install #for qmake
  5. Run click build on the deploy directory
    click build /tmp/deploy-myproject

We will look into each step at a greater detail and explain the tools behind it starting with:

Creating a build chroot and what exactly is that:

When building applications for a different architecture than the development machine currently in use, for example an x86 host vs an armhf device, cross-build toolchains are required. However, toolchains are not easy to maintain, and it requires a good deal of effort to make them work correctly. So our decision was to use "build chroots" to ease the maintenance of those toolchains. A build chroot is in fact nothing other than the normal Ubuntu you are using on your host machine. It is probably a different version, but it still comes from the archive. That means we can make sure the toolchains, libraries and tools that are used to build click packages are well tested and receive the same updates as the ones on the device images.

To create a build chroot the following command is used:

click chroot -a armhf -f ubuntu-sdk-15.04 create

Grab a coffee while this is running; it will take quite some time. After the chroot has been created for the first time, it is possible to keep it up to date with:

click chroot -a armhf -f ubuntu-sdk-15.04 upgrade

But how exactly does this work? A chroot environment is another complete Ubuntu root filesystem put inside a directory. The "chroot" command makes it possible to treat exactly this directory as the "root directory" for a login shell. Commands running inside that environment cannot access the outer filesystem and do not know they are actually inside a virtualized Ubuntu installation. That makes sure your host file system cannot be tainted by anything that is done inside the chroot.

To make things a bit easier, the /home and /tmp directories are mounted into the chroot. That means those paths are the same inside and outside the chroot; no need to copy files around. But that also means projects can only be in /home by default. It is possible to change that, but that's not in the scope of this blog post (hint: check /etc/schroot/default/fstab).

Run qmake/cmake on the project to create the Makefiles

In order to compile the project, CMake or QMake needs to create a Makefile from the project description files. The SDK IDE always uses a build directory to keep the source clean. That is the recommended way of building projects.

Now that we have a chroot created, we need a way to actually execute commands inside the virtual environment. It is possible to log into the chroot or just run single commands. The click chroots have 2 different modes: production mode and maintenance mode.

Everything that is changed on the chroot filesystem in production mode will be reverted when the active session is closed, to make sure the chroot is always clean. The maintenance mode can be used to install build dependencies, but it's the job of the user to make sure those dependencies are available on the phone images as well. The rule of thumb is: if something is not installed in the chroot by default, it is probably not officially supported and might go away anytime.

click chroot -a armhf -f ubuntu-sdk-15.04 run   # production
click chroot -a armhf -f ubuntu-sdk-15.04 maint # maintenance

Running one of these commands without specifying which command should be executed inside the chroot will open a login shell inside the chroot environment. If multiple successive commands should be executed, it is faster to use a login shell, because the chroot is mounted/unmounted every time a session is opened/closed.

For QMake projects the IDE usually takes care of selecting the correct QMake binary; however, in manual mode the user has to call qt5-qmake-arm-linux-gnueabihf in armhf chroots instead of the plain qmake command. The reason for this is that qmake needs to be compiled in a special way for cross-build targets, and the "normal" qmake cannot be used.

Run make to compile the project and run custom build steps

This step does not need much explanation: it triggers the actual build of the project, and of course it needs to be executed inside the chroot again.

Run make install to collect all required files in a deploy directory

Now that the project build was successful, step 4 collects all the files required for the click package and installs them into a deploy directory. When building with the IDE, the directory is located in the current build dir and is named ".ubuntu-sdk-deploy".

It is a good place to check whether all files were put into the right place and whether the manifest file is correct.

In order for that step to work correctly, all files that should end up in the click package need to be part of the INSTALL targets. The app templates in the SDK should give a good hint of how this is done; see the sketch below.
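
For a CMake project, such install rules might look like the following minimal sketch; the file and directory names are illustrative placeholders that should match what your manifest.json declares:

# Collect the click metadata and QML sources relative to the install
# root, which becomes the deploy directory in step 4.
install(FILES manifest.json myproject.apparmor myproject.desktop
        DESTINATION .)
install(DIRECTORY qml DESTINATION .)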

The deploy directory now contains the directory structure of the final click package.

Run click build on the deploy directory

The last step is to build the actual click package. This command needs to be executed outside the chroots, simply because the click command is not installed inside them by default. What happens now is that all files inside /tmp/deploy-myproject are put into the click package and a click review is executed. The click review will tell whether the click package is valid and can be uploaded to the store.

If all went well, the newly created click package should show up in the directory where click was executed; it can now be uploaded to the store or installed on a device.

Read more