Canonical Voices

Zsombor Egri

In 2012 we started the Ubuntu UI Toolkit development with QML-only components, with all the logic provided in Javascript. This allowed us to deploy components quickly, to prototype fast, and to tweak behaviors and look-and-feel on the fly without having to rebuild or re-package the entire toolkit. For all its benefits, this approach revealed its downside: performance. Complex applications, which use many components, are affected the most, as both their startup and rendering times suffer from slow components.

Then came the theming, the grid units, and the i18n localization, which introduced the plugin. The theming engine was the only part implemented in C++, as we knew from the beginning that we needed to be fast at loading and especially at applying styles to components. The style loading was done in QML using loaders, which kept the flexibility for tweaking. After several attempts at optimizing the engine, we decided to refactor it, and we managed to come up with a theming engine a little more than twice as fast as the previous one. Although we started to gain speed on component initialization, components were still too slow for kinetic scrolling. List views were still laggy: delegate creation for the simplest ListItem component was still 60 times slower than for an Item. Therefore we decided to move components to C++ one by one, especially the ones on the critical path. StyledItem was one of the first, followed by a new ListItem component, which by now you are all familiar with. So it became crystal clear that, if we want you guys to be able to write full QML apps and still get decent performance, we must provide at least the core logic of the components in C++, and do the styling in QML. This approach was also confirmed by the Qt developers when they announced the start of the next generation of the Qt Quick Controls.

So let’s look at the biggest issues that made us press the reset button.

API

When developers pick up a toolkit, the first thing they encounter is the API. Are the component names self-explanatory, is the API easy to use and free of ambiguities, and so on. Many developers do not read the API docs; they just jump in and run the example code, copying the examples from the documentation without reading a line of it. Next, they start experimenting with the components, changing properties and adding functionality, and so they start shaping the ideas for their apps.

I think API-wise we are in pretty good shape. We tried to stay as close to the declarative QML world as possible, and to follow the practices imposed by the Qt Company. I know not everything is configurable in the components, and that is mostly due to the policy we started with: keep as much configuration as possible in the styling, so we can keep consistency between components across applications. But there are other ways to achieve consistency while keeping enough configurability that developers will be happy to use the API. Both sides have their benefits: a smaller API is less complex than one with plenty of configuration options, even if those are only color values, but on the other hand its visuals cannot be changed. Some of you may think the API is crap because we don’t provide enough customization, or access to certain elements of the component. We do feel and understand your pain, and we will try to overcome it and compensate you in the future.

Behavior

When developers start using a component, they expect the component to do what it is meant for. A Button is expected to be clickable, a text input to accept text-editing gestures, a list item to provide content layout functionality when used in views, and a header to display a title and some other functionality vital for the application. If a component can cooperate with another one placed beside it, without the developer doing anything, that is the cherry on the cake. But that’s where the problem starts: a component which must take its surroundings into account and adapt its behavior creates confusion. A much cleaner approach is to let the developer drive this rather than the components themselves, with the components providing connectors and enablers so these interactions can be achieved. Yes, application developers will have to do more, but now they will be in control.

Context Properties as Singletons

Context properties are nice when an application wants to expose a model or other logic to its QML UI layer. They are pretty simple to implement, but they produce unreadable code for anyone who reads the two worlds (QML and C++ or other non-QML code) separately. The problem gets even worse when these context properties represent singletons. QML has a notion of singletons, but those were not suitable for the functionality we needed for localization (i18n), theming and grid units. The quickest decision was to provide them as context properties: whenever the locale, the system theme or the screen’s grid unit changes during the application’s lifetime, they are automatically updated, and bindings using them are automatically re-evaluated. However, these context properties cannot be used in shared Javascript libraries. And our measurements proved that importing a module which contains and uses code-behind implementation Javascript libraries takes almost 3 times longer than importing one which uses shared libraries. In addition, now that convergence brings the multi-monitor feature to Ubuntu, each monitor can have a different grid unit size, which means a global grid-units context property singleton is not usable in an application which uses multiple windows. So we must get rid of these interpretations of singletons and provide proper ones, natively supported by QML.
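
To illustrate the direction, here is a minimal sketch of a natively supported QML singleton of the kind that could replace the units context property. The Units name and its members are hypothetical, not the toolkit’s actual API:

pragma Singleton
import QtQuick 2.4

QtObject {
    // Hypothetical per-screen grid unit size; a real implementation
    // would update this when the window moves to another monitor.
    property real gridUnit: 8
    function gu(x) { return x * gridUnit }
}

Registered in the module’s qmldir as “singleton Units 1.0 Units.qml”, such a singleton can be used in bindings and, unlike a context property, can also be imported from shared Javascript libraries.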

Complex Theming

Now this is one of the biggest problems. The theming went through a complete evolution: from CSS-like styling to fully QML-based declarative styling, and then to sub-theming, so each application can use multiple themes at the same time. Performance increased dramatically when we dropped the first version in favor of the declarative one, but it is still slower than a component which implements its visuals on top of a template that provides the logic (see the second generation of QtQuick Controls).

Performance

Oh, yes. All of the above contribute to the slow performance of the components, which results in bad performance in applications. Styling is still a bottleneck. We’ve ported some components from QML to C++ to gain speed in both loading and UI response time, but we still have components written entirely in QML and Javascript, and those are clearly performance eaters. And these monsters catch your eye, because they are used the most: AdaptivePageLayout turned out to be the most loved component due to its support for converged application development, and the text inputs (TextField and TextArea) are again components that take too long to instantiate. We have to make them performant, and the only solution is to implement them in C++. Of course, C++ is not the Holy Grail; one can make nasty things there too. But so far we’ve managed to get the components we’ve ported to C++ to behave really well, and they even brought performance gains to the components derived from them. There was a reason why BlackBerry made its toolkit in C++ and exposed it to QML...

The Plan

So we came up with a plan. And the plan includes you. The plan needs you to succeed; it won’t work without you.

First we thought we could introduce the new features and slowly turn all the components into performant ones. But then came DPR (device pixel ratio) support: although from Qt 5.7 onwards the DPR can be a floating point value, QWidget-based apps will still be broken, as those only support integer sizes. This can be handled behind the scenes, but apps with multiple windows must support different grid unit/DPR sizes when those windows are laid out on different screens. This means we must do something about the way we handle grid units, and that, unfortunately, cannot be done without an API break.

But then, if we break it, let’s do it properly! This leads us to really go for the second generation of the UI toolkit, which we had already been dreaming of for about a year. This means breaking backwards compatibility in some APIs. Whenever possible we will keep the interfaces compatible, but that may not apply to component inheritance.

API design

We will start sharing all API designs with you, so you can contribute! We don’t have a clear plan yet, but we could introduce a “labs” module where the API of each component can be tried out before it lands in the stable module. We must find a way to share the API documents with you so you can comment and request interface changes or additions. (So later you can blame yourself for the mistakes :) ) The policy will stay the same: an API, once released, cannot be revoked, only deprecated. By introducing the labs module, we could give you a few weeks or months to try an API out and provide fixes and comments. Of course, components which have already been designed will keep their API, but will be open for additional requests. And we will try to keep the API minimal, focused on the use cases we have.

Styling

When it comes to component implementation we will follow the template + UI layer design, so components will be implemented on top of templates. If your application requires a different layout, you will be free to implement it yourself on top of the template. We can therefore say that we will have two API layers: the template layer APIs and the UI layer APIs, the latter adding properties that customize the look and feel of the component itself without modifying its logic (i.e. colors, borders, transitions). Both layers will be covered by the same stability promise.
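
As a rough sketch of the template + UI layer split (the names here are illustrative, not the final API): the template carries the logic only, and the UI layer adds the visual knobs on top.

// ButtonTemplate.qml - the template layer: behavior, no visuals
import QtQuick 2.4

Item {
    id: template
    signal clicked()
    property alias pressed: mouseArea.pressed
    MouseArea {
        id: mouseArea
        anchors.fill: parent
        onClicked: template.clicked()
    }
}

// Button.qml - the UI layer: purely visual properties on top
ButtonTemplate {
    id: button
    property color baseColor: "#dd4814"
    Rectangle {
        anchors.fill: parent
        color: button.pressed ? Qt.darker(button.baseColor, 1.2) : button.baseColor
    }
}

An application that needs a different layout would build its own UI layer on top of ButtonTemplate instead of restyling the stock Button.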

In addition, theming will still be available, but it will contain nothing but the palette and the font of the theme. We don’t know yet how this will be made available to you, either through a component property or attached properties; we have to benchmark both solutions and see which one is more reliable. Both have their pros and cons; let’s see which one wins.

When Do We Start?

As soon as possible! First we need to open a repository and provide the skeleton for it, then move the former singletons so we have a clear API for them. Then we need to bring the components one by one from 1.x into the new base, and revisit each component’s API with you all. We will let you know when the trunk is available so you can start playing with it.

When Will It Be Available?

The journey will be a bit longer, as we must keep UI Toolkit 1.3 up to date and stable while providing features to 2.0 in parallel. The expectation is that by the end of October we should have a few components in the labs module so those can be tested. Components will appear in labs already written in C++; no more QML first and then a move to C++, the idea being that once a component’s API is seen to be stable enough, we can move it to the released package without extra effort. Also, as with all major version changes before it, this version will not be backwards compatible with, nor usable alongside, the 1.x versions, meaning that your QML application will not be able to import 1.x and 2.0 at the same time.

Shouldn’t We Take The Next Generation of QtQuick Controls as a Base?

That is a good point, and we’ve been considering that option too. However, the behavior of some of our components is so different that it may make more sense to follow our own path rather than take those as a base. But we promise we will consider it as an option. We had a discussion back in December, when we talked about the QtQuick Controls blending in with the UI Toolkit; see it here.

Final words

It will be a long journey, a tough one, but finally it will be properly open. Lots of IRC discussions, hangouts, videos, labs work… It’ll be fun! I cannot promise pizza or beer, but I promise it’ll be a hell of a good ride!

 

Read more
Didier Roche

A quick note reminding you to submit your feedback to the IoT Developer Survey 2016!


The survey is organized by the Eclipse IoT Working Group, the IEEE IoT Initiative and the AGILE-IoT H2020 Research Project. Your input will help in understanding the IoT community’s requirements for software and related tools, and in developing resources to more effectively help developers working on the Internet of Things.

This will also help drive our snappy Ubuntu Core IoT experience, in partnership with those great open source projects!

The deadline to participate is March 25, 2016.

Read more
Zoltán Balogh

SDK Planning for 16.10

On your mark!

We have a clear commitment for the upcoming cycle: we will make it faster! One might ask what exactly we want to make faster, and how much faster? From the SDK’s point of view, the most important targets for performance improvements are the applications and all the consumers of the UI Toolkit. So we will make applications start up faster, respond faster, and use less memory and less power. The starting point for this is to profile and refactor some of the components we know could use some boosting. We will migrate the following components from QML to C++:

  • AdaptivePageLayout

  • Page  

  • Picker

  • DateTimePicker

  • TextField

  • TextArea

In addition to improving the components we will check out the most critical core applications and consult with their developers on how to get the most out of the UITK. We all know how easy it is to create and develop apps in QML, but when it comes to performance, every millisecond matters.

Closing the feature gaps

The groundwork for convergence in the UI Toolkit was successfully put in place but it is not complete yet. We have a few more components to implement and hand over to application developers:

  • header subtitle  

  • keyboard control for the header

  • toolbar scrolling

  • exclusive group

  • radio buttons  

  • popup window

  • context menus  

  • new dialog component  

  • application menu

These components will land as part of Ubuntu.Components 1.3.

APIs and Frameworks

Frameworks and framework versions are not the easiest part of the application development story, to put it mildly. It is clear that we must have a framework definition that specifies the APIs (platform package, name, import version) supported on a particular device, and each released framework must have a public definition of the APIs it supports.
The most fundamental requirements for the API and framework control are:

  • Each platform module that provides application or platform development APIs must have an API definition and an API change tracker.

  • Each platform module that provides an API must have a version number bumping mechanism when it changes its API.

  • Each platform module that provides an API must have a blocking mechanism for API deprecation to prevent API breakage.

  • For application developers it must be trivial to learn what version of each QML module is available in a framework version and what QML import they can use.

  • For application developers there must be an API scanner that checks if the application is using only supported APIs; maybe that tool could help the developer figure out what minimum framework is required by the app.

In the following development cycles we are going to implement and release an API tracker and framework management system to address all these (and many more) related problems.

In short, we will have a simple and consistent framework management solution that will be almost transparent for application developers.

UITK v2.0

Even though v1.3 of the UI Toolkit still receives new APIs and fixes, we had better start working on the next major release. It is still way too early to talk about detailed plans, but we already know, for example, that multi-screen DPR support will land in 2.0. Other than that, this work is still in the brainstorming and planning phase. Stay tuned and expect a blog post about this topic from Zsombor. All in all, now is the time to step forward and join the discussion!

Development Tools and IDE

In the last few months we have changed a lot around the development tools and the IDE. Finally the IDE is decoupled from the system Qt packages, so we can offer a consistent developer experience regardless of which Ubuntu release the SDK is installed on. This compact and confined packaging gives us more freedom and flexibility, and finally we can stay as close to the upstream QtCreator and Qt as possible.
The next steps will be to improve the application build process and the app runtime testing. We are prototyping a new builder and runtime container based on LXD. It will be much faster, lighter and more reliable than the present schroot-based builder. As a bonus, we can move the app and scope runtime from the host desktop to the LXD container. This means that developers can see their apps and scopes in a native Unity8 shell in a container, without the overhead of a full system emulator.
In the following weeks we will be releasing the Pocket PC edition of the IDE and we will add more and more Snappy support to our tools.

Integration and releases

We will keep up the pace we set in the last cycle. The goal is to push out a new UI Toolkit release twice a month. Each release takes about a week to validate. Releasing a new Toolkit is a complex process: we need to make sure that all the visual changes are in line with the design guidelines, run thousands of functional tests (automated, thanks to autopilot) and check that the new Toolkit blends in with the whole system. Just recently we added a new feature to our CI: each merge request can be tested from a simple installable click package. So if somebody wants to follow UITK development, it can be done safely without messing up the system.

Keeping it all open

Even under all the pressure to deliver the UI Toolkit and the SDK we will not forget that what we do is not only open source software but we do it in an open way. It is obvious that the UI Toolkit and the IDE projects are all open, but we need to ensure that how we work is open, too. We will keep publishing technical blog posts and videocasts about what is going on in the SDK workshops. We welcome all contributions to our projects and we are going to participate in various events to hear developers and to share information.

 

Read more
David Callé

Last weekend for the Scopes Showdown!

It has been almost six weeks since the start of the Showdown, and we have been hugely impressed by the number of questions we have received; we hope we have been able to provide the right support for you to complete your entry.

Here is a kind reminder that the deadline to submit your entry is Monday, February 29th.

If you are still hesitant to enter, have a look at the following resources to get you started with a JavaScript scope:

For other languages (C++ and Go), you can browse our list of tutorials and guides.

Happy hacking!

Read more
Tim Peeters

PageHeader tutorial

The new header property

This is a tutorial on how to use the new PageHeader component. So far, we had one header per application, implemented in the MainView and configurable for each page using the Page.head property (which is an instance of PageHeadConfiguration). We deprecated that approach and added the Page.header property, which can be set to any Item, so that each page has its own header instance. When it is set, two things happen:

  1. The (deprecated) application header is disabled and any configuration that may be set using the old Page.head property is ignored,

  2. The Page.header item is parented to the page.

Because the header is parented to the page, inside the header you can refer to its parent, and other items inside the page can anchor to the header.

 

Example 1

import QtQuick 2.4
import Ubuntu.Components 1.3
MainView {
    width: units.gu(50)
    height: units.gu(30)
    Page {
        id: page
        header: Rectangle {
            color: UbuntuColors.orange
            width: parent.width
            height: units.gu(8)
            Label {
                anchors.centerIn: parent
                text: "Title"
                color: "white"
            }
        }
        Rectangle {
            anchors {
                left: parent.left
                right: parent.right
                bottom: parent.bottom
                top: page.header.bottom
            }
            color: UbuntuColors.blue
            border.width: units.gu(1)
            Label {
                anchors.centerIn: parent
                text: "Hello, world!"
                color: "white"
            }
        }
    }
}

Use PageHeader for an Ubuntu header

In order to get a header that looks like an Ubuntu header, use the PageHeader component. This component provides properties to set the title and actions in the header and shows them the way you are used to in Ubuntu apps. Below I will show how to customize the header styling, but first an example that uses the default visuals:

Example 2

import QtQuick 2.4
import Ubuntu.Components 1.3
MainView {
    width: units.gu(50)
    height: units.gu(20)
    Page {
        header: PageHeader {
            title: "Ubuntu header"
            leadingActionBar.actions: [
                Action {
                    iconName: "contact"
                    text: "Navigation 1"
                },
                Action {
                    iconName: "calendar"
                    text: "Navigation 2"
                }
            ]
            trailingActionBar.actions: [
                Action {
                    iconName: "settings"
                    text: "First"
                },
                Action {
                    iconName: "info"
                    text: "Second"
                },
                Action {
                    iconName: "search"
                    text: "Third"
                }
            ]
        }
    }
}

The leadingActionBar and trailingActionBar are instances of ActionBar, and can thus be configured the same way as any action bar. By default, the leading action bar has one slot, uses "navigation-menu" as its overflow icon, and takes navigationActions as the default value for its actions; navigationActions is a property of the PageHeader that a PageStack or AdaptivePageLayout sets in order to show a back button when needed. If leadingActionBar.actions is explicitly defined, as in the example above, the back button added by a PageStack or AdaptivePageLayout is ignored. The trailing action bar automatically updates its number of slots depending on the space available for showing actions: on a phone it will show 3 actions by default, on a tablet or desktop more. When actions.length > numberOfSlots for the leading or trailing action bar, an overflow button is automatically shown which reveals the remaining actions when tapped.

Automatic show/hide behavior

The examples above show a static header where the page contents are anchored to the bottom of the header. When the page contains a Flickable or ListView, it can be linked to the header so that the header automatically moves with the flickable:

Example 3

import QtQuick 2.4
import Ubuntu.Components 1.3
MainView {
    width: units.gu(50)
    height: units.gu(40)
    Page {
        header: PageHeader {
            title: "Ubuntu header"
            flickable: listView
            trailingActionBar.actions: [
                Action {
                    iconName: "info"
                    text: "Information"
                }
            ]
        }
        ListView {
            id: listView
            anchors.fill: parent
            model: 20
            delegate: ListItem {
                Label {
                    anchors.centerIn: parent
                    text: "Item " + index
                }
            }
        }
    }
}

When PageHeader.flickable is set, the header automatically scrolls with the flickable, and the topMargin of the Flickable or ListView is set to leave enough space for the header, so the flickable can fill the page. Besides using the flickable to scroll the header in and out of view, the PageHeader.exposed property can be set to show or hide the header. The PageHeader automatically becomes slimmer on a phone in landscape orientation, to make better use of the limited vertical screen space.

Extending the header

Some applications require more functionality in the header besides the actions in the leading and trailing action bars. For this we have the extension property, which can be any Item and is attached to the bottom of the header. For changing what is 'inside' the header there is the contents property, which replaces the default title label. The following example shows how to use the extension and contents properties to implement a search mode and an edit mode, which can be switched between by tapping the action buttons in the default header:

Example 4a

import QtQuick 2.4
import Ubuntu.Components 1.3
MainView {
    width: units.gu(50)
    height: units.gu(20)
    Page {
        id: page
        header: standardHeader
        Label {
            anchors {
                horizontalCenter: parent.horizontalCenter
                top: page.header.bottom
                topMargin: units.gu(5)
            }
            text: "Use the icons in the header."
            visible: standardHeader.visible
        }
        PageHeader {
            id: standardHeader
            visible: page.header === standardHeader
            title: "Default title"
            trailingActionBar.actions: [
                Action {
                    iconName: "search"
                    text: "Search"
                    onTriggered: page.header = searchHeader
                },
                Action {
                    iconName: "edit"
                    text: "Edit"
                    onTriggered: page.header = editHeader
                }
            ]
        }
        PageHeader {
            id: searchHeader
            visible: page.header === searchHeader
            leadingActionBar.actions: [
                Action {
                    iconName: "back"
                    text: "Back"
                    onTriggered: page.header = standardHeader
                }
            ]
            contents: TextField {
                anchors {
                    left: parent.left
                    right: parent.right
                    verticalCenter: parent.verticalCenter
                }
                placeholderText: "Search..."
            }
        }
        PageHeader {
            id: editHeader
            visible: page.header === editHeader
            property Component delegate: Component {
                AbstractButton {
                    id: button
                    action: modelData
                    width: label.width + units.gu(4)
                    height: parent.height
                    Rectangle {
                        color: UbuntuColors.slate
                        opacity: 0.1
                        anchors.fill: parent
                        visible: button.pressed
                    }
                    Label {
                        anchors.centerIn: parent
                        id: label
                        text: action.text
                        font.weight: text === "Confirm"
                                     ? Font.Normal
                                     : Font.Light
                    }
                }
            }
            leadingActionBar {
                anchors.leftMargin: 0
                actions: Action {
                    text: "Cancel"
                    iconName: "close"
                    onTriggered: page.header = standardHeader
                }
                delegate: editHeader.delegate
            }
            trailingActionBar {
                anchors.rightMargin: 0
                actions: Action {
                    text: "Confirm"
                    iconName: "tick"
                    onTriggered: page.header = standardHeader
                }
                delegate: editHeader.delegate
            }
            extension: Toolbar {
                anchors {
                    left: parent.left
                    right: parent.right
                    bottom: parent.bottom
                }
                trailingActionBar.actions: [
                    Action { iconName: "bookmark-new" },
                    Action { iconName: "add" },
                    Action { iconName: "edit-select-all" },
                    Action { iconName: "edit-copy" },
                    Action { iconName: "select" }
                ]
                leadingActionBar.actions: Action {
                    iconName: "delete"
                    text: "delete"
                    onTriggered: print("Delete action triggered")
                }
            }
        }
    }
}

Customize the looks

Now that we have covered the basic functionality of the page header, let's see how we can customize its visuals. That can be done by creating a theme for your app that overrides the PageHeaderStyle of the Ambiance theme, or by using StyleHints inside the PageHeader. To change the looks of the header for every page in your application, it is recommended to create a custom theme, but the example below shows how to use StyleHints to change the properties of PageHeaderStyle for a single PageHeader instance:

Example 5

import QtQuick 2.4
import Ubuntu.Components 1.3
MainView {
    width: units.gu(50)
    height: units.gu(20)
    Page {
        header: PageHeader {
            title: "Ubuntu header"
            StyleHints {
                foregroundColor: "white"
                backgroundColor: UbuntuColors.blue
                dividerColor: UbuntuColors.ash
                contentHeight: units.gu(7)
            }
            trailingActionBar {
                actions: [
                    Action {
                        iconName: "info"
                        text: "Information"
                    },
                    Action {
                        iconName: "settings"
                        text: "Settings"
                    }
                ]
            }
        }
    }
}

If the header needs to be customized even further, use the contents property, as demonstrated above for the search mode, or use any Item for Page.header, as shown in the first example. For a fully custom header that still shows and hides automatically when the user scrolls the page, use the Header component, which is the parent component of PageHeader, and set its flickable property.
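
Here is a minimal sketch of such a fully custom header; the visuals are arbitrary, only the flickable property comes from the Header API described above:

import QtQuick 2.4
import Ubuntu.Components 1.3
MainView {
    width: units.gu(50)
    height: units.gu(40)
    Page {
        id: page
        header: Header {
            width: parent.width
            height: units.gu(6)
            flickable: listView // scrolls in and out with the list
            Rectangle {
                anchors.fill: parent
                color: UbuntuColors.blue
                Label {
                    anchors.centerIn: parent
                    text: "Fully custom header"
                    color: "white"
                }
            }
        }
        ListView {
            id: listView
            anchors.fill: parent
            model: 20
            delegate: ListItem {
                Label {
                    anchors.centerIn: parent
                    text: "Item " + index
                }
            }
        }
    }
}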

That’s all. I am looking forward to seeing what you will do with the new and often-requested flexibility of the header in the Ubuntu UI Toolkit.

Read more
Robin Winslow

There has been a growing movement to get all websites to use SSL connections where possible. Nowadays, Google even uses it as a criterion for ranking websites.

I've written before about how to host an HTTPS website for free with StartSSL and OpenShift. However, StartSSL is very hard to use and provides very basic certificates, and setting up a website on OpenShift is a fairly technical undertaking.

There now exists a much simpler way to set up an HTTPS website with CloudFlare and GitHub Pages. This only works for static sites.

If your site is more complicated and needs a database or dynamic functionality, or you need an SHA-1 fallback certificate (explained below) then look at my other post about the OpenShift solution. However, if a static site works for you, read on.

GitHub Pages

As most developers will be aware by now, GitHub offer a fantastic free static website hosting solution in GitHub Pages.

All you have to do is put your HTML files in the gh-pages branch of one of your repositories and they will be served as a website at {username}.github.io/{project-name}.
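
If you have never created that branch, it can be started as an empty (orphan) branch, roughly like this:

$ git checkout --orphan gh-pages
$ git rm -rf .
$ echo '<h1>Hello World</h1>' > index.html
$ git add index.html
$ git commit -m 'Add GitHub Pages site'
$ git push origin gh-pages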

[Screenshot: GitHub Pages minimal files]

And all files are passed through the Jekyll parser first, so if you want to split up your HTML into templates you can. And if you don't want to craft your site by hand, you can use the Automatic Page Generator.

[Screenshot: Automatic Page Generator themes]

Websites on github.io also support HTTPS, so you can serve your site up at https://{username}.github.io/{project-name} if you want.

[Screenshot: mytestwebsite on GitHub Pages]

GitHub Pages also support custom domains (still for free). Just add a CNAME file to the repository with your domain name in it - e.g. mytestwebsite.robinwinslow.uk - and then go and set up the DNS CNAME to point to {username}.github.io.

[Screenshot: mytestwebsite GitHub Pages files]

The only thing you can't do directly with GitHub Pages is offer HTTPS on your custom domain - e.g. https://mytestwebsite.robinwinslow.uk. This is where CloudFlare comes in.

CloudFlare

CloudFlare offer a quite impressive free DNS and CDN service. It includes some remarkable offerings, the first three of which are especially helpful for our current HTTPS mission:

The most important downside to CloudFlare's free tier SSL is that it doesn't include the fall-back to legacy SHA-1 for older browsers. This means that the most out-of-date (and therefore probably the poorest) 1.5% of global citizens won't be able to access your site without upgrading their browser. If this is important to you, either find a different HTTPS solution or upgrade to a paid CloudFlare account.

Setting up HTTPS

Because CloudFlare are a CDN and a DNS host, they can do the HTTPS negotiation for you. They've taken advantage of this to provide you with a free HTTPS certificate to encrypt communication between your users and their cached site.

First, simply set up your DNS with CloudFlare to point to {username}.github.io, and allow CloudFlare to cache the site.

[Screenshot: mytestwebsite CloudFlare DNS setup]

The connection between CloudFlare and your host doesn't have to be encrypted, but I would certainly suggest that it still should be. Crucially for us, this encrypted connection doesn't actually need a valid HTTPS certificate. To enable this, we should select the "Full" (rather than "Flexible" or "Strict") option.

[Screenshot: CloudFlare full SSL encryption setting]

Et voilà! You now have an encrypted custom domain in front of GitHub Pages completely for free!

[Screenshot: mytestwebsite with a secure domain]

Ensuring all visitors use HTTPS

To make our site properly secure, we need to ensure all users are sent to the HTTPS site (https://mytestwebsite.robinwinslow.uk) instead of the HTTP one (http://mytestwebsite.robinwinslow.uk).

Setting up a page rule

The first step to get visitors to use HTTPS is to send a 301 redirect from http://mytestwebsite.robinwinslow.uk to https://mytestwebsite.robinwinslow.uk.

Although this is not supported with GitHub Pages, it can be achieved with CloudFlare page rules.

Just add a page rule for http://*{your-domain.com}/* (e.g. http://*robinwinslow.uk/*) and turn on "Always use HTTPS":

[Screenshot: CloudFlare "Always use HTTPS" page rule]

Now we can check that our domain is redirecting users to HTTPS by inspecting the headers:

$ curl -I mytestwebsite.robinwinslow.uk
HTTP/1.1 301 Moved Permanently
...
Location: https://mytestwebsite.robinwinslow.uk/

HTTP Strict Transport Security (HSTS)

To protect our users from man-in-the-middle attacks, we should also turn on HSTS with CloudFlare (still for free). Note that this can cause problems if you're ever planning on removing HTTPS from your site.

If you're using a subdomain (e.g. mytestwebsite.robinwinslow.uk), remember to enable "Apply HSTS policy to subdomains".

[Screenshot: CloudFlare HSTS setting]

This will tell modern browsers to always use the HTTPS protocol for this domain.

$ curl -I https://mytestwebsite.robinwinslow.uk
HTTP/1.1 200 OK
...
Strict-Transport-Security: max-age=15552000; includeSubDomains; preload
X-Content-Type-Options: nosniff

It can take several weeks for your domain to make it into the Chromium HSTS preload list. You can check if it's in there, or add it again, by visiting chrome://net-internals/#hsts in a Chrome or Chromium browser and looking for the static_sts_domain setting.

That's it!

You now have an incredibly quick and easy way to put a fully secure website online in minutes, totally for free! (Apart from the domain name).

(Also posted on my personal blog - which uses the free CloudFlare plan.)

Read more
Robin Winslow

I often find myself wanting to play around with a tiny Python web application using only native Python, without installing any extra modules - the Python developer's equivalent of creating an index.html and opening it in the browser just to play around with markup.

For example, today I found myself wanting to inspect how the Google API Client Library for Python handles requests, and a simple application server was all I needed.

In these situations, the following minimal WSGI application, using the built-in wsgiref library, is just the ticket:

from wsgiref.simple_server import make_server

def application(env, start_response):
    """
    A basic WSGI application
    """

    http_status = '200 OK'
    response_headers = [('Content-Type', 'text/html')]
    response_text = b"Hello World"  # WSGI response bodies must be bytes under Python 3

    start_response(http_status, response_headers)
    return [response_text]

if __name__ == "__main__":
    make_server('', 8000, application).serve_forever()

Put this in a file - e.g. wsgi.py - and run it with:
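
$ python3 wsgi.py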

(I've also saved this as a Gist).

This provides you with a very raw way of parsing HTTP requests. All the HTTP variables come in as items in the env dictionary:

def application(env, start_response):
    # To get the requested path
    # (the /index.html in http://example.com/index.html)
    path = env['PATH_INFO']
    
    # To get any query parameters
    # (the foo=bar in http://example.com/index.html?foo=bar)
    qs = env['QUERY_STRING']

What I often do from here is use ipdb to inspect incoming requests, or directly manipulate the response headers or content.

Alternatively, if you're looking for something slightly more full-featured (but still very lightweight) try Flask.

(Also posted over on robinwinslow.uk).

Read more
Zsombor Egri

UI Toolkit for OTA9

Hello folks, it’s been a while since the last update from our busy toolkit ants. As OTA9 came out recently, it is time for a refresher from our side, to show you the latest and greatest cocktail of features our barmen have prepared. Beside the bugfixes, here is a list of the big changes we’ve introduced in OTA9. Enjoy!

PageHeader

One of the most awaited components is the PageHeader. It is now possible to have a detached header component which can be used in a Page, a Rectangle, an Item, wherever you wish. It is built on top of a plain Header base component, which does not have any layout but handles the default behavior: showing and hiding the header, and auto-hiding when an attached Flickable is moved. Part of that API was introduced in OTA8, but because it wasn’t yet polished enough, we decided not to announce it then and to provide more distilled functionality now.

The PageHeader then adds the navigation and the trailing actions through the - hopefully - well known ActionBar component.

Toolbar

Yes, it’s back. Voldemort is back! But this time as a detached component :) The API is pretty similar to PageHeader’s (it contains a leading and a trailing ActionBar), and you can place it wherever you wish. The only restriction so far is that its layout supports only horizontal orientation.

Facelifted Scrollbar

Yes, we finally got someone on loan to help us give the Scrollbar a nice facelift. The design follows the same principles we have for the upcoming 16.04 desktop, with the scroll handler residing inside the bar, and two pointers to drive page up/down scrolling.

This guy also convinced us that we need a ScrollView, like the one in QtQuick Controls v1, so we can handle “buddy” scrollbars: the situation when horizontal and vertical scrollbars are needed at the same time and their overlap must be dealt with. So, we have that one too :) And let's name the barman: Andrea Bernabei aka faenil is the one!

The unified BottomEdge experience

We finally have a complete design pattern for the bottom edge behavior, so it was about time to build a component around the pattern. It can be placed within any component, and its content can be staged, meaning it can be changed while the content is dragged. For now the content is always loaded asynchronously; we will add support for forcing synchronous loading in an upcoming release.
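
In its simplest form, usage looks something like the sketch below; the property names (hint, contentComponent) are how I recall the final API, so check the component documentation for the authoritative form:

import QtQuick 2.4
import Ubuntu.Components 1.3
Page {
    id: page
    header: PageHeader { title: "Bottom edge" }
    BottomEdge {
        id: bottomEdge
        height: page.height
        hint.text: "Swipe up"
        // the content is loaded asynchronously while the user drags
        contentComponent: Rectangle {
            width: bottomEdge.width
            height: bottomEdge.height
            color: UbuntuColors.orange
        }
    }
}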

Focus handling in CheckBox, Switch, Button and ActionBar

Starting now, pressing Tab and Shift+Tab on a keyboard will show a focus ring on components that support it. CheckBox, Switch, Button and ActionBar have this right now, others will follow soon.

Action mnemonics

As we are heading towards the implementation of contextual menus, we are preparing a few prerequisite features for the menus. One of these is mnemonic handling in Action.

So far there was only one way to define shortcuts for an Action: through the shortcut property. Now a shortcut can also be defined by marking the mnemonic in the text property of the Action with the ‘&’ character. The marked character is converted into a shortcut and, if there is a hardware keyboard attached, the mnemonic is underlined.
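
A quick sketch of what that looks like (the action itself is made up for illustration):

Action {
    // '&S' marks the mnemonic, typically converted to an Alt+S shortcut
    text: "&Save"
    onTriggered: print("save triggered")
}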

Read more
David Callé

Today we announce the launch of our second Ubuntu Scopes Showdown! We are excited to bring you yet another engaging developer competition, where the Ubuntu app developer community brings innovative and interesting new experiences for Ubuntu on mobile devices.

Scopes in Javascript and Go were introduced recently and are the hot topic of this competition!

Contestants will have six weeks to build and publish their Unity8 scopes to the store using the Ubuntu SDK and Scopes API (JavaScript, Go or C++), starting Monday January 18th.

A great number of exciting prizes are up for grabs: a System76 Meerkat computer, BQ E5 Ubuntu phones, Steam Controllers, Steam Link, Raspberry Pi 2 and convergence packs for Ubuntu phones!

Find out more details on how to enter the competition. Good luck and get developing! We look forward to seeing your scopes on our Ubuntu homescreens!

Read more
David Planella

Ubuntu is about people

Ubuntu has been around for just over a decade. That’s a long time for a project built around a field that evolves at such a rapid pace as computing. And not just any computing: software made for (and by) human beings, who have also inevitably grown and evolved with Ubuntu.

Over the years, Ubuntu has changed and has led change to keep thriving in such a competitive space. The first years were particularly exciting: there was so much to do, countless possibilities and plenty of opportunities to contribute.

Everyone who has been around for a while has fond memories of the Ubuntu Developer Summit, UDS for short: an in-person event run every six months to plan the next version of the OS. Representatives of different areas of the community came together every half year, somewhere in the US or Europe, to discuss, design and lay out the next cycle, both in terms of community and technology.

It was in this setting where Ubuntu governance and leadership were discussed, where the decisions on which default apps to include were made, where the switch to Unity’s new UX happened, and much more. It was a particularly intense event, as discussions often continued into the hallways and sometimes up to the bar late at night.

In a traditionally distributed community, where discussions and planning happen online and across timezones, getting physically together in one place helped us more effectively resolve complex issues, bring new ideas, and often agree to disagree in a respectful environment.

[Photo: Ubuntu Catalan team party]

This makes Ubuntu special

Change takes courage; it takes effort to think outside the box and go all the way through, and it is not always popular. I personally believe, though, that without disruptive changes we wouldn’t be where we are today: millions of devices shipped with Ubuntu pre-installed, leadership in the cloud space, Ubuntu phones shipped worldwide, the convergence story, Ubuntu on drones, IoT… and a strong, welcoming and thriving community.

At some point, UDS morphed into UOS, an online-only event which, despite its own merits and success, admittedly lacks the more personal component. This is where we are now, and this is not a write-up to hark back to the good old days, or to claim that all the decisions we’ve made were optimal (acknowledging those led by Canonical).

Ubuntu has evolved; we’ve solved many of the technological issues we were facing in the early days, and in many areas Ubuntu as a platform “just works”. Where in the past we saw interest in contributing to the plumbing of the OS, today we see a trend where communities emerge to build on top of the platform.

[Image: Ubuntu convergence: the full Ubuntu computer experience, in your pocket]

Yet Ubuntu is just as exciting as it was in those days. Think about carrying your computer running Ubuntu in your pocket and connecting it to your monitor at home for the full experience; think about a fresh and vibrant app developer community; think about an open source OS powering the next generation of connected devices and drones. The areas of opportunity to get involved are more diverse than they have ever been.

And while we have adapted to technological and social change in the project over the years, what hasn’t changed is one of the fundamental values of Ubuntu: its people.

To me personally, when I put aside open source and the exciting technical challenges, I am proud to be part of this community because it’s open and welcoming, it’s driven by collaboration, I keep meeting and learning from remarkable individuals, I’ve made friendships that have lasted years… and I could go on forever. We are essentially people who share a mission: that of bringing access to computing to everyone, via Free Software and open collaboration.

And while over the years we have learnt to work productively in a remote environment, the need to socialize is still there, and as important as ever to reaffirm the bonds that keep us together.

Enter UbuCons.

The rise of the UbuCons

UbuCons are in-person conferences around the world, fully driven by teams of volunteers who are passionate about Ubuntu and about community. They are also a remarkable achievement, showing an exceptional commitment and organizational effort from Ubuntu advocates to make them happen.

Unlike other big Ubuntu events such as release parties (celebrating new releases every six months), UbuCons generally happen once a year. They vary in size, from tens to hundreds to thousands of attendees, and include talks by Ubuntu community members and cross-collaboration with other open source communities. Most importantly, they are always events to remember.

[Image: UbuCons across the globe]

A network of UbuCons

A few months back, at the Ubuntu Community Team we started thinking about how we could bring the community together in a way similar to the big central event we used to have, but also in a way that was sustainable and community-driven.

The existing network of UbuCons came as the natural vehicle for this, and since then we’ve been working closely with UbuCon organizers to take UbuCons up a notch. It was from this teamwork that initiatives such as the UbuContest leading up to UbuCon DE in Berlin were made possible, along with more support for UbuCons worldwide in general: speakers and community donations to cover some of the organizational costs, for instance, or most recently the UbuCon site.

It has been particularly rewarding for us to have played even a small part in this; the full credit goes to the international teams of UbuCon organizers. Today, six UbuCons are running worldwide, with plans for more.

And enter the Summit

[Image: Community power]

But we were not content yet. With each UbuCon covering a particular geographical area, we still felt a bigger, more centralized event was needed for the community to rally around.

The idea of expanding to a bigger summit had already been brainstormed with members of the Ubuntu California LoCo in the months leading up to the last UbuCon @ SCALE in LA. Building on the initial concept, the vision for the Summit was penciled in at the Community Leadership Summit (CLS) 2015, together with representatives from the Ubuntu Community Council.

An UbuCon Summit is a super-UbuCon, if you will: with some of the most influential members of the wider Ubuntu community, with first-class talks, and with a space for discussions to help shape the future of particular areas of Ubuntu. It’s the evolution of an UbuCon.

[Photo: The usual suspects planning the next UbuCon Europe]

As a side note, I’m particularly happy to see that the US Summit organizers have already set the wheels in motion for another summit in Europe next year. A couple of months ago I had the privilege to take part in one of the most reinvigorating online sessions I’ve attended in recent times, where a highly enthusiastic and highly capable team of organizers started laying out the plans for UbuCon Europe in Germany next year! But back to the topic…

Today, the first UbuCon Summit in the US is brought to you by a passionate team of organizers from the Ubuntu California LoCo, the Ubuntu Community Team at Canonical and SCALE, who hope you enjoy it and contribute to the event as much as we plan to :-)

Jono Bacon, who we have to thank for participating in and facilitating the initial CLS discussions, wrote an excellent blog post on why you should go to UbuCon in LA in January, which I highly recommend you read.

In a nutshell, here’s what to expect at the UbuCon Summit:
– A two-day, two-track conference
– User and developer talks by the best experts in the Ubuntu community
– An environment to propose topics, participate and influence Ubuntu
– Social events to network and get together with those who make Ubuntu
– 100% free registration, although we encourage participants to also consider registering for the full 4 days of SCALE 14x, which hosts the UbuCon

I’m really looking forward to meeting everyone there, to seeing old and new faces and getting together to keep the big Ubuntu wheels turning.

Read more
Benjamin Zeller

In the last couple of weeks we had to completely rework the packaging of the SDK tools and jump through hoops to bring the same experience to everyone, regardless of whether they are on the LTS or on the development version of Ubuntu. It was not easy, but we are finally ready to hand this beauty over to developers.

The two new packages are called “ubuntu-sdk-ide” and “ubuntu-sdk-dev” (applause now please).

The official way to get the Ubuntu SDK installed is from now on by using the Ubuntu SDK Team release PPA:

https://launchpad.net/~ubuntu-sdk-team/+archive/ubuntu/ppa

Releasing from the archive with this new way of packaging is sadly not possible yet: in Debian and Ubuntu, Qt libraries are installed into a standard location that does not allow multiple minor versions to be installed next to each other. But since both the new QtCreator and the Ubuntu UI Toolkit require a more recent version of Qt than the one the last LTS has to offer, we had to improvise and ship our own Qt versions. Unfortunately, that also blocks us from using the archive as a release path.

If you have the old SDK installed, the default QtCreator from the archive will be replaced with a more recent version. However, apt refuses to automatically remove the packages from the archive, so that needs to be done manually, preferably before the upgrade:

sudo apt-get remove qtcreator qtcreator-plugin*

The next step is to add the PPA and install the package.

sudo add-apt-repository ppa:ubuntu-sdk-team/ppa \
    && sudo apt update \
    && sudo apt dist-upgrade \
    && sudo apt install ubuntu-sdk

That was easy, wasn’t it :).

Starting the SDK IDE works just as before: run qtcreator or ubuntu-sdk directly, or launch it from the Dash. We tried not to break old habits and just reused the old commands.

However, there is something completely new: an automatically registered kit called the “Ubuntu SDK Desktop Kit”. That kit consists of the most recent UITK and the Qt used on the phone images, which means it offers a way to develop and run apps easily, even on an LTS Ubuntu release. Awesome, isn’t it Stuart?

The old qtcreator-plugin-ubuntu package is going to be deprecated and will most likely be removed in one of the next Ubuntu versions. Please make sure to migrate to the new release path to always get the most recent versions.

Read more

As a follow-up to our previous post A Fast Thumbnailer for Ubuntu, we have published a new tutorial to help you make the most of this new SDK feature in your apps.

You will learn how to generate on-demand thumbnails for pictures, video and audio files by simply importing the module in your QML code and slightly tweaking your use of Image components.
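
In short, it boils down to an extra import and an image provider URL; here is a sketch (treat the module version and the file path as placeholders, and see the tutorial for the authoritative form):

import QtQuick 2.4
import Ubuntu.Components 1.3
import Ubuntu.Thumbnailer 0.1 // registers the image://thumbnailer/ provider

Image {
    // any local media file; the thumbnail is generated on demand
    source: "image://thumbnailer/" + "/home/user/Videos/clip.mp4"
    sourceSize.width: units.gu(20)
    sourceSize.height: units.gu(20)
}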

Read the tutorial ›

Read more
Thibaut Rouffineau

The Eclipse Foundation has become a new home for a number of IoT projects. For newcomers to the IoT world, it is always hard to see the forest for the trees among the many IoT-related Eclipse projects. So here is a first blog post to get you started with IoT development using Eclipse technology.

The place to start with IoT development is MQTT (Message Queuing Telemetry Transport). MQTT is a messaging protocol used to send information between your things and the cloud. It is a bit like the REST API of the IoT world: standardised and supported by most clients, servers and IoT Backend As A Service (BaaS) vendors (AWS IoT, IBM Bluemix, Relayr, Evrything to name a few).

If you’re not familiar with MQTT here is a quick rundown of how it works:

  • MQTT was created for efficient and lightweight message exchanges between Things (embedded devices / sensors).

  • An MQTT client is typically running on the embedded device and sends messages to an MQTT broker located on a server.

  • MQTT messages are made of two fields: a topic and a message.

  • MQTT clients can send (publish, in MQTT lingo) messages on a specific topic. Typically, a light in my kitchen would send a message of this type to indicate it is on: topic="Thibaut/Light/Kitchen/Above_sink/pub", message="on".

  • MQTT clients can listen (subscribe, in MQTT lingo) to messages on a specific topic. Typically, a light in my kitchen would await the instruction to be turned off by subscribing to the topic "Thibaut/Light/Kitchen/Above_sink/sub" and waiting for the message "turn_off".

  • MQTT brokers listen to incoming messages and retransmit the messages to clients subscribed to a specific topic. In this way it resembles a multicast network.

  • Most MQTT brokers run in the cloud, but increasingly MQTT brokers can also be found on IoT gateways, in order to do message filtering and create local rules for emergency or privacy reasons. For example, a typical local rule in my house would be: if the presence sensor in the kitchen reports that no one is there, send a message to the light to switch it off. The rule would look like: if we receive a message with topic="Thibaut/presence_sensor/Kitchen/pub" and message="No presence", then send a message on topic="Thibaut/Light/Kitchen/Above_sink/sub" with message="turn_off".

  • BaaS vendors would typically offer a simple rules engine sitting on top of the MQTT broker, even though most developers would probably build their rules within their code. Your choice!

  • To get started, Eclipse provides multiple MQTT clients under the Paho project.

  • To get started with your own broker Eclipse provides an MQTT broker under the Mosquitto project

  • Communication between MQTT clients and the broker supports different levels of authentication, from none, through username/password, to public/private keys.

  • When using a public MQTT broker (like the Eclipse sandbox), your messages will be visible to anyone who subscribes to your topics, so if you are going to do anything confidential, make sure you have your own MQTT broker (either through a BaaS or by building your own on a server).

That’s all there is to know about MQTT! As you can see, it’s both simple and powerful, which is why it’s been so successful and why so many vendors have implemented it to get everyone started with IoT.
And now it’s your time to get started! To help out, here’s a quick example on Github that shows how to get the Paho Python MQTT client running on Ubuntu Core and talking to the Eclipse Foundation MQTT sandbox server. Have a play with it and tell us what you’ve created!
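
If you are curious what that looks like in code, below is a minimal sketch of the kitchen-light example in C++, written against the client API of libmosquitto (the client library that ships with the Mosquitto project mentioned above; the Paho clients work just as well). The broker address is an assumption on my part — it was the Eclipse sandbox address at the time of writing — so double-check it before running this.

#include <mosquitto.h>
#include <cstdio>
#include <cstring>

// Print every instruction that arrives on the subscribed topic.
static void on_message(struct mosquitto* m, void* userdata,
                       const struct mosquitto_message* msg)
{
    printf("topic=%s message=%.*s\n", msg->topic, msg->payloadlen,
           (const char*)msg->payload);
}

int main()
{
    mosquitto_lib_init();
    struct mosquitto* m = mosquitto_new("thibaut-kitchen-light", true, nullptr);
    mosquitto_message_callback_set(m, on_message);
    mosquitto_connect(m, "iot.eclipse.org", 1883, 60);  // assumed sandbox address

    // Publish: tell the world the light is on.
    const char* state = "on";
    mosquitto_publish(m, nullptr, "Thibaut/Light/Kitchen/Above_sink/pub",
                      (int)strlen(state), state, 0, false);

    // Subscribe: wait for a "turn_off" instruction.
    mosquitto_subscribe(m, nullptr, "Thibaut/Light/Kitchen/Above_sink/sub", 0);
    mosquitto_loop_forever(m, -1, 1);  // blocks, dispatching incoming messages

    mosquitto_destroy(m);
    mosquitto_lib_cleanup();
    return 0;
}

Build it with g++ light.cpp -lmosquitto after installing the libmosquitto development package.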

Read more
David Planella

Ubuntu Online Summit
Next week, from Tuesday 3rd to Thursday 5th of November, a new edition of the Ubuntu Online Summit is taking place.

Three days of free and live content all around Ubuntu and Open Source: discussions, tutorials, demos, presentations and Q+As for anyone to get in touch with the latest news and technologies, and get started contributing to Ubuntu.

The tracks

As in previous editions, the sessions run along multiple tracks that group related topics as a theme:

  • App & scope development: the SDK and developer platform roadmaps, phone core apps planning, developer workshops
  • Cloud: Ubuntu Core on clouds, Juju, Cloud DevOps discussions, charm tutorials, the Charm, OpenStack
  • Community: governance discussions, community event planning, Q+As, how to get involved in Ubuntu
  • Convergence: the road to convergence, the Ubuntu desktop roadmap, requirements and use cases to bring the desktop and phone together
  • Core: snappy Ubuntu Core, snappy post-vivid plans, snappy demos and Q+As
  • Show & Tell: presentations, demos, lightning talks (read: things that break and explode) on a varied range of topics

The highlights

Here are some of my personal handpicks on sessions not to miss:

  • Opening keynote: Mark Shuttleworth, Canonical and Ubuntu founder will be opening the Online Summit with his keynote, on Tuesday 3rd Nov, 14:00 UTC
  • Ask the CEO: Jane Silber, Canonical’s CEO will be talking with the audience and answering questions from the community on her Q+A session, on Wednesday 4th Nov, 17:00 UTC
  • Snappy Clinic: join the snappy team on an interactive session about bringing robotics to Ubuntu – porting ROS apps to snappy Ubuntu Core, on Tuesday 3rd Nov, 18:00 UTC
  • JavaScript scopes hands-on: creating Ubuntu phone scopes is now easier than ever with JavaScript; learn all about it with resident scopes expert Marcus Tomlinson on Thursday 5th Nov, 15:00 UTC
  • An introduction to LXD: Stéphane Graber will be demoing LXD, the container hypervisor, and discussing features and upcoming plans on Thursday 5th Nov, 16:00 UTC
  • UbuCon Europe planning: a community team around the Ubuntu German LoCo will be getting together to plan the next in-person UbuCon Summit in Europe next year on Wednesday 4th Nov, 18:00 UTC

Check out the full schedule for more! >

Participating

Joining the summit is easy. Simply remember to:

Once you’ve done that, there are different ways of taking part in the online event via video hangouts and IRC:

  • Participate or watch sessions – everyone is welcome to participate and join a discussion to provide input or offer contribution. If you prefer to take a rear seat, that’s fine too. You can either subscribe to sessions, watch them on your browser or directly join a live hangout.
  • Propose a session – do you want to take a more active role in contributing to Ubuntu? Do you have a topic you’d like to discuss, or an idea you’d like to implement? Then you’ll probably want to propose a session to make it happen. Proposals are still being accepted for another week, so why don’t you go ahead and propose a session?

Looking forward to seeing you all at the Summit!

The post The Ubuntu Online Summit starts next week appeared first on David Planella.

Read more
David Planella

ubuntu-community
I am thrilled to announce the next big event in the Ubuntu calendar: the UbuCon Summit, taking place in Pasadena, CA, in the US, from the 21st to 22nd of January 2016, hosted at SCALE and with Mark Shuttleworth delivering the opening keynote.

Taking UbuCons to the next level

UbuCons are a remarkable achievement from the Ubuntu community: a network of conferences across the globe, organized by volunteers passionate about Open Source and about collaborating, contributing, and socializing around Ubuntu. The UbuCon at SCALE has been one of the most successful ones, and this year we are kicking it up a notch.

Enter the UbuCon Summit. In discussions with the Community Council, and after the participation of some Ubuntu team members at the Community Leadership Summit a few months ago, one of the challenges we identified was that our community lacks a global event to meet face to face after the UDS era. While UbuCons continue to thrive as regional conferences, one of the conclusions we reached was that we needed a way to bring everyone together in a bigger setting to complement the UbuCon fabric: the Summit.

The Summit is the expansion of the traditional UbuCon: more content at a bigger scale, while maintaining the grass-roots spirit and the community-driven organization that have made these events successful.

Two days and two tracks of content

During these two days, the event will be structured as a traditional conference with presentations, demos and plenaries on the first day and as an unconference for the second one. The idea behind the unconference is simple: participants will propose a set of topics in situ, each one of which will be scheduled as a session. For each session the goal is to have a discussion and reach a set of conclusions and actions to address the topics. Some of you will be familiar with the setting :)

We will also have two tracks to group sessions by theme. Users, for those interested in learning about the non-tech, day-to-day part of using Ubuntu, but also including how to contribute to Ubuntu as an advocate. The Developers track will cover the sessions for the technically minded, including app development, IoT, convergence, cloud and more. One of the exciting things about our community is that there is so much overlap between these themes that both tracks will be interesting to everyone.

All in all, the idea is to provide a space to showcase, learn about and discuss the latest Ubuntu technologies, but also to focus on new and vibrant parts of the community and talk about the challenges (and opportunities!) we are facing as a project.

A first-class team

In addition to the support and guidance from the Community Council, the true heroes of the story are Richard Gaskin, Nathan Haines and the Ubuntu California LoCo. Through the years, they have been the engine behind the UbuCon at SCALE in LA, and this time around they were again quick to jump in and drive the Summit wagon too.

This wouldn’t have been possible without the SCALE team either: an excellent host to UbuCon in the past and again on this occasion. In particular, Gareth Greenaway and Ilan Rabinovitch are helping us with the logistics and organization all along the way. If you are joining the Summit, I very much recommend staying for SCALE as well!

More Summit news coming soon

Over the next few weeks we’ll be sharing more details about the Summit, revamping the global UbuCon site and updating the SCALE schedule with all the relevant information.

Stay tuned for more, including the session about the UbuCon Summit at the next Ubuntu Online Summit in two weeks.

Looking forward to seeing some known and new faces at the UbuCon Summit in January!

Picture from an original by cm-t arudy

The post Announcing the UbuCon Summit appeared first on David Planella.

Read more
Zsombor Egri

Adaptive page layout made flexible

A few weeks ago Tim posted a nice article about Adaptive page layouts made easy. It is my turn now to continue the series, with the hope that you will all agree on the title.

Ladies and Gentlemen, we have good news and (slightly) bad news to announce about the AdaptivePageLayout. If blogging were interactive, I’d ask you which one to start with, and most probably you would say the bad news, as it is always good to get the cold shower first and then have a sunbath. Sorry folks, this time I’ll start with the good news.

The good news

We’ve added a column configurability API to the AdaptivePageLayout! From now on you can configure more than two columns in your layout, and for each column you can configure the minimum, maximum and preferred sizes, as well as whether the column should fill the remaining width of the layout or not. Even better, if the minimum and maximum values of a column configuration differ, the column can be resized with mouse or touch. The following video demonstrates the feature.

<commercials>
And all this is possible right now, right here, only with Ubuntu UI Toolkit!
</commercials>

You can specify any number of column configurations, with conditions for when each should be applied. The one-column mode doesn’t need to be configured; it is automatically applied when none of the specified column configuration conditions apply. However, if you wish, you can still configure the single-column mode, in case you want to apply a minimum width value to the column. Note however that the minimum width configuration will not (yet) be applied to the application’s minimum resizable width, as you can observe in the video above.

The video above was made based on the sample code from Tim’s post, with the following additions:

AdaptivePageLayout {
    id: layout
    // [...]
    layouts: [
        // configure two columns
        PageColumnsLayout {
            when: layout.width > units.gu(80)
            PageColumn {
                minimumWidth: units.gu(20)
                maximumWidth: units.gu(60)
                preferredWidth: units.gu(40)
            }
            PageColumn {
                fillWidth: true
            }
        },
        // configure minimum size for single column
        PageColumnsLayout {
            when: true
            PageColumn {
                minimumWidth: units.gu(20)
                fillWidth: true
            }
        }
    ]
}

The full source code is on lp:~zsombi/+junk/AdaptivePageLayoutMadeFlexible.

The bad news

Oh, yes, this is the time you guys start to get mad. But let’s see how bad it is going to be this time.

We started to apply the AdaptivePageLayout in a few core applications, when we realized that the UI gets blocked when Pages with heavy content are added to the columns. Because pages were created synchronously, we would have had to redo each app’s Page content management to load content at least partially asynchronously using Loaders. And that seemed to be a really bad omen for the component. So we decided to introduce an API break in the AdaptivePageLayout addPageTo{Current|Next}Column() functions: if the second argument is a file URL or a Component, the functions now return an incubator object which can be used to track the loading completion. In the case of an existing Page instance, as you already have it, the functions return null. More on how to use incubators in QML can be found at http://doc.qt.io/qt-5/qml-qtqml-component.html#incubateObject-method.

A code snippet to catch page completion would then look like this:

var incubator = layout.addPageToNextColumn(thisPage, Qt.resolvedUrl(pageDocument));
if (incubator && incubator.status == Component.Loading) {
    incubator.onStatusChanged = function(status) {
        if (status == Component.Ready) {
            // incubator.object contains the loaded Page instance
            // do whatever you wish with the Page
            incubator.object.title = "Dynamic Page";
        }
    }
}

Of course, if you want to set up the Page properties with some parameters, you can do it in the good old way, by specifying the parameters in the function, i.e.

addPageToNextColumn(thisPage, Qt.resolvedUrl(pageDocument), {title: "Dynamic Page"})

You would need the incubator approach if you want to set up bindings on the properties of the page, which cannot be done with the creation parameters.

 

So, the bad news is not so bad after all, is it? That’s why I started with the good news ;)

More “bad” news to come

Oh, yes, we have not finished with the bad news yet. From now on, pages added to the columns are loaded asynchronously by default, except the very first page, which is still loaded synchronously. The good news: not for long ;) We are planning to enable asynchronous loading of the primary page as well, and most probably you will get a signal when the page is loaded. That way you will be able to show something else while the first page is loading: an animation, another splash screen, or the Flying Dutchman, whatever :)


Stay tuned! We’ll be back!

 

Read more
David Planella

Snappy Ubuntu + Mycroft = Love

This is a guest post from Ryan Sipes, CTO of the Mycroft project, explaining how snappy Ubuntu will enable them to deliver a secure and open AI for everyone.

When we first undertook the Mycroft project, dubbed the “AI For Everyone”, we knew we would face interesting challenges. We were creating a voice-controlled platform not only for assisting you in your daily life with weather, news updates, calendar reminders, and answers to your questions, but also a hub which would allow you to control your Internet of Things, specifically in the form of home automation. Managing all these devices through a seamless user experience requires a strong backbone for developers, and this is where snappy Ubuntu Core works wonders.

Since choosing to base our open source, open hardware product called Mycroft on snappy Ubuntu Core, we have found the platform to be amazing. Being able to build and deliver apps easily through Snappy packages makes for a quick and painless packaging experience, with only a short time required to get up to speed and start creating your own. We’ve taken advantage of this and are planning to use Snappy packages as the main delivery method of apps on our platform. Want to install the Spotify app on Mycroft? Just install the Snappy package, which you’ll be able to do with just a click.

But Snappy Core’s usefulness goes beyond creating packages: the ability to do transactional updates of apps makes testing and stability easier. We’ve found the ability to roll back an update to be critical in ensuring that our platform is working when it needs to, but it has also made it possible to test for bugs on versions that we are unsure about, and roll back when there is serious breakage. As we continue to learn more, we are ever more impressed with this feature of Snappy.

We’re going to be leveraging snappy Ubuntu Core and “Snaps” to deliver applications to Mycroft, and when talking about a platform that sits in your home and has the ability to install third-party software, an important conversation about privacy is necessary. We are doing our best to ensure that users’ critical data and interactions with Mycroft are kept private, and Snappy makes our job easier. Having a great deal of control over the security policies of apps, and being able to make applications run in a sandbox, allows us to take measures to ensure the core system isn’t compromised. In a world where you are interacting with lots of IoT devices every day, security is paramount, and snappy Ubuntu Core doesn’t let you down.

In case you couldn’t tell from the paragraphs above, the Mycroft team is ecstatic to be building our open source artificial intelligence and home automation platform on such an awesome technology. But one thing I didn’t talk about is the awesome community surrounding Ubuntu and the passionate people working for Canonical who have poured their time into this amazing project; that, above all, is the best reason for using Snappy Core.

If you are interested in learning more about Mycroft, please check out our Kickstarter and consider backing the project. We’ve only got a few days left, but we promise that we will continue to keep everyone posted about our experiences as we continue to use Snappy Core while we work on the #AIForEveryone.

I want AI for everyone too! >

Read more
Zoltán Balogh

The Next Generation SDK

Up until now, the basic architecture of the SDK IDE and tools packaging has been that we package and distribute the QtCreator IDE and our Ubuntu plugins as separate distro packages, which strongly depend on the Qt available in the same release.

Since 14.04 we have been jumping through hoops to provide the very same developer experience from a single development branch of the SDK projects. Just to give a quick picture of what we have available in the last few releases (note that the 1.3 UITK is not yet released):

  • 14.04 Trusty: Qt 5.2.1, QtCreator 3.0.1, UI Toolkit 0.1
  • 14.10 Utopic: Qt 5.3, QtCreator 3.1.1, UI Toolkit 1.1
  • 15.04 Vivid: Qt 5.4.1, QtCreator 3.1.1, UI Toolkit 1.2
  • 15.10 Wily: Qt 5.4.2, QtCreator 3.5.0, UI Toolkit 1.3

Life could have been easier if we had stuck to one stable Qt and QtCreator and based our SDK on them. Obviously that was not a realistic option, as phone development needed the most recent Qt, and our friend Kubuntu required a hot new engine under its hood too. So Qt was quickly moving forward and the SDK followed it. Of course it was all beneficial, as new Qt releases brought us bugfixes, new features and improved performance.

But along the way we came to realize that continuously backporting the UITK and the QtCreator plugins to older releases and the LTS was simply not going to be possible. It went fine for some time, but the more API breaks the new Qt and QtCreator releases brought, the more problems we had to face. Some people have asked why we don’t backport the latest Qt releases to the LTS or to the stable Ubuntu. As an idea it may sound good, but changing the Qt under an application in the LTS from 5.2.1 to 5.4.2 would certainly break that application. So it is simply not cool to mess around with such fundamental bits of a stable and long-term supported release.

The only option we had was to decouple the SDK from the archive release of Qt and build it as a standalone package without any external Qt dependencies. That way we can provide the exact same experience and tools to all developers, regardless of whether they are playing safe on Trusty/LTS or enjoying the cutting edge on the daily developed release of Wily.

The idea manifested in a really funny project. The source tree of the project is pretty empty: only cmake and the debian/rules take care of the job. The builder pulls the latest stable Qt, QtCreator and UITK, builds and integrates the libdbusmenu-qt and appmenu-qt5 projects, and deploys the SDK IDE. The package itself is super skinny. Unlike the old model, where QtCreator pulled in most of the Qt modules as dependencies, this package contains all it needs, and its size is an impressive 36MB. Cheap. Just the way I like it. Plus, this package already contains the 1.3 UITK, as our QtCreator plugin (the Devices Tab) is using it. So in fact we are just one step from enabling desktop application development on 14.04 LTS with the same UI Toolkit as we use on the commercial phone devices. And that is a super hot idea.

The Ubuntu SDK IDE project lives here: https://launchpad.net/ubuntu-sdk-ide

If you want to check out how it is done:

$ bzr branch lp:ubuntu-sdk-ide

Since we were considering such a big facelift of the SDK, I thought why not make the change much bigger. Some might remember that there was a discussion on the Ubuntu Phone mailing list about the possibility of improving the Kit creation in the IDE. Since then we have been playing with the idea, and I think it is now a good time to unleash the static chroots.

The basic idea is that creating the builder chroots at runtime is a super slow and fragile process. Bootstrapping the click chroot already takes a long time, and installing the SDK API packages (all the libs and dev packages with headers) into the chroot is also time consuming. So why not create these root filesystems in advance and provide them as single installable packages?

This is exactly what we have done. The base of the API packages is the Vivid core image. It is small and contains only the absolutely necessary packages. We install the SDK libs, dev packages and development tools on the core image, and configure the Overlay PPA too. So the final image is pretty much equivalent to the image on a freshly updated device out there. It means that the developer can build and test against the same API set as is available on the devices.

These API packages are still huge. Their size is around 500MB, so on a slow connection it still takes ages to download, but it is still way faster than bootstrapping a 1.6GB chroot package by package.

Each API package contains a single tar.gz file, and the post-install script of the package puts the content of this tar.gz in the right place and wires it in the way it should be. Once the package is installed, the new Kit will be automatically recognized by the IDE.

One important note on this API package! If you have an armhf 15.04 Kit (click chroot) already on your system when you install this package, then your original Kit will not be removed but simply renamed to backup-[timestamp]-[original name]. So do not worry if you have customized Kits, they are safe.

The Ubuntu SDK API project is only a packaging project with a simple script to take care of the dirty details. The project is hosted here: https://launchpad.net/ubuntu-sdk-api-15.04

And if you want to see what is in it, just do:

$ bzr branch lp:ubuntu-sdk-api-15.04  

The release candidate packages are available from the Tools Development PPA of the SDK team: https://launchpad.net/~ubuntu-sdk-team/+archive/ubuntu/tools-development

How to test these packages?

$ sudo add-apt-repository ppa:ubuntu-sdk-team/tools-development -y

$ sudo apt-get update

$ sudo apt-get install ubuntu-sdk-ide ubuntu-sdk-api-tools

$ sudo apt-get install ubuntu-sdk-api-15.04-armhf ubuntu-sdk-api-15.04-i386

After that look for the Ubuntu SDK IDE in the dash.

Read more
Michi Henning

A Fast Thumbnailer for Ubuntu

Over the past few months, James Henstridge, Xavi Garcia Mena, and I have implemented a fast and scalable thumbnailing service for Ubuntu and Ubuntu Touch. This post explains how we did it, and how we achieved our performance and reliability goals.

Introduction

On a phone as well as the desktop, applications need to display image thumbnails for various media, such as photos, songs, and videos. Creating thumbnails for such media is CPU-intensive and can be costly in bandwidth if images are retrieved over the network. In addition, different types of media require the use of different APIs that are non-trivial to learn. It makes sense to provide thumbnail creation as a platform API that hides this complexity from application developers and, to improve performance, to cache thumbnails on disk.

This article explains the requirements we had and how we implemented a thumbnailer service that is extremely fast and scalable, and robust in the face of power loss or crashes.

Requirements

We had a number of requirements we wanted to meet in our implementation.

  • Robustness
    In the event of a crash, the implementation must guarantee the integrity of on-disk data structures. This is particularly important on a phone, where we cannot expect the user to perform manual recovery (such as cleaning up damaged files). Because batteries can run out at any time, integrity must be guaranteed even in the face of power loss.
  • Scalability
    It is common for people to store many thousands of songs and photos on a device, so the cache must scale to at least tens of thousands of records. Thumbnails can range in size from a few kilobytes to well over a megabyte (for “thumbnails” at full-screen resolution), so the cache must deal efficiently with large records.
  • Re-usability
    Persistent and reliable on-disk storage of arbitrary records (ranging in size from a few bytes to potentially megabytes) is a common application requirement, so we did not want to create a cache implementation that is specific to thumbnails. Instead, the disk cache is provided as a stand-alone C++ API that can be used for any number of other purposes, such as a browser or HTTP cache, or to build an object file cache similar to ccache.
  • High performance
    The performance of the thumbnailer directly affects the user experience: it is not nice for the customer to look at “please wait a while” icons in, say, an image gallery while thumbnails are being loaded one by one. We therefore had to have a high-performance implementation that delivers cached thumbnails quickly (on the order of a millisecond per thumbnail on an Arm CPU). An efficient implementation also helps to conserve battery life.
  • Location independence and extensibility
    Canonical runs an image server at dash.ubuntu.com that provides album and artist artwork for many musicians and bands. Images from this server are used to display artwork in the music player for media that contains ID3 tags, but does not embed artwork in the media file. The thumbnailer must work with embedded images as well as remote images, and it must be possible to extend it for new types of media without unduly disturbing the existing code.
  • Low bandwidth consumption
    Mobile phones typically come with data caps, so the cache has to be frugal with network bandwidth.
  • Concurrency and isolation
    The implementation has to allow concurrent access by multiple applications, as well as concurrent access from a single implementation. Besides needing to be thread-safe, this means that a request for a thumbnail that is slow (such as downloading an image over the network) must not delay other requests.
  • Fault tolerance
    Mobile devices lose network access without warning, and users can add corrupt media files to their device. The implementation must be resilient to partial failures, such as incomplete network replies, dropped connections, and bad image data. Moreover, the recovery strategy for such failures must conserve battery and avoid repeated futile attempts to create thumbnails from media that cannot be retrieved or contains malformed data.
  • Security
    The implementation must ensure that applications cannot see (or, worse, overwrite) each other’s thumbnails or coerce the thumbnailer into delivering images from files that an application is not allowed to read.
  • Asynchronous API
    The customers of the thumbnailer are applications that are written in QML or Qt, which cannot block in the UI thread. The thumbnailer therefore must provide a non-blocking API. Moreover, the application developer should be able to get the best possible performance without having to use threads. Instead, concurrency must be internal to the implementation (which is able to put threads to use intelligently where they make sense), instead of the application throwing threads at the problem in the hope that it might make things faster when, in fact, it might just add overhead.
  • Monitoring
    The effectiveness of a cache cannot be assessed without statistics to show hit and miss rates, evictions, and other basic performance data, so it must provide a way to extract this information.
  • Error reporting
    When something goes wrong with a system service, typically the only way to learn about the problem is to look at log messages. In case of a failure, the implementation must leave enough footprints behind to allow someone to diagnose a failure after the fact with some chance of success.
  • Backward compatibility
    This project was a rewrite of an earlier implementation. Rather than delivering a “big bang” piece of software and potentially upsetting existing clients, we incrementally changed the implementation such that existing applications continued to work. (The only pre-existing interface was a QML interface that required no change.)

System architecture

Here is a high-level overview of the main system components.

External API

To the outside world, the thumbnailer provides two APIs.

One API is a QML plugin that registers itself as an image provider for QQuickAsyncImageProvider. This allows the caller to pass a URI that encodes a query for a local or remote thumbnail at a particular size; if the URI matches the registered provider, QML transfers control to the entry points in our plugin.

The second API is a Qt API that provides three methods:

QSharedPointer<Request> getThumbnail(QString const& filePath,
                                     QSize const& requestedSize);
QSharedPointer<Request> getAlbumArt(QString const& artist,
                                    QString const& album,
                                    QSize const& requestedSize);
QSharedPointer<Request> getArtistArt(QString const& artist,
                                     QString const& album,
                                     QSize const& requestedSize);

The getThumbnail() method extracts thumbnails from local media files, whereas getAlbumArt() and getArtistArt() retrieve artwork from the remote image server. The returned Request object provides a finished signal, and methods to test for success or failure of the request and to extract a thumbnail as a QImage. The request also provides a waitForFinished() method, so the API can be used synchronously.

Thumbnails are delivered to the caller in the size they are requested, subject to a (configurable) 1920-pixel limit. As an escape hatch, requests with width and height of zero deliver artwork at its original size, even if it exceeds the 1920-pixel limit. The scaling algorithm preserves the original aspect ratio and never scales up from the original, so the returned thumbnails may be smaller than their requested size.
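
As a usage sketch (hypothetical: the post does not name the class that exposes these methods, nor the accessors on Request, so Thumbnailer and thumbnail() below are placeholders; only getThumbnail(), the finished signal, and waitForFinished() come from the description above):

#include <QImage>
#include <QObject>
#include <QSize>
#include <QString>

void showThumbnail(Thumbnailer& thumbnailer)  // "Thumbnailer" is a placeholder name
{
    auto request = thumbnailer.getThumbnail(
        QStringLiteral("/home/phablet/Pictures/photo.jpg"), QSize(256, 256));

    // Asynchronous use: react to the finished signal.
    QObject::connect(request.data(), &Request::finished, [request]
    {
        QImage img = request->thumbnail();  // hypothetical accessor name
        // ... hand img to the view ...
    });

    // Alternatively, call request->waitForFinished() for synchronous use.
}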

DBus service

The thumbnailer is implemented as a DBus service with two interfaces. The first interface provides the server-side implementation of the three methods of the external API; the second interface is an administrative interface that can deliver statistics, clear the internal disk caches, and shut down the service. A simple tool, thumbnailer-admin, allows both interfaces to be called from the command line.

To conserve resources, the service is started on demand by DBus and shuts down after 30 seconds of idle time.

Image extraction

Image extraction uses an abstract base class. This interface is independent of media location and type. The actual image extraction is performed by derived implementations that download images from the remote server, extract them from local image files, or extract them from local streaming media files. This keeps knowledge of image location and encoding out of the main caching and error handling logic, and allows us to support new media types (whether local or remote) by simply adding extra derived implementations.

Image extraction is asynchronous, with currently three implementations:

  • Image downloader
    To retrieve artwork from the remote image server, the service talks to an abstract base class with asynchronous download_album() and download_artist() methods. This allows multiple downloads to run concurrently and makes it easy to add new local or remote image providers without disturbing the code for existing ones. A class derived from that abstract base implements a REST API with QNetworkAccessManager to retrieve images from dash.ubuntu.com.
  • Photo extractor
    The photo extractor is responsible for delivering images from local image files, such as JPEG or PNG files. It simply delegates that work to the image converter and scaler.
  • Audio and video thumbnail extractor
    To extract thumbnails from audio and video files, we use GStreamer. Due to reliability problems with some codecs that can hang or crash, we delegate the task to a separate vs-thumb executable. This shields the service from failures and also allows us to run several GStreamer pipelines concurrently without a crash of one pipeline affecting the others.

Image converter and scaler

We use a simple Image class with a synchronous interface to convert and scale different image formats to JPEG. The implementation uses Gdk-Pixbuf, which can handle many different input formats and is very efficient.

For JPEG source images, the code checks for the presence of EXIF data using libexif and, if it contains a thumbnail that is at least as large as the requested size, scales the thumbnail from the EXIF data. (For images taken with the camera on a Nexus 4, the original image size is 3264×1836, with an embedded EXIF thumbnail of 512×288. Scaling from the EXIF thumbnail is around one hundred times faster than scaling from the full-size image.)
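
As a rough illustration of that fast path (my sketch, not the project’s code), libexif exposes an embedded thumbnail, if one is present, through the data and size members of ExifData:

#include <libexif/exif-data.h>

// Returns true if the JPEG at path carries an embedded EXIF thumbnail.
// ed->data/ed->size hold the raw thumbnail bytes (typically JPEG), which
// can be handed to the scaler if the thumbnail is large enough.
bool has_exif_thumbnail(char const* path)
{
    ExifData* ed = exif_data_new_from_file(path);
    if (!ed) {
        return false;  // no EXIF data at all
    }
    bool found = ed->data != nullptr && ed->size > 0;
    exif_data_unref(ed);
    return found;
}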

Disk cache

The thumbnailer service optimizes performance and conserves bandwidth and battery by adopting a layered caching strategy.

Two-level caching with failure lookup

Internally, the service uses three separate on-disk caches:

  • Full-size cache
    This cache stores images that are expensive to retrieve (images that are remote or are embedded in audio and video files) at original resolution (scaled down to a 1920-pixel bounding box if the original image is larger). The default size of this cache is 50 MB, which is sufficient to hold around 400 images at 1920×1080 resolution. Images are stored in JPEG format (at a 90% quality setting).
  • Thumbnail cache
    This cache stores thumbnails at the size that was requested by the caller, such as 512×288. The default size of this cache is 100 MB, which is sufficient to store around 11,000 thumbnails at 512×288, or around 25,000 thumbnails at 256×144.
  • Failure cache
    The failure cache stores the keys for images that could not be extracted because of a failure. For remote images, this means that the server returned an authoritative answer “no such image exists”, or that we encountered an unexpected (non-authoritative) failure, such as the server not responding or a DNS lookup timing out. For local images, it means either that the image data could not be processed because it is damaged, or that an audio file does not contain embedded artwork.

The full-size cache exists because it is likely that an application will request thumbnails at different sizes for the same image. For example, when scrolling through a list of songs that shows a small thumbnail of the album cover beside each song, the user is likely to select one of the songs to play, at which point the media player will display the same cover in a larger size. By keeping full-size images in a separate (smallish) cache, we avoid performing an expensive extraction or download a second time. Instead, we create additional thumbnails by scaling them from the full-size cache (which uses an LRU eviction policy).

The thumbnail cache stores thumbnails that were previously retrieved, also using LRU eviction. Thumbnails are stored as JPEG at the default quality setting of 75%, at the actual size that was requested by the caller. Storing JPEG images (rather than, say, PNG) saves space and increases cache effectiveness. (The minimal quality loss from compression is irrelevant for thumbnails). Because we store thumbnails at the size they are actually needed, we may have several thumbnails for the same image in the cache (each thumbnail at a different size). But applications typically ask for thumbnails in only a small number of sizes, and ask for different sizes for the same image only rarely. So, the slight increase in disk space is minor and amply repaid by applications not having to scale thumbnails after they receive them from the cache, which saves battery and achieves better performance overall.

Finally, the failure cache is used to stop futile attempts to repeatedly extract a thumbnail when we know that the attempt will fail. It uses LRU eviction with an expiry time for each entry.

Cache lookup algorithm

When asked for a thumbnail at a particular size, the lookup and thumbnail generation proceed as follows:

  1. Check if a thumbnail exists in the requested size in the thumbnail cache. If so, return it.
  2. Check if a full-size image for the thumbnail exists in the full-size cache. If so, scale the new thumbnail from the full-size image, add the thumbnail to the thumbnail cache, and return it.
  3. Check if there is an entry for the thumbnail in the failure cache. If so, return an error.
  4. Attempt to download or extract the original image for the thumbnail. If the attempt fails, add an entry to the failure cache and return an error.
  5. If the original image was delivered by the remote server or was extracted locally from streaming media, add it to the full-size cache.
  6. Scale the thumbnail to the desired size, add it to the thumbnail cache, and return it.

Note that these steps represent only the logical flow of control for a particular thumbnail. The implementation executes these steps concurrently for different thumbnails.
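
For the code-minded, here is an illustrative sketch of those six steps. It is not the actual implementation: Cache merely mimics the get/put interface of the persistent cache described below, and scale_to_size() and fetch_original() stand in for the scaling and extraction machinery.

#include <optional>
#include <stdexcept>
#include <string>

struct Cache {
    std::optional<std::string> get(std::string const& key);
    void put(std::string const& key, std::string const& value);
};

Cache thumbnail_cache, full_size_cache, failure_cache;

std::string scale_to_size(std::string const& image, int size);      // stub
std::optional<std::string> fetch_original(std::string const& id);   // stub

std::string lookup(std::string const& id, int size)
{
    auto thumb_key = id + ':' + std::to_string(size);
    if (auto thumb = thumbnail_cache.get(thumb_key)) {
        return *thumb;                               // 1. thumbnail cache hit
    }
    if (auto full = full_size_cache.get(id)) {
        auto thumb = scale_to_size(*full, size);     // 2. scale from full size
        thumbnail_cache.put(thumb_key, thumb);
        return thumb;
    }
    if (failure_cache.get(id)) {
        throw std::runtime_error("known failure");   // 3. failure cache hit
    }
    auto original = fetch_original(id);              // 4. download or extract
    if (!original) {
        failure_cache.put(id, "");
        throw std::runtime_error("cannot extract " + id);
    }
    full_size_cache.put(id, *original);              // 5. cache original (the real
                                                     //    service does this only for
                                                     //    remote/streaming media)
    auto thumb = scale_to_size(*original, size);     // 6. scale, cache, return
    thumbnail_cache.put(thumb_key, thumb);
    return thumb;
}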

Designing for performance

Apart from fast on-disk caches (see below), the thumbnailer must make efficient use of I/O bandwidth and threads. This means not only making things fast, but also not needlessly wasting resources such as threads, memory, network connections, or file descriptors. Provided that enough requests are made to keep the service busy, we do not want it to ever wait for a download or image extraction to complete while there is something else that could be done in the meantime, and we want it to keep all CPU cores busy. In addition, requests that are slow (because they require a download or a CPU-intensive image extraction) must not block requests that are queued up behind them if those requests would result in cache hits that could be returned immediately.

To achieve a high degree of concurrency without blocking on long-running operations while holding precious resources, the thumbnailer uses a three-phase lookup algorithm:

  1. In phase 1, we look at the caches to determine if we have a hit or an authoritative miss. Phase 1 is very fast. (It takes around a millisecond to return a thumbnail from the cache on a Nexus 4.) However, cache lookup can briefly stall on disk I/O or require a lot of CPU to extract and scale an image. To get good performance, phase 1 requests are passed to a thread pool with as many threads as there are CPU cores. This allows the maximum number of lookups to proceed concurrently.
  2. Phase 2 is initiated if phase 1 determines that a thumbnail requires download or extraction, either of which can take on the order of seconds. (In case of extraction from local media, the task is CPU intensive; in case of download, most of the time is spent waiting for the reply from the server.) This phase is scheduled asynchronously from an event loop. This minimizes task switching and allows large numbers of requests to be queued while only using a few bytes for each request that is waiting in the queue.
  3. Phase 3 is really a repeat of phase 1: if phase 2 produces a thumbnail, it adds it to the cache; if phase 2 does not produce a thumbnail, it creates an entry in the failure cache. By simply repeating phase 1, the lookup then results in either a thumbnail or an error.

If phase 2 determines that a download or extraction is required, that work is performed concurrently: the service schedules several downloads and extractions in parallel. By default, it will run up to two concurrent downloads, and as many concurrent GStreamer pipelines as there are CPUs. This ensures that we use all of the available CPU cores. Moreover, download and extraction run concurrently with lookups for phase 1 and 3. This means that, even if a cache lookup briefly stalls on I/O, there is a good chance that another thread can make use of the CPU.

Because slow operations do not block lookup, this also ensures that a slow request does not stall requests for thumbnails that are already in the cache. In other words, it does not matter how many slow requests are in progress: requests that can be completed quickly are indeed completed quickly, regardless of what is going on elsewhere.

Overall, this strategy works very well. For example, with sufficient workload, the service achieves around 750% CPU utilization on an 8-core desktop machine, while still delivering cache hits almost instantaneously. (On a Nexus 4, cache hits take a little over 1 ms while concurrent extractions or downloads are in progress.)

A re-usable persistent cache for C++

The three internal caches are implemented by a small and flexible C++ API. This API is available as a separate reusable PersistentStringCache component (see persistent-cache-cpp) that provides a persistent store of arbitrary key–value pairs. Keys and values can be binary, and entries can be large. (Megabyte-sized values do not present a problem.)

The implementation uses leveldb, which provides a very fast NoSQL database that scales to multi-gigabyte sizes and provides integrity guarantees. In particular, if the calling process crashes, all inserts that completed at the API level will be intact after a restart. (In case of a power failure or kernel crash, a few buffered inserts can be lost, but the integrity of the database is still guaranteed.)

To use a cache, the caller instantiates it with a path name, a maximum size, and an eviction policy. The eviction policy can be set to either strict LRU (least-recently-used) or LRU with an expiry time. Once a cache reaches its maximum size, expired entries (if any) are evicted first and, if that does not free enough space for a new entry, entries are discarded in least-recently-used order until enough room is available to insert a new record. (In all other respects, expired entries behave like entries that were never added.)

A simple get/put API allows records to be retrieved and added, for example:

auto c = core::PersistentStringCache::open(
    "my_cache", 100 * 1024 * 1024, core::CacheDiscardPolicy::lru_only);
// Look for an entry and add it if there is a cache miss.
string key = "Bjarne";
auto value = c->get(key);
if (value) {
    cout << key << ": " << *value << endl;
} else {
    value = "C++ inventor";  // Provide a value for the key.
    c->put(key, *value);     // Insert it.
}

Running this program prints nothing on the first run, and “Bjarne: C++ inventor” on all subsequent runs.

The API also allows application-specific metadata to be added to records, provides detailed statistics, supports dynamic resizing of caches, and offers a simple adapter template that makes it easy to store complex user-defined types without the need to clutter the code with explicit serialization and deserialization calls. (In a pinch, if iteration is not needed, the cache can be used as a persistent map by setting an impossibly large cache size, in which case no records are ever evicted.)

Performance

Our benchmarks indicate good performance. (Figures are for an Intel Ivy Bridge i7-3770k 3.5 GHz machine with a 256 GB SSD.) Our test uses 60-byte string keys. Values are binary blobs filled with random data (so they are not compressible), 20 kB in size with a standard deviation of 7,000, so the majority of values are 13–27 kB in size. The cache size is 100 MB, so it contains around 5,000 records.

Filling the cache with 100 MB of records takes around 2.8 seconds. Thereafter, the benchmark does a random lookup with an 80% hit probability. In case of a cache miss, it inserts a new random record, evicting old records in LRU order to make room for the new one. For 100,000 iterations, the cache returns around 4,800 “thumbnails” per second, with an aggregate read/write throughput of around 93 MB/sec. At 90% hit rate, we see twice the performance at around 7,100 records/sec. (Writes are expensive once the cache is full due to the need to evict entries, which requires updating the main cache table as well as an index.)

Repeating the test with a 1 GB cache produces identical timings so (within limits) performance remains constant for large databases.

Overall, performance is restricted largely by the bandwidth to disk. With a 7,200 rpm disk, we measured around one third of the performance with an SSD.

Recovering from errors

The overall design of the thumbnailer delivers good performance when things work. However, our implementation has to deal with the unexpected, such as network requests that do not return responses, GStreamer pipelines that crash, request overload, and so on. What follows is a partial list of steps we took to ensure that things behave sensibly, particularly on a battery-powered device.

Retry strategy

The failure cache provides an effective way to stop the service from endlessly trying to create thumbnails that, in an earlier attempt, returned an error.

For remote images, we know that, if the server has (authoritatively) told us that it has no artwork for a particular artist or album, it is unlikely that artwork will appear any time soon. However, the server may be updated with more artwork periodically. To deal with this, we add an expiry time of one week to the entries in the failure cache. That way, we do not try to retrieve the same image again until at least one week has passed (and only if we receive a request for a thumbnail for that image again later).

As opposed to authoritative answers from the image server (“I do not have artwork for this artist.”), we can also encounter transient failures. For example, the server may currently be down, or there may be some other network-related issue. In this case, we remember the time of the failure and do not try to contact the remote server again for two hours. This conserves bandwidth and battery power.

The device may also be disconnected from the network, in which case any attempt to retrieve a remote image is doomed. Our implementation returns failure immediately on a cache miss for a remote image if no network is present or the device is in flight mode. (We do not add an entry to the failure cache in this case.)

For local files, we know that, if an attempt to get a thumbnail for a particular file has failed, future attempts will fail as well. This means that the only way for the problem to get fixed is by modifying or replacing the actual media file. To deal with this, we add the inode number, modification time, and inode modification time to the key for local images. If a user replaces, say, a music file with a new one that contains artwork, we automatically pick up the new version of the file because its key has changed; the old version will eventually fall out of the cache.
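
A sketch of what building such a key might look like (illustrative; the post does not spell out the exact key format):

#include <sys/stat.h>
#include <stdexcept>
#include <string>

// Combine the path with the inode number, modification time, and inode
// change time, so the key changes whenever the file is modified or replaced.
std::string key_for_local_file(std::string const& path)
{
    struct stat st;
    if (stat(path.c_str(), &st) != 0) {
        throw std::runtime_error("cannot stat " + path);
    }
    return path + '\0' + std::to_string(st.st_ino) + '\0' +
           std::to_string(st.st_mtime) + '\0' + std::to_string(st.st_ctime);
}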

Download and extraction failures

We monitor downloads and extractions for timely completion. (Timeouts for downloads and extractions can be configured separately.) If the server does not respond within 10 seconds, we abandon the attempt and treat it as a transient network error. Similarly, the vs-thumb processes that extract images from audio and video files can hang. We monitor these processes and kill them if they do not produce a result within 10 seconds.

Database corruption

Assuming an error-free implementation of leveldb, database corruption is impossible. However, in practice, an errant command could scribble over the database files. If leveldb detects that the database is corrupted, the recovery strategy is simple: we delete the on-disk cache and start again from scratch. Because the cache contents are ephemeral anyway, this is fine (other than slower operation until the working set of thumbnails makes it into the cache again).

Dealing with backlog

The asynchronous API provided by the service allows an application to submit an unlimited number of requests. Lots of requests happen if, for example, the user has inserted a flash card with thousands of photos into the device and then requests a gallery view for the collection. If the service’s client-side API blindly forwards requests via DBus, this causes a problem because DBus terminates the connection once there are more than around 400 outstanding requests.

To deal with this, we limit the number of outstanding requests to 200 and send another request via DBus only when an earlier request completes. Additional requests are queued in memory. Because this happens on the client side, the number of outstanding requests is limited only by the amount of memory that is available to the client.
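
In outline, the client-side throttle might look like this (a sketch that assumes everything runs on the client’s event-loop thread, so no locking is shown):

#include <functional>
#include <queue>
#include <utility>

constexpr int max_outstanding = 200;        // stay well below the DBus limit

std::queue<std::function<void()>> waiting;  // requests held in client memory
int outstanding = 0;

// Called for every new request; send() forwards the request via DBus.
void submit(std::function<void()> send)
{
    if (outstanding < max_outstanding) {
        ++outstanding;
        send();
    } else {
        waiting.push(std::move(send));      // hold it until a slot frees up
    }
}

// Called whenever an earlier request completes.
void on_request_finished()
{
    --outstanding;
    if (!waiting.empty()) {
        auto next = std::move(waiting.front());
        waiting.pop();
        submit(std::move(next));
    }
}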

A related problem arises if a client submits many requests for a thumbnail for the same image. This happens when, for example, the user looks at a list of tracks: tracks that belong to the same album have the same artwork. If artwork needs to be retrieved from the remote server, naively forwarding cache misses for each thumbnail to the server would end up re-downloading the same image several times.

We deal with this by maintaining an in-memory map of all remote download requests that are currently in progress. If phase 1 reports a cache miss, before initiating a download, we add the key for the remote image to the map and remove it again once the download completes. If more requests for the same image encounter a cache miss while the download for the original request is still in progress, the key for the in-progress download is still in the map, and we hold additional requests for the same image until the download completes. We then schedule the held requests as usual and create their thumbnails from the image that was cached by the first request.
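
A condensed sketch of that bookkeeping (illustrative, not the actual code):

#include <map>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

struct Request;  // a pending thumbnail request (details elided)

std::mutex mtx;
std::map<std::string, std::vector<Request*>> in_progress;

// Returns true if the caller should start the download; otherwise the
// request is held until the in-flight download for the same key finishes.
bool should_start_download(std::string const& key, Request* r)
{
    std::lock_guard<std::mutex> lock(mtx);
    auto it = in_progress.find(key);
    if (it != in_progress.end()) {
        it->second.push_back(r);  // download already running: hold this request
        return false;
    }
    in_progress[key];             // we are first: register the download
    return true;
}

// Called when the download for key completes; the returned requests are
// rescheduled and their thumbnails created from the now-cached image.
std::vector<Request*> download_finished(std::string const& key)
{
    std::lock_guard<std::mutex> lock(mtx);
    auto held = std::move(in_progress[key]);
    in_progress.erase(key);
    return held;
}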

Security

The thumbnailer runs with normal user privileges. We use AppArmor’s aa_query_label() function to verify that the calling client has read access to a file it wants a thumbnail for. This prevents one application from accessing thumbnails produced by a different application, unless both applications can read the original file. In addition, we place the entire service under an AppArmor profile to ensure that it can write only to its own cache directory.

Conclusion

Overall, we are very pleased with the design and performance of the thumbnailer. Each component has a clearly defined role with a clean interface, which made it easy for us to experiment and to refine the design as we went along. The design is extensible, so we can support additional media types or remote data sources without disturbing the existing code.

We used threads sparingly and only where we saw worthwhile concurrency opportunities. Using asynchronous interfaces for long-running operations kept resource usage to a minimum and allowed us to take advantage of I/O interleaving. In turn, this extracts the best possible performance from the hardware.

The thumbnailer now runs on Ubuntu Touch and is used by the gallery, camera, and music apps, as well as for all scopes that display media thumbnails.

This article was originally published on Michi Henning's blog.

Read more
Tim Peeters

Adaptive page layouts made easy

Convergent applications

We want to make it easy for app developers to write an app that can run on different form factors without changes in the code. This implies that an app should support screens of various sizes, and the layout of the app should be optimal for each screen size. For example, a messaging app running on a desktop PC in a big window could show a list of conversations in a narrow column on the left, and the selected conversation in a wider column on the right side. The same application on a phone would show only the list of conversations, or the selected conversation with a back-button to return to the list. It would also be useful if the app automatically switches between the 1-column and 2-column layouts when the user resizes the window, or attaches a large screen to the phone.

To accomplish this, we introduced the AdaptivePageLayout component in Ubuntu.Components 1.3. This version of  Ubuntu.Components is still under development (expect an official release announcement soon), but if you are running the latest version of the Ubuntu UI Toolkit, you can already try it out by updating your import Ubuntu.Components to version 1.3. Note that you should not mix import versions, so when you update one of your components to 1.3, they should all be updated.

AdaptivePageLayout

AdaptivePageLayout is an Item with the following properties and functions:

  • property Page primaryPage
  • function addPageToCurrentColumn(sourcePage, newPage)
  • function addPageToNextColumn(sourcePage, newPage)
  • function removePages(page)

To understand how it works, imagine that internally, the AdaptivePageLayout keeps track of an infinite number of virtual columns that may be displayed on your screen. Not all virtual columns are visible on the screen. By default, depending on the width of your AdaptivePageLayout, either one or two columns are visible. When a Page is added to a virtual column that is not visible, it will instead be shown in the right-most visible column.

The Page defined as primaryPage will initially be visible in the first (left-most) column and all the other columns are empty (see figure 1).

Figure 1: Showing only primaryPage in layouts of 100 and 50 grid-units.

To show another Page in the first column, call addPageToCurrentColumn() with the current page (primaryPage) and the new page as parameters. The new page will then show up in the same column, with a back button in the header to close the new page and return to the previous page (see figure 2). So far, AdaptivePageLayout is no different from a PageStack.

Figure 2: Page with back button in the first column.

The differences with PageStack become evident when you want to keep the first page visible in the first column, while adding a new page to the next column. To do this, call addPageToNextColumn() with the same parameters as addPageToCurrentColumn() above. The new page will now show up in the following column on the screen (see figure  3).

Figure 3: Adding a page to the next column.

However, if you resize the window so that it fits only one column, the left column will be hidden, and the page that was in the right column will now have a back button. Resizing back to get the two-column layout will again give you the first page on the left, and the new page on the right. Call removePages(page) to remove page and all pages that were added after page was added. There is one exception: primaryPage is never removed, so removePages(primaryPage) will remove all pages except primaryPage and return your AdaptivePageLayout to its initial state.

AdaptivePageLayout automatically chooses between a one and two-column layout depending on the width of the window. It also automatically shows a back button in the correct column when one is needed and synchronizes the header size between the different columns (see figure 4).

Figure 4: Adding sections to any column increases the height of the header in every column.

Future extensions

The version of AdaptivePageLayout that is now in the UI toolkit is only the first version. What works now will keep working, but we will extend the API to support the following:

  • Layouts with more than two columns
  • Use different conditions for switching between layouts
  • User-resizable columns
  • Automatic and manual hiding of the header in single-column layouts
  • Custom proxy objects to support Autopilot tests for applications

Below you can read the full source code that was used to create the screenshots above. The screenshots do not cover all the possible orders in which the pages left and right can be added, so I encourage you to run the code yourself and discover its full behavior. We are looking forward to seeing your first applications using the new AdaptivePageLayout component soon :). Of course, if there are any questions, you can leave a comment below or ping members of the SDK team (I am t1mp) in #ubuntu-app-devel on Freenode IRC.

 

import QtQuick 2.4
import Ubuntu.Components 1.3

MainView {
    width: units.gu(100)
    height: units.gu(70)

    AdaptivePageLayout {
        id: layout
        anchors.fill: parent
        primaryPage: rootPage

        Page {
            id: rootPage
            title: i18n.tr("Root page")

            Column {
                anchors {
                    top: parent.top
                    left: parent.left
                    margins: units.gu(1)
                }
                spacing: units.gu(1)

                Button {
                    text: "Add page left"
                    onClicked: layout.addPageToCurrentColumn(rootPage, leftPage)
                }
                Button {
                    text: "Add page right"
                    onClicked: layout.addPageToNextColumn(rootPage, rightPage)
                }
                Button {
                    text: "Add sections page right"
                    onClicked: layout.addPageToNextColumn(rootPage, sectionsPage)
                }
            }
        }

        Page {
            id: leftPage
            title: i18n.tr("First column")

            Rectangle {
                anchors {
                    fill: parent
                    margins: units.gu(2)
                }
                color: UbuntuColors.orange

                Button {
                    anchors.centerIn: parent
                    text: "right"
                    onTriggered: layout.addPageToNextColumn(leftPage, rightPage)
                }
            }
        }

        Page {
            id: rightPage
            title: i18n.tr("Second column")

            Rectangle {
                anchors {
                    fill: parent
                    margins: units.gu(2)
                }
                color: UbuntuColors.green

                Button {
                    anchors.centerIn: parent
                    text: "Another page!"
                    onTriggered: layout.addPageToCurrentColumn(rightPage, sectionsPage)
                }
            }
        }

        Page {
            id: sectionsPage
            title: i18n.tr("Page with sections")
            head.sections.model: [i18n.tr("one"), i18n.tr("two"), i18n.tr("three")]

            Rectangle {
                anchors {
                    fill: parent
                    margins: units.gu(2)
                }
                color: UbuntuColors.blue
            }
        }
    }
}

 

Read more