Canonical Voices

Posts tagged with 'article'

Robin Winslow

There has been a growing movement to get all websites to use SSL connections where possible. Nowadays, Google even uses it as a criterion for ranking websites.

I've written before about how to host an HTTPS website for free with StartSSL and OpenShift. However, StartSSL is very hard to use and provides very basic certificates, and setting up a website on OpenShift is a fairly technical undertaking.

There now exists a much simpler way to set up an HTTPS website with CloudFlare and GitHub Pages. This only works for static sites.

If your site is more complicated and needs a database or dynamic functionality, or you need an SHA-1 fallback certificate (explained below) then look at my other post about the OpenShift solution. However, if a static site works for you, read on.

GitHub Pages

As most developers will be aware by now, GitHub offer a fantastic free static website hosting solution in GitHub Pages.

All you have to do is put your HTML files in the gh-pages branch of one of your repositories and they will be served as a website at {username}.github.io/{project-name}.

GitHub pages minimal files

All files are passed through the Jekyll parser first, so you can split your HTML into templates if you want. And if you don't want to craft your site by hand, you can use the Automatic Page Generator.

automatic page generator themes

Websites on github.io also support HTTPS, so you can serve your site up at https://{username}.github.io/{project-name} if you want.

mytestwebsite GitHub pages

GitHub Pages also support custom domains (still for free). Just add a CNAME file to the repository with your domain name in it - e.g. mytestwebsite.robinwinslow.uk - and then go and set up the DNS CNAME to point to {username}.github.io.

mytestwebsite GitHub pages files

The only thing you can't do directly with GitHub Pages is offer HTTPS on your custom domain - e.g. https://mytestwebsite.robinwinslow.uk. This is where CloudFlare comes in.

CloudFlare

CloudFlare offer a really quite impressive free DNS and CDN service. Crucially for our current HTTPS mission, the free tier includes SSL certificates.

The most important downside to CloudFlare's free tier SSL is that it doesn't include the fall-back to legacy SHA-1 for older browsers. This means that the most out-of-date (and therefore probably the poorest) 1.5% of global citizens won't be able to access your site without upgrading their browser. If this is important to you, either find a different HTTPS solution or upgrade to a paid CloudFlare account.

Setting up HTTPS

Because CloudFlare are a CDN and a DNS host, they can do the HTTPS negotiation for you. They've taken advantage of this to provide you with a free HTTPS certificate to encrypt communication between your users and their cached site.

First, simply set up your DNS with CloudFlare to point to {username}.github.io, and allow CloudFlare to cache the site.

mytestwebsite CloudFlare DNS setup

The connection between CloudFlare and your host doesn't have to be encrypted, but I would certainly suggest that it should be. Crucially for us, this connection doesn't actually need a valid HTTPS certificate. To enable this, select the "Full" (rather than "Flexible" or "Strict") option.

CloudFlare full SSL encryption

Et voilà! You now have an encrypted custom domain in front of GitHub Pages completely for free!

mytestwebsite with a secure domain

Ensuring all visitors use HTTPS

To make our site properly secure, we need to ensure all users are sent to the HTTPS site (https://mytestwebsite.robinwinslow.uk) instead of the HTTP one (http://mytestwebsite.robinwinslow.uk).

Setting up a page rule

The first step to get visitors to use HTTPS is to send a 301 redirect from http://mytestwebsite.robinwinslow.uk to https://mytestwebsite.robinwinslow.uk.

Although this is not supported with GitHub Pages, it can be achieved with CloudFlare page rules.

Just add a page rule for http://*{your-domain.com}/* (e.g. http://*robinwinslow.uk/*) and turn on "Always use HTTPS":

CloudFlare always use HTTPS page rule

Now we can check that our domain is redirecting users to HTTPS by inspecting the headers:

$ curl -I mytestwebsite.robinwinslow.uk
HTTP/1.1 301 Moved Permanently
...
Location: https://mytestwebsite.robinwinslow.uk/

HTTP Strict Transport Security (HSTS)

To protect our users from man-in-the-middle attacks, we should also turn on HSTS with CloudFlare (still for free). Note that this can cause problems if you're ever planning on removing HTTPS from your site.

If you're using a subdomain (e.g. mytestwebsite.robinwinslow.uk), remember to enable "Apply HSTS policy to subdomains".

CloudFlare: HSTS setting

This will tell modern browsers to always use the HTTPS protocol for this domain.

$ curl -I https://mytestwebsite.robinwinslow.uk
HTTP/1.1 200 OK
...
Strict-Transport-Security: max-age=15552000; includeSubDomains; preload
X-Content-Type-Options: nosniff

It can take several weeks for your domain to make it into the Chromium HSTS preload list. You can check whether it's in there, or add it, by visiting chrome://net-internals/#hsts in a Chrome or Chromium browser and looking for the static_sts_domain setting.

That's it!

You now have an incredibly quick and easy way to put a fully secure website online in minutes, totally for free! (Apart from the domain name).

(Also posted on my personal blog - which uses the free CloudFlare plan.)

Read more
Robin Winslow

I often find myself wanting to play around with a tiny Python web application with native Python without installing any extra modules - the Python developer's equivalent of creating an index.html and opening it in the browser just to play around with markup.

For example, today I found myself wanting to inspect how the Google API Client Library for Python handles requests, and a simple application server was all I needed.

In these situations, the following minimal WSGI application, using the built-in wsgiref library is just the ticket:

from wsgiref.simple_server import make_server

def application(env, start_response):
    """
    A basic WSGI application
    """

    http_status = '200 OK'
    response_headers = [('Content-Type', 'text/html')]
    response_text = b"Hello World"  # WSGI response bodies must be bytes, not str

    start_response(http_status, response_headers)
    return [response_text]

if __name__ == "__main__":
    make_server('', 8000, application).serve_forever()

Put this in a file - e.g. wsgi.py - and run it with:

$ python3 wsgi.py

(I've also saved this as a Gist).

This provides you with a very raw way of parsing HTTP requests. All the HTTP variables come in as items in the env dictionary:

def application(env, start_response):
    # To get the requested path
    # (the /index.html in http://example.com/index.html)
    path = env['PATH_INFO']
    
    # To get any query parameters
    # (the foo=bar in http://example.com/index.html?foo=bar)
    qs = env['QUERY_STRING']
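
For example, here is a minimal, self-contained sketch (the "name" parameter is made up for illustration) showing how you might parse the query string with the standard library:

from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

def application(env, start_response):
    # parse_qs turns "name=Robin&x=1" into {'name': ['Robin'], 'x': ['1']}
    params = parse_qs(env['QUERY_STRING'])
    name = params.get('name', ['world'])[0]

    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('Hello ' + name).encode('utf-8')]

if __name__ == "__main__":
    make_server('', 8000, application).serve_forever()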

What I often do from here is use ipdb to inspect incoming requests, or directly manipulate the response headers or content.
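
If you have ipdb installed (a third-party package: pip install ipdb), one way to do that is to drop a breakpoint straight into the handler, which pauses on every incoming request:

def application(env, start_response):
    import ipdb; ipdb.set_trace()  # inspect env interactively, then 'c' to continue

    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b"Hello World"]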

Alternatively, if you're looking for something slightly more full-featured (but still very lightweight) try Flask.
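
For comparison, a minimal Flask app (Flask is a third-party package: pip install flask) is barely longer than the wsgiref version:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World"

if __name__ == "__main__":
    app.run(port=8000)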

(Also posted over on robinwinslow.uk).

Read more
liam zheng

(Original author: Liu Xiaoguo)

In the earlier tutorial "Creating a dianping Scope (Qt JSON) on Ubuntu OS" we showed how to develop a Scope for the Ubuntu platform in C++, and the article "Designing an Ubuntu Scope with golang" showed how to do the same in Go. Today we will show how to develop a Scope in JavaScript. For web developers this is undoubtedly good news: you can build a Scope without having to learn yet another language. More about Scope development can be found here.

1. Installation

First of all, note that JavaScript support for Scope development is available from Ubuntu 15.04 (vivid) onwards. Before starting, you must install the SDK as described in the article "Ubuntu SDK installation", and then install the JS Scope development tools:

$ sudo apt install unity-js-scopes-dev
$ unity-js-scopes-tool setup

Note that these commands can only be run after the Ubuntu SDK has been installed, and the chroots of the SDK installation must be complete. With that done, all the required tools are in place.

2. JS Scope development documentation

No development can happen without technical documentation. The JS Scope development documentation can be found in the early build, and you can also get it by installing the unity-js-scopes-doc package.

 

3. Creating our Scope

A. Web service API

As an example we will use the Baidu weather API, available at:

http://api.map.baidu.com/telematics/v3/weather?output=json&ak=DdzwVcsGMoYpeg5xQlAFrXQt&location=%E5%8C%97%E4%BA%AC

Opening the link above returns output in JSON format:

{"error":0,"status":"success","date":"2016-01-18","results":[{"currentCity":"北京","pm25":"13","index":[{"title":"穿衣","zs":"寒冷","tipt":"穿衣指数","des":"天气寒冷,建议着厚羽绒服、毛皮大衣加厚毛衣等隆冬服装。年老体弱者尤其要注意保暖防冻。"},{"title":"洗车","zs":"较适宜","tipt":"洗车指数","des":"较适宜洗车,未来一天无雨,风力较小,擦洗一新的汽车至少能保持一天。"},{"title":"旅游","zs":"一般","tipt":"旅游指数","des":"天气较好,温度稍低,而且风稍大,让您感觉有些冷,会对外出有一定影响,外出注意防风保暖。"},{"title":"感冒","zs":"极易发","tipt":"感冒指数","des":"天气寒冷,昼夜温差极大且空气湿度较大,易发生感冒,请注意适当增减衣服,加强自我防护避免感冒。"},{"title":"运动","zs":"较不宜","tipt":"运动指数","des":"天气较好,但考虑天气寒冷,风力较强,推荐您进行室内运动,若在户外运动请注意保暖并做好准备活动。"},{"title":"紫外线强度","zs":"弱","tipt":"紫外线强度指数","des":"紫外线强度较弱,建议出门前涂擦SPF在12-15之间、PA+的防晒护肤品。"}],"weather_data":[{"date":"周一 01月18日 (实时:-8℃)","dayPictureUrl":"http://api.map.baidu.com/images/weather/day/qing.png","nightPictureUrl":"http://api.map.baidu.com/images/weather/night/qing.png","weather":"晴","wind":"北风3-4级","temperature":"-4 ~ -11℃"},{"date":"周二","dayPictureUrl":"http://api.map.baidu.com/images/weather/day/qing.png","nightPictureUrl":"http://api.map.baidu.com/images/weather/night/duoyun.png","weather":"晴转多云","wind":"微风","temperature":"-1 ~ -8℃"},{"date":"周三","dayPictureUrl":"http://api.map.baidu.com/images/weather/day/duoyun.png","nightPictureUrl":"http://api.map.baidu.com/images/weather/night/yin.png","weather":"多云转阴","wind":"微风","temperature":"0 ~ -7℃"},{"date":"周四","dayPictureUrl":"http://api.map.baidu.com/images/weather/day/yin.png","nightPictureUrl":"http://api.map.baidu.com/images/weather/night/duoyun.png","weather":"阴转多云","wind":"微风","temperature":"-3 ~ -6℃"}]}]}

Our Scope needs to parse this JSON output and present the results.

 

B. Creating a basic scope

In this section we create a JS Scope using one of the templates provided with the Ubuntu SDK. First, open the SDK and select "New File or Project":

In the final steps, you must repeat the same configuration for every Kit you selected in order to complete project generation. You can then run the Scope (click the green button at the bottom left of the SDK):

The result, shown below, is nothing special yet. By default it displays a weather Scope, and you can type the name of a city you are interested in to get its current weather. You can also select the Desktop or Ubuntu Desktop SDK kit at the bottom left of the SDK screen to run it in a desktop environment. To run on a phone, you must select the Ubuntu SDK for armhf kit:

 

Project overview and npm integration

The demo above generated a scope project. Let's first look at the project structure:

liuxg@liuxg:~/release/chinaweatherjs$ tree
.
├── chinaweatherjs.apparmor
├── CMakeLists.txt
├── CMakeLists.txt.user
├── manifest.json.in
├── po
│   ├── chinaweatherjs.pot
│   ├── CMakeLists.txt
│   ├── Makefile.in.in
│   ├── POTFILES.in
│   └── POTFILES.in.in
└── src
    ├── chinaweatherjs.js
    ├── CMakeLists.txt
    ├── data
    │   ├── chinaweatherjs.ini.in
    │   ├── chinaweatherjs-settings.ini.in
    │   ├── icon.png
    │   └── logo.png
    ├── etc
    └── node_modules
        ├── last-build-arch.txt
        └── unity-js-scopes
            ├── bin
            │   └── unity-js-scopes-launcher
            ├── index.js
            ├── lib
            │   └── scope-core.js
            └── unity_js_scopes_bindings.node

8 directories, 20 files

As this structure shows, the core file is src/chinaweatherjs.js, while node_modules contains the required libraries. If you have done some Scope development before, reusing this file to build your Scope is very simple. If you have never developed a Scope before, read on.

 

npm integration

Attentive developers will have noticed the directory called node_modules: the framework used by JS Scopes is npm + Scope. You can easily add any npm package your Scope project needs using unity-js-scopes-tool, with the following command:

$ unity-js-scopes-tool install <path/to/project/src/node_modules> <npm package>

This command installs any npm package you need into the project. If you are not yet familiar with npm, please see https://www.npmjs.com/.

 

API overview

In this section we introduce the API used and how to implement our Scope.

Basic architecture of a JavaScript Scope

To connect to the Scope runtime, a Scope only needs to follow a few simple rules:

  • Import the JavaScript Scope module into your code
  • Set up your Scope's runtime context

In code, these steps are simply:

var scopes = require('unity-js-scopes')
scopes.self.initialize({}, {});

Once imported, the unity-js-scopes core module is the entry point for interacting with the Scope runtime, which sets up the Scope, interacts with the Dash, and displays the results produced by the user's interaction with the Scope.

In the initialization code above, the "self" property is used for this interaction; it refers to the context of the currently running Scope. You can see the following code in the index.js file mentioned above:

Object.defineProperty(
    module.exports,
    "self",
    {
        get: function() {
            if (! self) {
                self = new Scope();
            }
            return self;
        },
    });

Besides defining a number of the Scope's runtime elements, the runtime context also lets you inspect the current Scope's settings and receive the changes produced when the scope runtime environment changes.

Runtime elements

Now we can return to the Scope code and start defining the behaviour of some important runtime functions.

Once the Scope has connected to the runtime and been launched by the user, the scope runtime sends over all the actions produced by the user. These actions are eventually dispatched to the API functions the Scope defined during initialize.

The Scope can define these API functions selectively; they reflect the most important steps triggered at runtime. The most important runtime callbacks are:

  • run: called when a scope is ready to run.
  • start: called when a scope is about to start.
  • stop: called when a scope is about to stop.
  • search: called when the user requests a search. The runtime provides this call with all the information needed for the search; the developer's task is to push all possible results back to the runtime by interacting with it. You can also control how these results are displayed.
  • preview: displays a preview of a result shown by search above. The runtime provides all the information the preview needs.

A simple template looks like this:

var scopes = require('unity-js-scopes')
scopes.self.initialize({}, {
    run: function() {
        console.log('Running...');
    },
    start: function(scope_id) {
        console.log('Starting scope id: ' + scope_id + ', ' + scopes.self.scope_config)
    },
    search: function(canned_query, metadata) {
        return null
    },
    preview: function(result, metadata) {
        return null
    },
});

Each scope runtime callback corresponds to a user interaction, and the scope runtime expects the scope to send back an object describing that key interaction.

For example, for the search callback it expects the scope to send back an object called SearchQuery; this object defines the behaviour when the user performs a search.

The SearchQuery object can define a run callback, invoked when the search happens, and a cancel callback, invoked when a search is stopped.

The scope runtime also passes in an object called SearchReply, which can be used to push results to the scope runtime.

This interaction pattern is the core pattern running through the whole design of scopes and the scope runtime.

Pushing search results

The most central search interaction described above is the scope pushing its results to the scope runtime. Results are pushed through the SearchReply, which expects data of type CategorisedResult to be created and pushed to the scope runtime. The result object lets our scope define information such as the title, icon, uri and so on.

An extra feature of a CategorisedResult is that, when creating it, you can specify the layout in which the result is displayed. This layout is defined jointly by the Category and CategoryRenderer objects. Below is an example used in the weather scope. To fetch data from the Baidu weather API, we must redefine the variables in the template:

var query_host = "api.map.baidu.com"
var weather_path = "/telematics/v3/weather?output=json&ak=DdzwVcsGMoYpeg5xQlAFrXQt&location=" 
var URI = "http://www.weather.com.cn/html/weather/101010100.shtml"; 

The search method in initialize is defined as follows:

                search: function(canned_query, metadata) {
                    return new scopes.lib.SearchQuery(
                                canned_query,
                                metadata,
                                // run
                                function(search_reply) {
                                    var qs = canned_query.query_string();
                                    if (!qs) {
                                        qs = "北京"
                                    }

                                    console.log("query string: " + qs);

                                    var weather_cb = function(response) {
                                        var res = '';

                                        // Another chunk of data has been received, so append it to res
                                        response.on('data', function(chunk) {
                                            res += chunk;
                                        });

                                        // The whole response has been received
                                        response.on('end', function() {
                                            // console.log("res: " + res);

                                            r = JSON.parse(res);

                                            // Let's get the detailed info
                                            var request_date = r.date
                                            console.log("date: " + request_date);

                                            var city = r.results[0].currentCity;
                                            console.log("city: " + city);

                                            var pm25 = r.results[0].pm25
                                            console.log("pm25: " + pm25)

                                            var category_renderer = new scopes.lib.CategoryRenderer(JSON.stringify(WEATHER_TEMPLATE));
                                            var category = search_reply.register_category("Chineweather", city, "", category_renderer);

                                            try {
                                                r = JSON.parse(res);
                                                var length = r.results[0].weather_data.length
                                                console.log("length: " + length)

                                                for (var i = 0; i < length; i++) {
                                                    var categorised_result = new scopes.lib.CategorisedResult(category);

                                                    var date = r.results[0].weather_data[i].date
                                                    console.log("date: "+  date);

                                                    var dayPictureUrl = r.results[0].weather_data[i].dayPictureUrl;
                                                    console.log("dayPictureUrl: " + dayPictureUrl);

                                                    var nightPictureUrl = r.results[0].weather_data[i].nightPictureUrl;
                                                    console.log("nightPictureUrl: " + nightPictureUrl);

                                                    var weather = r.results[0].weather_data[i].weather;
                                                    console.log("weather: " + weather);

                                                    var wind = r.results[0].weather_data[i].wind;
                                                    console.log("wind: " + wind);

                                                    var temperature = r.results[0].weather_data[i].temperature;
                                                    console.log("temperature: " + temperature);

                                                    categorised_result.set("weather", weather);
                                                    categorised_result.set("wind", wind);
                                                    categorised_result.set("temperature", temperature);

                                                    categorised_result.set_uri(URI);
                                                    categorised_result.set_title("白天: " + date );
                                                    categorised_result.set_art(dayPictureUrl);
                                                    categorised_result.set("subtitle", weather);
                                                    search_reply.push(categorised_result);

                                                    categorised_result.set_title("夜晚: " + date );
                                                    categorised_result.set_art(nightPictureUrl);
                                                    search_reply.push(categorised_result);

                                                }

                                                // We are done, call finished() on our search_reply
//                                              search_reply.finished();
                                            }
                                            catch(e) {
                                                // Forecast not available
                                                console.log("Forecast for '" + qs + "' is unavailable: " + e)
                                            }
                                        });
                                    }

                                    console.log("request string: " + query_host + weather_path + qs);

                                    http.request({host: query_host, path: weather_path + encode_utf8(qs)}, weather_cb).end();
                                },

                                // cancelled
                                function() {
                                });
                },

 

Previewing search results

Once search results have been pushed to the scope runtime and displayed, the user can tap a result to request a preview of it. The scope runtime displays it through the preview callback defined in the scope.

Just as described for search above, the scope runtime expects the scope to return a PreviewQuery object as the bridge of the interaction. This object must define a run and a cancel function; their semantics are the same as in search above, so we won't repeat them here.

A preview has two key elements: column layouts and preview widgets. As their names suggest, the column layout elements define how the preview components are laid out on the preview page, while the preview widgets are what the page is composed of.

With that understood, a preview widget and the data it is bound to are associated through an "id". Here is the preview implementation for the Baidu weather scope:

  preview: function(result, action_metadata) {
                    return new scopes.lib.PreviewQuery(
                                result,
                                action_metadata,
                                // run
                                function(preview_reply) {
                                    var layout1col = new scopes.lib.ColumnLayout(1);
                                    var layout2col = new scopes.lib.ColumnLayout(2);
                                    var layout3col = new scopes.lib.ColumnLayout(3);
                                    layout1col.add_column(["imageId", "headerId", "temperatureId", "windId"]);

                                    layout2col.add_column(["imageId"]);
                                    layout2col.add_column(["headerId", "temperatureId", "windId"]);

                                    layout3col.add_column(["imageId"]);
                                    layout3col.add_column(["headerId", "temperatureId", "windId"]);
                                    layout3col.add_column([]);

                                    preview_reply.register_layout([layout1col, layout2col, layout3col]);

                                    var header = new scopes.lib.PreviewWidget("headerId", "header");
                                    header.add_attribute_mapping("title", "title");
                                    header.add_attribute_mapping("subtitle", "subtitle");

                                    var image = new scopes.lib.PreviewWidget("imageId", "image");
                                    image.add_attribute_mapping("source", "art");

                                    var temperature = new scopes.lib.PreviewWidget("temperatureId", "text");
                                    temperature.add_attribute_mapping("text", "temperature");

                                    var wind = new scopes.lib.PreviewWidget("windId", "text");
                                    wind.add_attribute_mapping("text", "wind");

                                    preview_reply.push([image, header, temperature, wind ]);
                                    preview_reply.finished();
                                },
                                // cancelled
                                function() {
                                });
                }

Running the Scope produces the following output:

The Scope can be deployed to a phone as follows:

 

Read more
David Callé

Today we announce the launch of our second Ubuntu Scopes Showdown! We are excited to bring you yet another engaging developer competition, where the Ubuntu app developer community brings innovative and interesting new experiences for Ubuntu on mobile devices.

Scopes in Javascript and Go were introduced recently and are the hot topic of this competition!

Contestants will have six weeks to build and publish their Unity8 scopes to the store using the Ubuntu SDK and Scopes API (JavaScript, Go or C++), starting Monday January 18th.

A great number of exciting prizes are up for grabs: a System76 Meerkat computer, BQ E5 Ubuntu phones, Steam Controllers, Steam Link, Raspberry Pi 2 and convergence packs for Ubuntu phones!

Find out more details on how to enter the competition. Good luck and get developing! We look forward to seeing your scopes on our Ubuntu homescreens!

Read more
April Wang

The 2015 China Mobile Global Partner Conference opened on December 14th, 2015 at the Poly World Trade Expo Center in Guangzhou. Themed "Mobile assisting Internet+" and hosted by China Mobile, the conference gathered hundreds of device, Internet and channel partners to show the past year's progress and the vision for cooperation in the year ahead. Canonical was invited as well, and at its own booth showed the latest highlights and progress of the Ubuntu operating system on phones and on smart hardware.

Besides the two Ubuntu phone models already on sale in Europe, the Ubuntu booth was built around three core highlights: the upcoming major functional update that lets an Ubuntu phone easily become a personal computer, the convergence technology; a phone currently being developed in cooperation with China Mobile; and a demonstration of Ubuntu running on smart IoT hardware.

Since the start of the year, three Ubuntu phone models have launched in Europe. The first, the BQ E4.5, was shown at MWC 2015, and the Meizu MX4 and BQ E5 followed. Beyond Europe, the BQ Ubuntu phones also went on sale in India and Russia this year, giving people there a first taste too.

At this conference, convergence - a feature that greatly expands what a phone can do - was demonstrated live. Connect a wireless mouse and keyboard over Bluetooth and a display over HDMI, and the phone's content is presented on the big screen; users can treat their phone as a PC and complete tasks that are hard or impossible on a phone alone, such as working with spreadsheets. This is where mobile devices are heading: Canonical proposed turning the phone into a personal computer as early as 2013 and put the project on a crowdfunding platform. Although the campaign did not reach its funding goal, it clearly proved the blurring boundary between mobile devices and traditional PCs, and the market demand that already exists. At this conference visitors could finally try it in person. The technology is still in Alpha, but those who like to tinker can already experience this pioneering Ubuntu technology on a Nexus 4 phone or a Nexus 7 tablet.

The highlight on the other side of the booth was a hardware device being developed with China Mobile. Besides Ubuntu's usual Scopes, it showed several Scopes aggregating content from China Mobile's Migu platform, including its reading, music and cloud services. Beyond content, the real showpiece was the RCS technology presented on the device. RCS (Rich Communication Services) lets phone users send and receive much richer data than traditional SMS: voice and video chat, pictures, even self-destructing messages. Quite a few operators abroad have already implemented RCS; China Mobile is the first Chinese operator to officially develop it, and this conference was the first time the technology was shown on an Ubuntu phone - a significant highlight indeed.

As an open-source operating system, Ubuntu runs not only on clouds, PCs and phones; it has been present in the smart IoT world for some time. Many early robotics and drone pioneers build on Ubuntu, and Ubuntu is the software platform base of the Open Source Robotics Foundation. Dell's Edge Gateway 5000 Series and DJI's Manifold onboard computer both run Ubuntu, and the Manifold was also on show at the conference.

The China Mobile Global Partner Conference ran for three days and showcased a complete ecosystem of mobile hardware, software, systems and chips. Canonical was glad of the opportunity to show all the attending partners Ubuntu's progress in 2015, the direction of future mobile development, and its long-term plans in the smart IoT industry.
 

 

Read more
Benjamin Zeller

In the last couple of weeks, we had to completely rework the packaging for the SDK tools and jump through hoops to bring the same experience to everyone, regardless of whether they are on the LTS or the development version of Ubuntu. It was not easy, but we are finally ready to put this beauty into developers' hands.

The two new packages are called “ubuntu-sdk-ide” and “ubuntu-sdk-dev” (applause now please).

The official way to get the Ubuntu SDK installed is from now on by using the Ubuntu SDK Team release PPA:

https://launchpad.net/~ubuntu-sdk-team/+archive/ubuntu/ppa

Releasing from the archive with this new way of packaging is sadly not possible yet: in Debian and Ubuntu, Qt libraries are installed into a standard location that does not allow installing multiple minor versions next to each other. But since both the new QtCreator and the Ubuntu UI Toolkit require a more recent version of Qt than the one the last LTS has to offer, we had to improvise and ship our own Qt. Unfortunately, that also blocks us from using the archive as a release path.

If you have the old SDK installed, the default QtCreator from the archive will be replaced with a more recent version. However, apt refuses to automatically remove the packages from the archive, so that is something that needs to be done manually, ideally before the upgrade:

sudo apt-get remove qtcreator qtcreator-plugin*

The next step is to add the PPA and install the package.

sudo add-apt-repository ppa:ubuntu-sdk-team/ppa \
    && sudo apt update \
    && sudo apt dist-upgrade \
    && sudo apt install ubuntu-sdk

That was easy, wasn’t it :).

Starting the SDK IDE is just as before: run qtcreator or ubuntu-sdk directly, or launch it from the dash. We tried not to break old habits and just reused the old commands.

However, there is something completely new: an automatically registered Kit called the "Ubuntu SDK Desktop Kit". That Kit consists of the most recent UITK and the Qt used on the phone images, which means it offers a way to develop and run apps easily even on an LTS Ubuntu release. Awesome, isn't it Stuart?

The old qtcreator-plugin-ubuntu package is going to be deprecated and will most likely be removed in one of the next Ubuntu versions. Please make sure to migrate to the new release path to always get the most recent versions.

Read more

As a follow-up to our previous post A Fast Thumbnailer for Ubuntu, we have published a new tutorial to help you make the most of this new SDK feature in your apps.

You will learn how to generate on-demand thumbnails for pictures, video and audio files by simply importing the module in your QML code and slightly tweaking your use of Image components.

Read the tutorial ›

Read more
Thibaut Rouffineau

The Eclipse Foundation has become a new home for a number of IoT projects. For the newcomers in the IoT world it’s always hard to see the forest for the trees in the number of IoT related Eclipse projects. So here is a first blog to get you started with IoT development using Eclipse technology.

The place to start with IoT development is MQTT (Message Queuing Telemetry Transport). MQTT is a messaging protocol used to send information between your Things and the cloud. It's a bit like the REST API of the IoT world: it's standardised and supported by most clients, servers and IoT Backend as a Service (BaaS) vendors (AWS IoT, IBM Bluemix, Relayr, Evrything to name a few).

If you’re not familiar with MQTT here is a quick rundown of how it works:

  • MQTT was created for efficient and lightweight message exchanges between Things (embedded devices / sensors).

  • An MQTT client is typically running on the embedded device and sends messages to an MQTT broker located on a server.

  • MQTT messages are made of two fields: a topic and a message.

  • MQTT clients can send (publish, in MQTT lingo) messages on a specific topic. Typically a light in my kitchen would send a message of this type to indicate it's on: topic=”Thibaut/Light/Kitchen/Above_sink/pub” message=”on”.

  • MQTT clients can listen (subscribe, in MQTT lingo) to messages on a specific topic. Typically the light in my kitchen would await the instruction to turn off by subscribing to the topic ”Thibaut/Light/Kitchen/Above_sink/sub” and waiting for the message ”turn_off”.

  • MQTT brokers listen to incoming messages and retransmit the messages to clients subscribed to a specific topic. In this way it resembles a multicast network.

  • Most MQTT brokers run in the cloud, but increasingly brokers can be found on IoT gateways, in order to filter messages and apply local rules for emergency or privacy reasons. For example, a typical local rule in my house: if the presence sensor in the kitchen reports that no one is there, send a message telling the light to switch off. The rules engine would look like: if we receive message topic=”Thibaut/presence_sensor/Kitchen/pub” message=”No presence”, then send a message on topic=”Thibaut/Light/Kitchen/Above_sink/sub” with message=”turn_off”.

  • BaaS vendors would typically offer a simple rules engine sitting on top of the MQTT broker, even though most developers would probably build their rules within their code. Your choice!

  • To get started, Eclipse provides multiple MQTT clients under the Paho project

  • To get started with your own broker Eclipse provides an MQTT broker under the Mosquitto project

  • Communication between MQTT clients and brokers supports different levels of authentication, from none, through username/password, to public/private keys

  • When using a public MQTT broker (like the Eclipse sandbox) your messages will be visible to everyone who subscribes to your topics, so if you're going to do anything confidential make sure you have your own MQTT broker (either through a BaaS or by running your own on a server).

That’s all there is to know about MQTT! As you can see it’s both simple and powerful which is why it’s been so successful and why so many vendors have implemented it to get everyone started with IoT.
And now it's your turn to get started! To help out, here's a quick example on GitHub that shows how to get the Paho Python MQTT client running on Ubuntu Core and talking to the Eclipse Foundation MQTT sandbox server. Have a play with it and tell us what you've created!
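
If you'd like a taste before cloning that repo, here is a minimal sketch (not the GitHub example itself) using the paho-mqtt Python package (pip install paho-mqtt), connecting to the Eclipse sandbox broker and reusing the illustrative kitchen-light topic from above:

import paho.mqtt.client as mqtt

BROKER = "iot.eclipse.org"  # the public Eclipse sandbox broker
TOPIC = "Thibaut/Light/Kitchen/Above_sink/sub"

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection is established
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    print(msg.topic + ": " + msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)

# Publish the "turn_off" instruction from the rules-engine example above
client.publish(TOPIC, "turn_off")
client.loop_forever()  # blocks, dispatching the callbacks above

Remember that anything published to the sandbox is publicly visible, as noted in the list above.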

Read more
April Wang

- DJI's Manifold is an embedded high-performance onboard computer for flight platforms

- Seamlessly compatible with the DJI Matrice M100 flight platform, it optimizes a drone's real-time data analysis and greatly improves computing efficiency, unleashing the full potential of the flight platform.

DJI, the global pioneer in aerial imaging systems, has released Manifold, an embedded high-performance onboard computer designed specifically for flight platforms. Together with the DJI Onboard SDK, Manifold offers convenient, easy-to-use new capabilities that let developers unleash their creativity and build more powerful drone applications for vertical industries.

Michael Perry, DJI's Director of Strategic Partnerships, said: "Manifold opens a new era of intelligent flight platforms. As a smart collaboration hub linking ground equipment and airborne devices, Manifold can provide solutions for complex industry applications. We look forward to the eye-opening applications developers will create with it."

Manifold supports a wide range of third-party sensors: on the Matrice M100, developers can use it to connect infrared cameras, meteorological research instruments and geographic information collection devices, and collect and analyse data in real time during flight.

Manifold runs Canonical's Ubuntu operating system and supports CUDA, OpenCV and ROS. It is equipped with an NVIDIA Tegra embedded processor, combining a quad-core ARM Cortex-A15 CPU with a Kepler-architecture GPU, which gives Manifold not only powerful image processing capabilities but also efficient parallel computing. Manifold also lends itself to artificial intelligence fields such as computer vision and deep learning, and provides a rich set of interfaces, including USB, Ethernet and HDMI, for connecting sensors, displays and other peripherals.

Mark Murphy, Canonical's VP of Smart Devices and Global Strategic Alliances, said: "We are delighted to be working with DJI. Canonical and DJI share the same vision of advancing technology and paving the way forward for developers."

Manifold ships with Ubuntu 14.04 LTS and goes on worldwide pre-sale today in the DJI online store, priced at RMB 2,999 in mainland China. For more details, visit https://developer.dji.com/cn/manifold/

Read more
April Wang

TC Beijing Hackathon

Ubuntu has already held two hackathons in China, and this time, at the invitation of TechCrunch China, we had the chance to join the TC Beijing Hackathon as a sponsor. The event was of course bigger and grander; it let us meet more Ubuntu friends and introduced Ubuntu to more like-minded developers. Best of all, we ran into old friends we had met at earlier events!

The hackathon took place at Hi-Park in Wukesong, Beijing. Special praise goes to the ability and stamina of TechCrunch China's TechNode crew, who turned this indoor basketball court into a hi-tech powerhouse overnight. It happened to be Halloween, and strange incidents kept occurring on site... just kidding: the venue was decked out for a Halloween party, which gave it a livelier feel than the usual hackathon.

The challenges this time were a mix-and-match. There were three challenge tasks, each with its own topic and submission requirements and a chance at several special prizes; developers could also demo any project they liked and still win generous gifts prepared by the organizers. The Ubuntu task was a fairly open one: teams could take it on by developing an app or a Scope for the Ubuntu phone, or by building any smart IoT project on snappy Ubuntu Core.

TC Beijing hackathon challenges

The hackathon formally entered its hacking phase at 1pm on the first day, and submissions opened at 9:30am the next morning, so the actual coding time was only a little over 20 hours. Demos formally started at 10am that day, and 29 teams successfully presented their work. Here are two of the projects built for the Ubuntu challenge; we hope to see all the entries go live in the days ahead.

Musicor:

A two-person team building for those of you (night owls) who struggle to fall asleep. This Ubuntu phone app helps you drift off by playing music, and it pairs with a wristband: based on the band's detection of your sleep state, the app adjusts the volume, so the music stops once you are asleep, sparing you the pain of getting up to switch it off just as you get drowsy. By cleverly combining different smart devices, it solves a problem we have all run into. I'm already looking forward to downloading it from the Ubuntu store.

SnapChirp:

As the name suggests, this one has something to do with our snappy Ubuntu Core. It is a cross-platform app (Ubuntu, Android and iOS) that uses snappy Ubuntu Core and audio to measure the distance between smart devices; in short, a tape measure for your gadgets. It sounds simple, but with smart devices becoming ever more common it hints at many more possible uses. Can you think of some?

snapchirp

Winning a prize at a round-the-clock hackathon is of course exciting, but even more exciting is the new starting point it opens up.

TC Beijing Hackathon

Read more
April Wang

Phone update: OTA-7

What's new in the latest Ubuntu phone update:

Scopes

- Improved social media features; "like" and re-share are now supported

Browser

- Added search history
- Improved context menu with a download-link option
- HTTP basic authentication support

Gallery

- SVG format support
- The Soundcloud web app can play in the background

Bug fixes

- Fixed test.mmrow exploit
- https://launchpad.net/canonical-devices-system-image/+milestone/ww40-2015
- Fixed UI freezes (FD leaks)
- Crash reports are no longer published by default on the stable channel
- Fixed the QML cache and restored consistent app startup times
- The browser uses less memory by default and avoids white screens in web apps
- Use the proximity sensor to automatically turn off the backlight during calls
Read more
April Wang

Ubuntu's path to convergence

Original author: Richard Collins

With the first smartphone that ships a full Ubuntu desktop interface about to reach the market, Ubuntu device convergence is becoming reality.

Only when a smartphone gives users the same experience as the computer they normally use does it truly take on the dual role of mobile phone and personal computer. That is our starting point for true smartphone convergence: using a smartphone to give the many thousands of users who know the Ubuntu desktop well the same Ubuntu PC experience. Simply put, what users expect from a personal computer must also be available on their smartphone. This includes:

- Easy multitasking across multiple windows
- A full set of desktop applications supporting mobility and productivity, plus thin client support
- Integrated services with desktop notifications
- The ability to manage applications and quickly open frequently used ones
- Easy browsing of documents, and creating and managing documents and folders
- Responsive applications, developed for both touch and pointer input, that adjust their UI presentation to the device environment
- Comprehensive system controls, with access to the underlying OS where needed
- A unified app store with a range of compatible third-party services
- Using the phone's telephony and SMS apps from the desktop interface to communicate

The road to OS convergence began with Unity 8. Unity 8 is Ubuntu's own user interface and presentation framework; it is intended to run on all Ubuntu devices from the same underlying code base, supporting a common development infrastructure for apps and services. The goal of Unity 8 is to run as the primary presentation framework on any Ubuntu smart product.

This means applications get something no other operating system offers: a single visual framework and a set of tools that let an application run on any type of Ubuntu smart device. Apps developed for mobile can easily be extended to a desktop presentation with support for pointer input; our SDK gives mobile app developers the tools to create desktop versions of those apps. Likewise, desktop developers can use our SDK to extend and enhance their applications for mobile. Convergence opens a whole new set of scenarios for developers, and our SDK provides the basic tools to make their applications work across any interface with ease.

The app you see and use on an Ubuntu phone and on the Ubuntu desktop is exactly the same app, driven by exactly the same code. Ubuntu does not need to know whether an app was written for a mobile or a desktop presentation; the app automatically adopts the appropriate interface for the environment of the device it runs on. Third-party developers write their code once for Ubuntu, and the app runs across the different Ubuntu interfaces.

I have long argued that the evolution of the smartphone into a converged device that delivers a personal computer experience answers a real industry need. But a smartphone or tablet that is truly converged, designed to combine mobile and desktop productivity, can only be delivered by an operating system built on a single, fully controlled code base.
Read more
April Wang

The app-enabled spider robot

This is a guest post written by the Erle Robotics team for the Startup Stories series of blog posts, which looks at why and how innovative companies use Ubuntu technology.

The Erle-Spider is the first legged drone powered by ROS and running Snappy Ubuntu Core. The smart robot features a 900 MHz quad-core ARM Cortex-A7 processor, runs Linux natively, and embeds several onboard sensors. With its six legs, it aims to meet the growing demand for robot kits in learning, research and development, while remaining lightly regulated. The drone can also get into hard-to-reach places such as pipes and disaster areas, carry cameras, and supports Wi-Fi, Bluetooth, 3G and 4G to provide connectivity wherever needed.

This six-legged Linux computer is powered by Snappy Ubuntu Core and connects to Canonical's cloud-based app store, letting users develop and sell robot behaviours and drone applications. Soon the drone will gain features such as computer vision algorithms and different dynamic models and sensor implementations.

The Erle-Spider project has launched on Indiegogo (https://www.indiegogo.com/projects/erle-spider-the-ubuntu-drone-with-legs/) and will ship this Christmas at a price of just 399 USD.

Read more
Shuduo

After Victor Palau wrote a new blog post about his PiGlow API snap, I tried operating the PiGlow LEDs and learned the following ways to interact with them.

a. Use python:

python3 -c 'from urllib.request import urlopen; print(urlopen("http://REALIP:8000/v1/on", data=b""))'

b. Use curl:

curl -i -X POST "http://REALIP:8000/v1/on"

c. Use an HTML web page in a web browser:

<head>
<title>test piglow</title>
</head>
<body>
<form action="http://192.168.0.151:8000/v1/flare" method="post">
<input type=submit value=flare>
</form>
</body>

Read more
Zsombor Egri

Adaptive page layout made flexible

A few weeks ago Tim posted a nice article about Adaptive page layouts made easy. It is my turn now to continue the series, with the hope that you will all agree on the title.

Ladies and Gentlemen, we have good news and (slightly) bad news to announce about the AdaptivePageLayout. If this blog were interactive, I'd ask you which to start with, and most probably you would say the bad news, as it is always good to get the cold shower first and then have a sunbath. Sorry folks, this time I'll start with the good news.

The good news

We’ve added a column configurability API to the AdaptivePageLayout! From now on you can configure more than two columns in your layout, and for each column you can configure the minimum, maximum and preferred sizes as well as whether to fill the remaining width of the layout or not. And even more, if the minimum and maximum values of the column configuration differs, the column can be resized with mouse or touch. See the following video demonstrating the feature.

<commercials>
And all this is possible right now, right here, only with Ubuntu UI Toolkit!
</commercials>

You can configure any number of column configurations, with conditions for when each should be applied. The one-column mode doesn't need to be configured; it is applied automatically when none of the specified column configuration conditions apply. However, if you wish, you can still configure the single-column mode, in case you want to apply a minimum width value for the column. Note however that the minimum width configuration will not (yet) be applied to the application's minimum resizable width, as you can observe in the video above.

The video above was made based on the sample code from Tim’s post, with the following additions:

AdaptivePageLayout {
    id: layout
    // [...]
    layouts: [
        // configure two columns
        PageColumnsLayout {
            when: layout.width > units.gu(80)
            PageColumn {
                minimumWidth: units.gu(20)
                maximumWidth: units.gu(60)
                preferredWidth: units.gu(40)
            }
            PageColumn {
                fillWidth: true
            }
        },
        // configure minimum size for single column
        PageColumnsLayout {
            when: true
            PageColumn {
                minimumWidth: units.gu(20)
                fillWidth: true
            }
        }
    ]
}

The full source code is on lp:~zsombi/+junk/AdaptivePageLayoutMadeFlexible.

The bad news

Oh, yes, this is the time you guys start to get mad. But let’s see how bad it is going to be this time.

We started to apply the AdaptivePageLayout in a few core applications, when we realized that the UI was getting blocked when Pages with heavy content were added to the columns. As pages were created synchronously, we would have had to redo each app's Page content management to load at least partially asynchronously using Loaders. That seemed to be a really bad omen for the component, so we decided to introduce an API break in the AdaptivePageLayout addPageTo{Current|Next}Column() functions: if the second argument is a file URL or a Component, the functions now return an incubator object which can be used to track the loading completion. In the case of an existing Page instance, as you already have it, the functions return null. More on how to use incubators in QML can be read at http://doc.qt.io/qt-5/qml-qtqml-component.html#incubateObject-method.

A code snippet to catch page completion would then look like

var incubator = layout.addPageToNextColumn(thisPage, Qt.resolvedUrl(pageDocument));
if (incubator && incubator.status == Component.Loading) {
    incubator.onStatusChanged = function(status) {
        if (status == Component.Ready) {
            // incubator.object contains the loaded Page instance
            // do whatever you wish with the Page
            incubator.object.title = "Dynamic Page";
        }
    }
}

Of course, if you want to set up the Page properties with some parameters, you can do it in the good old way, by specifying the parameters in the function, i.e.

addPageToNextColumn(thisPage, Qt.resolvedUrl(pageDocument), {title: "Dynamic Page"})

You need the incubator approach if you want to create bindings on properties of the page, which cannot be done with the creation parameters.

 

So, the bad news is not so bad after all, is it? That's why I started with the good news ;)

More “bad” news to come

Oh, yes, we have not finished yet with the bad news. From now on, pages added to the columns are loaded asynchronously by default, except the very first page, which is still loaded synchronously. The good news: not for long ;) We are planning to enable asynchronous loading of the primary page as well, and most probably you will get a signal triggered when the page is loaded. That way you will be able to show something else while the first page is loading: an animation, another splash screen, or the Flying Dutchman, whatever :)


Stay tuned! We’ll be back!

 

Read more
David Planella

Snappy Ubuntu + Mycroft = Love

This is a guest post from Ryan Sipes, CTO of the Mycroft project, explaining how snappy Ubuntu will enable them to deliver a secure and open AI for everyone.

When we first undertook the Mycroft project, dubbed the “AI For Everyone”, we knew we would face interesting challenges. We were creating a voice-controlled platform not only for assisting you in your daily life with weather, news updates, calendar reminders, and answers to your questions - but also a hub which would allow you to control your Internet of Things, specifically in the form of home automation. Managing all these devices through a seamless user experience requires a strong backbone for developers, and this is where snappy Ubuntu Core works wonders.

Since choosing to base our open source, open hardware product, Mycroft, on snappy Ubuntu Core, we have found the platform to be amazing. Being able to build and deliver apps easily through Snappy packages makes for a quick and painless packaging experience, with only a short time needed to get up to speed and start creating your own. We've taken advantage of this and are planning to use Snappy packages as the main delivery method for apps on our platform. Want to install the Spotify app on Mycroft? Just install the Snappy package, which you'll be able to do with just a click.

But Snappy Core's usefulness goes beyond creating packages: the ability to do transactional updates of apps makes testing and stability easier. We've found the ability to roll back an update to be critical in ensuring that our platform is working when it needs to, and it has also made it possible to test for bugs on versions that we are unsure about - and roll back when there is serious breakage. As we continue to learn more, we are ever more impressed with this feature of Snappy.

We're going to be leveraging snappy Ubuntu Core and "Snaps" to deliver applications to Mycroft, and when talking about a platform that sits in your home and can install third-party software, an important conversation about privacy is necessary. We are doing our best to ensure that users' critical data and interactions with Mycroft are kept private, and Snappy makes our job easier. Having a great deal of control over the security policies of apps, and being able to make applications run in a sandbox, allows us to take measures to ensure the core system isn't compromised. In a world where you interact with lots of IoT devices every day, security is paramount, and snappy Ubuntu Core doesn't let you down.

In case you couldn't tell from the paragraphs above, the Mycroft team is ecstatic to be using such an awesome technology on which to build our open source artificial intelligence and home automation platform. But one thing I didn't talk about is the awesome community surrounding Ubuntu and the passionate people working for Canonical who have poured their time into this amazing project - and that, above all, is the best reason for using Snappy Core.

If you are interested in learning more about Mycroft, please check out our Kickstarter and consider backing the project. We’ve only got a few days left, but we promise that we will continue to keep everyone posted about our experiences as we continue to use Snappy Core while we work on the #AIForEveryone.

I want AI for everyone too! >

Read more
Zoltán Balogh

The Next Generation SDK

Up until now the basic architecture of the SDK IDE and tools packaging was that we have packaged and distributed the QtCreator IDE and our Ubuntu plugins as separate distro packages which strongly depend on the Qt available in the same release.

Since 14.04 we have been jumping through hoops to provide the very same developer experience from a single development branch of the SDK projects. Just to give a quick picture on what we have available in the last few releases (note that 1.3 UITK is not yet released):

14.04 Trusty: Qt 5.2.1, QtCreator 3.0.1, UI Toolkit 0.1
14.10 Utopic: Qt 5.3, QtCreator 3.1.1, UI Toolkit 1.1
15.04 Vivid: Qt 5.4.1, QtCreator 3.1.1, UI Toolkit 1.2
15.10 Wily: Qt 5.4.2, QtCreator 3.5.0, UI Toolkit 1.3

Life could have been easier had we stuck to one stable Qt and QtCreator and based our SDK on them. Obviously that was not a realistic option, as phone development needed the most recent Qt and our friend Kubuntu required a hot new engine under its hood too. So Qt moved quickly forward and the SDK followed. Of course it was all beneficial, as new Qt releases brought us bugfixes, new features and improved performance.

But on the way we came to realize that continuously backporting the UITK and the QtCreator plugins to older releases and the LTS was simply not going to be possible. It went fine for some time, but the more API breaks new Qt and QtCreator releases brought, the more problems we had to face. Some people have asked why we don't backport the latest Qt releases to the LTS or to the stable Ubuntu. As an idea it may sound good, but changing the Qt under an LTS application from 5.2.1 to 5.4.2 would certainly break that application. So it is simply not cool to mess around with such fundamental bits of a stable and long-term-supported release.

The only option we had was to decouple the SDK from the archive release of Qt and build it as a standalone package without any external Qt dependencies. That way we could provide the exact same experience and tools to all developers regardless if they are playing safe on Trusty/LTS or enjoy the cutting edge on the daily developed release of Wily.

The idea manifested in a really funny project. The source tree of the project is pretty empty: only cmake and the debian/rules take care of the job. The builder pulls the latest stable Qt, QtCreator and UITK, builds and integrates the libdbusmenu-qt and appmenu-qt5 projects, and deploys the SDK IDE. The package itself is super skinny. Unlike the old model, where QtCreator pulled in most of the Qt modules as dependencies, this package contains all it needs, and its size is an impressive 36MB. Cheap. Just the way I like it. Plus, this package already contains the 1.3 UITK, as our QtCreator plugin (Devices Tab) is using it. So in fact we are just one step from enabling desktop application development on 14.04 LTS with the same UI Toolkit as we use on the commercial phone devices. And that is a super hot idea.

The Ubuntu SDK IDE project lives here: https://launchpad.net/ubuntu-sdk-ide

If you want to check out how it is done:

$ bzr branch lp:ubuntu-sdk-ide

Since we were considering such a big facelift of the SDK, I thought why not make the change much bigger. Some might remember the discussion on the Ubuntu Phone mailing list about the possibility of improving Kit creation in the IDE. We have been playing with the idea since then, and I think it is now a good time to unleash the static chroots.

The basic idea is that creating the builder chroots at runtime is a super slow and fragile process. Bootstrapping the click chroot already takes a long time, and installing the SDK API packages (all the libs and dev packages with headers) into the chroot is also time-consuming. So why not create these root filesystems in advance and provide them as single installable packages?

This is exactly what we have done. The base of the API packages is the Vivid core image. It is small and contains only the absolutely necessary packages; we install the SDK libs, dev packages and development tools on the core image and configure the Overlay PPA too. So the final image is pretty much equivalent to the image on a freshly updated device out there. It means the developer can build and test against the same API set as is available on the devices.

These API packages are still huge. Their size is around 500MB, so on a slow connection it still takes ages to download, but still it is way faster than bootstrapping a 1.6GB chroot package by package.

Each API package contains a single tar.gz file, and the package's post-install script puts the content of this tar.gz in the right place and wires it in the way it should be. Once the package is installed, the new Kit will be automatically recognized by the IDE.

One important note on these API packages! If you already have an armhf 15.04 Kit (click chroot) on your system when you install one, your original Kit will not be removed but simply renamed to backup-[timestamp]-[original name]. So do not worry if you have customized Kits; they are safe.

The Ubuntu SDK API project is only a packaging project with a simple script to take care of the dirty details. The project is hosted here: https://launchpad.net/ubuntu-sdk-api-15.04

And if you want to see what is in it just do

$ bzr branch lp:ubuntu-sdk-api-15.04  

The release candidate packages are available from the Tools Development PPA of the SDK team: https://launchpad.net/~ubuntu-sdk-team/+archive/ubuntu/tools-development

How to test these packages?

$ sudo add-apt-repository ppa:ubuntu-sdk-team/tools-development -y

$ sudo apt-get update

$ sudo apt-get install ubuntu-sdk-ide ubuntu-sdk-api-tools

$ sudo apt-get install ubuntu-sdk-api-15.04-armhf ubuntu-sdk-api-15.04-i386

After that look for the Ubuntu SDK IDE in the dash.

Read more
April Wang

The hackathon brought me to Shenzhen for the first time, in the blazing August heat; the hot and humid city really made my glasses drop (I never stopped sweating, and they really did keep sliding down my nose). The event was held at the Huaqiangbei Maker Center, in the Huaqiangbei shopping district of Futian. Weekday or weekend, the streets below always seem to be packed and buzzing. Built by the Huaqiang Group, the center is China's first comprehensive innovation and entrepreneurship ecosystem platform providing one-stop services for founders. Its first phase covers 5,000 square metres in the roof garden on the 7th floor of Huaqiang Plaza Tower B, a garden retreat amid the Huaqiangbei bustle. The whole space is covered in street-art-style graffiti, and you can feel the creative energy just by standing in it; choosing it for a hackathon was only natural.

Canonical has always believed that the best way to encourage innovation is to put the technology innovators need into their hands. So besides the Ubuntu phone platform, this time we also brought Ubuntu Snappy Core, a secure and easy-to-use operating system technology for smart hardware. In the TechTalk session we gave a hands-on introduction to developing for it with KVM; if you missed it, you can download the slides and watch the video here. Teams combining the Ubuntu phone platform with Snappy technology were eligible for a special IoT prize, which made the event all the more interesting.

At 10:30am on the 22nd the countdown began and the hackathon formally entered its hacking phase: 30 hours straight, with no food, no drink, no breaks and no sleep.

Just kidding - there was plenty to eat, drink and enjoy, including late-night hotpot and table football.

Since this was a hackathon, the highlight was of course the projects this hacking party produced. Let me pick a few of the on-site demos to share.

QML Git-OSC, developed by the OSChina team, is a QML-based Ubuntu phone app that lets programmers browse the details and code of their repos on Git@OSC straight from an Ubuntu phone. As an app tailored to people who write code, the team won the best quick-start prize: a Cherry mechanical keyboard.

iFace was the second demo. Presenter Mago introduced, with understated humour, an app for "this face-obsessed era" that uses your looks as the tool: log in by face verification, then go online and chat. Such a zeitgeisty, chatty team quite unsurprisingly took away the eloquence prize sponsored by Ubuntu Kylin: a portable speaker.

The most eye-catching team of the hackathon, E Minor, produced two projects in the 30 hours: LibreOffice Impress Remote, and Project MrRobot, which I had been looking forward to since day one. Impress Remote is exactly what its name says: it turns your Ubuntu phone into a remote control for your Impress presentations, simple but extremely practical. Project MrRobot is an app that drives the adorable Rapiro robot from an Ubuntu phone, controlling it by voice, buttons or a shake of the phone. Both projects are fully open source, and you can find their code here and here. The team walked away with our best-looking prize (an Ubuntu backpack) and the best geek prize (a portable projector specially sponsored by ASUS); special thanks to Ms Qin Xiahong, VP of Lingyou Tech and one of our on-site judges, for presenting the award. Project MrRobot was also picked up by Softpedia the day after the event.

IoT Ranger is an app made for those who cannot stop worrying about their computers: it lets Ubuntu phone users monitor the running state of the computers at home at any time. This Cordova-based Ubuntu phone app cleverly uses a web service framework running in a KVM environment, successfully combining Ubuntu Snappy Core technology with Ubuntu phone app development. It was beyond doubt the winner of this hackathon's special IoT prize, and duly took home the BeagleBone Black we had prepared.

There were several more demos on site that I won't introduce one by one; you will be able to find write-ups of every project on the hackathon page of the Ubuntu developer site (cn.developer.ubuntu.com). In just 30 short hours we witnessed so much brilliance that we cannot help looking forward to the next one. We hope every team gets its project deployed and established in the wider world. Writing code is hard work, but chatting through the night with like-minded people may be the most satisfying part of all. Here are some photos of the event T-shirts made on site.

Finally, thanks once more to our special sponsor ASUS, who beyond the special geek prize also provided generous sign-in and demo prizes; to the online and offline co-organizers and community platforms that made this hackathon possible (Git@OSC, SegmentFault, OSChina, KaiYuanShe, Linux Eden, Linux China, Linuxtoy.org, QTCN, Meego Nanjiquan, Shenzhen Open Innovation Lab, Tencent Open Platform, Ubuntu Kylin, Sino-Finnish Design Park); to our on-site judges; and to our venue sponsor, the Huaqiangbei Maker Center, whose relaxed and cheerful atmosphere sparked everyone's boundless creativity.

 

Read more
Michi Henning

A Fast Thumbnailer for Ubuntu

Over the past few months, James Henstridge, Xavi Garcia Mena, and I have implemented a fast and scalable thumbnailing service for Ubuntu and Ubuntu Touch. This post explains how we did it, and how we achieved our performance and reliability goals.

Introduction

On a phone as well as the desktop, applications need to display image thumbnails for various media, such as photos, songs, and videos. Creating thumbnails for such media is CPU-intensive and can be costly in bandwidth if images are retrieved over the network. In addition, different types of media require the use of different APIs that are non-trivial to learn. It makes sense to provide thumbnail creation as a platform API that hides this complexity from application developers and, to improve performance, to cache thumbnails on disk.

This article explains the requirements we had and how we implemented a thumbnailer service that is extremely fast and scalable, and robust in the face of power loss or crashes.

Requirements

We had a number of requirements we wanted to meet in our implementation.

  • Robustness
    In the event of a crash, the implementation must guarantee the integrity of on-disk data structures. This is particularly important on a phone, where we cannot expect the user to perform manual recovery (such as cleaning up damaged files). Because batteries can run out at any time, integrity must be guaranteed even in the face of power loss.
  • Scalability
    It is common for people to store many thousands of songs and photos on a device, so the cache must scale to at least tens of thousands of records. Thumbnails can range in size from a few kilobytes to well over a megabyte (for “thumbnails” at full-screen resolution), so the cache must deal efficiently with large records.
  • Re-usability
    Persistent and reliable on-disk storage of arbitrary records (ranging in size from a few bytes to potentially megabytes) is a common application requirement, so we did not want to create a cache implementation that is specific to thumbnails. Instead, the disk cache is provided as a stand-alone C++ API that can be used for any number of other purposes, such as a browser or HTTP cache, or to build an object file cache similar to ccache.
  • High performance
    The performance of the thumbnailer directly affects the user experience: it is not nice for the customer to look at “please wait a while” icons in, say, an image gallery while thumbnails are being loaded one by one. We therefore had to have a high-performance implementation that delivers cached thumbnails quickly (on the order of a millisecond per thumbnail on an Arm CPU). An efficient implementation also helps to conserve battery life.
  • Location independence and extensibility
    Canonical runs an image server at dash.ubuntu.com that provides album and artist artwork for many musicians and bands. Images from this server are used to display artwork in the music player for media that contains ID3 tags, but does not embed artwork in the media file. The thumbnailer must work with embedded images as well as remote images, and it must be possible to extend it for new types of media without unduly disturbing the existing code.
  • Low bandwidth consumption
    Mobile phones typically come with data caps, so the cache has to be frugal with network bandwidth.
  • Concurrency and isolation
    The implementation has to allow concurrent access by multiple applications, as well as concurrent access from a single implementation. Besides needing to be thread-safe, this means that a request for a thumbnail that is slow (such as downloading an image over the network) must not delay other requests.
  • Fault tolerance
    Mobile devices lose network access without warning, and users can add corrupt media files to their device. The implementation must be resilient to partial failures, such as incomplete network replies, dropped connections, and bad image data. Moreover, the recovery strategy for such failures must conserve battery and avoid repeated futile attempts to create thumbnails from media that cannot be retrieved or contains malformed data.
  • Security
    The implementation must ensure that applications cannot see (or, worse, overwrite) each other’s thumbnails or coerce the thumbnailer into delivering images from files that an application is not allowed to read.
  • Asynchronous API
    The customers of the thumbnailer are applications that are written in QML or Qt, which cannot block in the UI thread. The thumbnailer therefore must provide a non-blocking API. Moreover, the application developer should be able to get the best possible performance without having to use threads: concurrency must be internal to the implementation (which can put threads to use intelligently where they make sense), rather than the application throwing threads at the problem in the hope that this will make things faster when, in fact, it may just add overhead.
  • Monitoring
    The effectiveness of a cache cannot be assessed without statistics to show hit and miss rates, evictions, and other basic performance data, so it must provide a way to extract this information.
  • Error reporting
    When something goes wrong with a system service, typically the only way to learn about the problem is to look at log messages. In case of a failure, the implementation must leave enough footprints behind to allow someone to diagnose a failure after the fact with some chance of success.
  • Backward compatibility
    This project was a rewrite of an earlier implementation. Rather than delivering a “big bang” piece of software and potentially upsetting existing clients, we incrementally changed the implementation such that existing applications continued to work. (The only pre-existing interface was a QML interface that required no change.)

System architecture

Here is a high-level overview of the main system components.

A Fast Thumbnailer for Ubuntu

External API

To the outside world, the thumbnailer provides two APIs.

One API is a QML plugin that registers an image provider (an implementation of QQuickAsyncImageProvider). This allows the caller to pass a URI that encodes a query for a local or remote thumbnail at a particular size; if the URI matches the registered provider, QML transfers control to the entry points in our plugin.

The second API is a Qt API that provides three methods:

QSharedPointer<Request> getThumbnail(QString const& filePath,
                                     QSize const& requestedSize);
QSharedPointer<Request> getAlbumArt(QString const& artist,
                                    QString const& album,
                                    QSize const& requestedSize);
QSharedPointer<Request> getArtistArt(QString const& artist,
                                     QString const& album,
                                     QSize const& requestedSize);

The getThumbnail() method extracts thumbnails from local media files, whereas getAlbumArt() and getArtistArt() retrieve artwork from the remote image server. The returned Request object provides a finished signal, and methods to test for success or failure of the request and to extract a thumbnail as a QImage. The request also provides a waitForFinished() method, so the API can be used synchronously.
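
For illustration, here is roughly how a caller might use this API. (This is a sketch only: the Thumbnailer class name and the Request accessors isValid() and image() are assumptions based on the description above.)

// Sketch: the Thumbnailer class name and the isValid()/image()
// accessors are illustrative, inferred from the description above.
Thumbnailer thumbnailer;
QSharedPointer<Request> request =
    thumbnailer.getThumbnail("/home/user/Music/track.mp3", QSize(256, 256));
QObject::connect(request.data(), &Request::finished, [request] {
    if (request->isValid()) {                 // Did the request succeed?
        QImage thumbnail = request->image();  // Ready to hand to the UI.
    } else {
        // Handle the failure (for example, show a placeholder icon).
    }
});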

Thumbnails are delivered to the caller at the size they were requested, subject to a (configurable) 1920-pixel limit. As an escape hatch, requests with width and height of zero deliver artwork at its original size, even if it exceeds the 1920-pixel limit. The scaling algorithm preserves the original aspect ratio and never scales up from the original, so the returned thumbnails may be smaller than their requested size.

DBus service

The thumbnailer is implemented as a DBus service with two interfaces. The first interface provides the server-side implementation of the three methods of the external API; the second interface is an administrative interface that can deliver statistics, clear the internal disk caches, and shut down the service. A simple tool, thumbnailer-admin, allows both interfaces to be called from the command line.

To conserve resources, the service is started on demand by DBus and shuts down after 30 seconds of idle time.
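
On-demand start relies on standard DBus service activation. The activation file looks something like this (the bus name and executable path here are illustrative, not necessarily the real ones):

# Illustrative DBus activation file; name and path are examples only.
[D-BUS Service]
Name=com.example.Thumbnailer
Exec=/usr/lib/thumbnailer/thumbnailer-service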

Image extraction

Image extraction uses an abstract base class. This interface is independent of media location and type. The actual image extraction is performed by derived implementations that download images from the remote server, extract them from local image files, or extract them from local streaming media files. This keeps knowledge of image location and encoding out of the main caching and error handling logic, and allows us to support new media types (whether local or remote) by simply adding extra derived implementations.

Image extraction is asynchronous, with three implementations at present:

  • Image downloader
    To retrieve artwork from the remote image server, the service talks to an abstract base class with asynchronous download_album() and download_artist() methods (see the sketch after this list). This allows multiple downloads to run concurrently and makes it easy to add new local or remote image providers without disturbing the code for existing ones. A class derived from that abstract base implements a REST API with QNetworkAccessManager to retrieve images from dash.ubuntu.com.
  • Photo extractor
    The photo extractor is responsible for delivering images from local image files, such as JPEG or PNG files. It simply delegates that work to the image converter and scaler.
  • Audio and video thumbnail extractor
    To extract thumbnails from audio and video files, we use GStreamer. Due to reliability problems with some codecs that can hang or crash, we delegate the task to a separate vs-thumb executable. This shields the service from failures and also allows us to run several GStreamer pipelines concurrently without a crash of one pipeline affecting the others.
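
As a rough sketch, the downloader base class mentioned above might look like this. (Only download_album() and download_artist() are named in the text; the class names and signatures are illustrative.)

// Sketch of the abstract downloader interface (names are illustrative).
class ArtReply;  // Carries the downloaded bytes plus success/error state.

class ArtDownloader : public QObject
{
    Q_OBJECT
public:
    virtual QSharedPointer<ArtReply> download_album(
        QString const& artist, QString const& album, QSize const& size) = 0;
    virtual QSharedPointer<ArtReply> download_artist(
        QString const& artist, QString const& album, QSize const& size) = 0;
    virtual ~ArtDownloader() = default;
};

// A derived class implements the REST calls against dash.ubuntu.com
// with QNetworkAccessManager; further derived classes can add new sources.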

Image converter and scaler

We use a simple Image class with a synchronous interface to convert and scale different image formats to JPEG. The implementation uses Gdk-Pixbuf, which can handle many different input formats and is very efficient.

For JPEG source images, the code checks for the presence of EXIF data using libexif and, if it contains a thumbnail that is at least as large as the requested size, scales the thumbnail from the EXIF data. (For images taken with the camera on a Nexus 4, the original image size is 3264×1836, with an embedded EXIF thumbnail of 512×288. Scaling from the EXIF thumbnail is around one hundred times faster than scaling from the full-size image.)
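
In outline, the EXIF fast path might look like this (a sketch using libexif and gdk-pixbuf as described above; error handling is abbreviated, and the caller still has to compare the thumbnail against the requested size):

// Sketch: extract an embedded EXIF thumbnail with libexif, then decode
// it with gdk-pixbuf.
#include <libexif/exif-data.h>
#include <gdk-pixbuf/gdk-pixbuf.h>

GdkPixbuf* exif_thumbnail(char const* path)
{
    ExifData* ed = exif_data_new_from_file(path);
    if (!ed)
        return nullptr;                       // No EXIF data at all.
    GdkPixbuf* pixbuf = nullptr;
    if (ed->data && ed->size > 0)             // Embedded thumbnail present?
    {
        GdkPixbufLoader* loader = gdk_pixbuf_loader_new();
        gboolean ok = gdk_pixbuf_loader_write(loader, ed->data, ed->size, nullptr);
        ok = gdk_pixbuf_loader_close(loader, nullptr) && ok;
        if (ok)
        {
            pixbuf = gdk_pixbuf_loader_get_pixbuf(loader);
            if (pixbuf)
                g_object_ref(pixbuf);         // Keep it past the loader's life.
        }
        g_object_unref(loader);
    }
    exif_data_unref(ed);
    return pixbuf;  // Caller checks it is large enough, then scales it.
}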

Disk cache

The thumbnailer service optimizes performance and conserves bandwidth and battery by adopting a layered caching strategy.

Two-level caching with failure lookup

Internally, the service uses three separate on-disk caches:

  • Full-size cache
    This cache stores images that are expensive to retrieve (images that are remote or are embedded in audio and video files) at original resolution (scaled down to a 1920-pixel bounding box if the original image is larger). The default size of this cache is 50 MB, which is sufficient to hold around 400 images at 1920×1080 resolution. Images are stored in JPEG format (at a 90% quality setting).
  • Thumbnail cache
    This cache stores thumbnails at the size that was requested by the caller, such as 512×288. The default size of this cache is 100 MB, which is sufficient to store around 11,000 thumbnails at 512×288, or around 25,000 thumbnails at 256×144.
  • Failure cache
    The failure cache stores the keys for images that could not be extracted because of a failure. For remote images, this means that the server returned an authoritative answer “no such image exists”, or that we encountered an unexpected (non-authoritative) failure, such as the server not responding or a DNS lookup timing out. For local images, it means either that the image data could not be processed because it is damaged, or that an audio file does not contain embedded artwork.

The full-size cache exists because it is likely that an application will request thumbnails at different sizes for the same image. For example, when scrolling through a list of songs that shows a small thumbnail of the album cover beside each song, the user is likely to select one of the songs to play, at which point the media player will display the same cover in a larger size. By keeping full-size images in a separate (smallish) cache, we avoid performing an expensive extraction or download a second time. Instead, we create additional thumbnails by scaling them from the full-size cache (which uses an LRU eviction policy).

The thumbnail cache stores thumbnails that were previously retrieved, also using LRU eviction. Thumbnails are stored as JPEG at the default quality setting of 75%, at the actual size that was requested by the caller. Storing JPEG images (rather than, say, PNG) saves space and increases cache effectiveness. (The minimal quality loss from compression is irrelevant for thumbnails.) Because we store thumbnails at the size they are actually needed, we may have several thumbnails for the same image in the cache (each thumbnail at a different size). But applications typically ask for thumbnails in only a small number of sizes, and ask for different sizes for the same image only rarely. So, the slight increase in disk space is minor and amply repaid by applications not having to scale thumbnails after they receive them from the cache, which saves battery and achieves better performance overall.

Finally, the failure cache is used to stop futile attempts to repeatedly extract a thumbnail when we know that the attempt will fail. It uses LRU eviction with an expiry time for each entry.

Cache lookup algorithm

When asked for a thumbnail at a particular size, the lookup and thumbnail generation proceed as follows:

  1. Check if a thumbnail exists in the requested size in the thumbnail cache. If so, return it.
  2. Check if a full-size image for the thumbnail exists in the full-size cache. If so, scale the new thumbnail from the full-size image, add the thumbnail to the thumbnail cache, and return it.
  3. Check if there is an entry for the thumbnail in the failure cache. If so, return an error.
  4. Attempt to download or extract the original image for the thumbnail. If the attempt fails, add an entry to the failure cache and return an error.
  5. If the original image was delivered by the remote server or was extracted locally from streaming media, add it to the full-size cache.
  6. Scale the thumbnail to the desired size, add it to the thumbnail cache, and return it.

Note that these steps represent only the logical flow of control for a particular thumbnail. The implementation executes these steps concurrently for different thumbnails.
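
Expressed as code, the logical flow might look roughly like this (an illustrative sketch; all names are made up, and the real implementation runs these steps concurrently and asynchronously):

// Illustrative sketch of the six lookup steps (not the actual code).
Result lookup(Key const& key, QSize const& size)
{
    if (auto thumb = thumbnail_cache.get(key, size))
        return Result(*thumb);                     // Step 1: thumbnail cache hit.
    if (auto full = fullsize_cache.get(key))
        return scale_and_cache(*full, key, size);  // Step 2: scale from full size.
    if (failure_cache.contains(key))
        return Result::error();                    // Step 3: known failure.
    auto image = download_or_extract(key);         // Step 4: expensive path.
    if (!image)
    {
        failure_cache.put(key);
        return Result::error();
    }
    if (downloaded_or_streamed(*image))
        fullsize_cache.put(key, *image);           // Step 5: keep the full-size image.
    return scale_and_cache(*image, key, size);     // Step 6: scale, cache, return.
}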

Designing for performance

Apart from fast on-disk caches (see below), the thumbnailer must make efficient use of I/O bandwidth and threads. This means not only making things fast, but also not unnecessarily wasting resources such as threads, memory, network connections, or file descriptors. Provided that enough requests are made to keep the service busy, we never want it to wait for a download or image extraction to complete while there is something else it could do in the meantime, and we want it to keep all CPU cores busy. In addition, requests that are slow (because they require a download or a CPU-intensive image extraction) must not block requests that are queued up behind them if those requests would result in cache hits that could be returned immediately.

To achieve a high degree of concurrency without blocking on long-running operations while holding precious resources, the thumbnailer uses a three-phase lookup algorithm:

  1. In phase 1, we look at the caches to determine if we have a hit or an authoritative miss. Phase 1 is very fast. (It takes around a millisecond to return a thumbnail from the cache on a Nexus 4.) However, cache lookup can briefly stall on disk I/O or require a lot of CPU to extract and scale an image. To get good performance, phase 1 requests are passed to a thread pool with as many threads as there are CPU cores. This allows the maximum number of lookups to proceed concurrently.
  2. Phase 2 is initiated if phase 1 determines that a thumbnail requires download or extraction, either of which can take on the order of seconds. (In case of extraction from local media, the task is CPU intensive; in case of download, most of the time is spent waiting for the reply from the server.) This phase is scheduled asynchronously from an event loop. This minimizes task switching and allows large numbers of requests to be queued while only using a few bytes for each request that is waiting in the queue.
  3. Phase 3 is really a repeat of phase 1: if phase 2 produces a thumbnail, it adds it to the cache; if phase 2 does not produce a thumbnail, it creates an entry in the failure cache. By simply repeating phase 1, the lookup then results in either a thumbnail or an error.

If phase 2 determines that a download or extraction is required, that work is performed concurrently: the service schedules several downloads and extractions in parallel. By default, it will run up to two concurrent downloads, and as many concurrent GStreamer pipelines as there are CPUs. This ensures that we use all of the available CPU cores. Moreover, download and extraction run concurrently with lookups for phase 1 and 3. This means that, even if a cache lookup briefly stalls on I/O, there is a good chance that another thread can make use of the CPU.
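
A minimal sketch of this kind of bounded scheduling (not the actual code) could look like this:

#include <functional>
#include <queue>

// At most `limit` jobs run at once; the rest wait in a queue that is
// pumped whenever a job is submitted or finishes.
struct BoundedScheduler
{
    explicit BoundedScheduler(int limit) : limit(limit) {}

    void submit(std::function<void()> start_job)
    {
        waiting.push(std::move(start_job));
        pump();
    }

    void job_finished()  // Called (from the event loop) when a job completes.
    {
        --running;
        pump();
    }

private:
    void pump()
    {
        while (running < limit && !waiting.empty())
        {
            auto job = std::move(waiting.front());
            waiting.pop();
            ++running;
            job();  // Kicks off an asynchronous download or extraction.
        }
    }

    int limit;
    int running = 0;
    std::queue<std::function<void()>> waiting;
};

// For example: BoundedScheduler downloads(2), extractions(num_cpu_cores).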

Because slow operations do not block lookup, this also ensures that a slow request does not stall requests for thumbnails that are already in the cache. In other words, it does not matter how many slow requests are in progress: requests that can be completed quickly are indeed completed quickly, regardless of what is going on elsewhere.

Overall, this strategy works very well. For example, with sufficient workload, the service achieves around 750% CPU utilization on an 8-core desktop machine, while still delivering cache hits almost instantaneously. (On a Nexus 4, cache hits take a little over 1 ms while concurrent extractions or downloads are in progress.)

A re-usable persistent cache for C++

The three internal caches are implemented by a small and flexible C++ API. This API is available as a separate reusable PersistentStringCache component (see persistent-cache-cpp) that provides a persistent store of arbitrary key–value pairs. Keys and values can be binary, and entries can be large. (Megabyte-sized values do not present a problem.)

The implementation uses leveldb, which provides a very fast NoSQL database that scales to multi-gigabyte sizes and provides integrity guarantees. In particular, if the calling process crashes, all inserts that completed at the API level will be intact after a restart. (In case of a power failure or kernel crash, a few buffered inserts can be lost, but the integrity of the database is still guaranteed.)

To use a cache, the caller instantiates it with a path name, a maximum size, and an eviction policy. The eviction policy can be set to either strict LRU (least-recently-used) or LRU with an expiry time. Once a cache reaches its maximum size, expired entries (if any) are evicted first and, if that does not free enough space for a new entry, entries are discarded in least-recently-used order until enough room is available to insert a new record. (In all other respects, expired entries behave like entries that were never added.)

A simple get/put API allows records to be retrieved and added, for example:

#include <core/persistent_string_cache.h>  // Header name as shipped by persistent-cache-cpp.
#include <iostream>
#include <string>

using namespace std;

int main()
{
    auto c = core::PersistentStringCache::open(
        "my_cache", 100 * 1024 * 1024, core::CacheDiscardPolicy::lru_only);
    // Look for an entry and add it if there is a cache miss.
    string key = "Bjarne";
    auto value = c->get(key);
    if (value) {
        cout << key << ": " << *value << endl;
    } else {
        value = "C++ inventor";  // Provide a value for the key.
        c->put(key, *value);     // Insert it.
    }
}

Running this program prints nothing on the first run, and “Bjarne: C++ inventor” on all subsequent runs.

The API also allows application-specific metadata to be added to records, provides detailed statistics, supports dynamic resizing of caches, and offers a simple adapter template that makes it easy to store complex user-defined types without the need to clutter the code with explicit serialization and deserialization calls. (In a pinch, if iteration is not needed, the cache can be used as a persistent map by setting an impossibly large cache size, in which case no records are ever evicted.)

Performance

Our benchmarks indicate good performance. (Figures are for an Intel Ivy Bridge i7-3770k 3.5 GHz machine with a 256 GB SSD.) Our test uses 60-byte string keys. Values are binary blobs filled with random data (so they are not compressible), 20 kB in size on average with a standard deviation of 7,000 bytes, so the majority of values are 13–27 kB in size. The cache size is 100 MB, so it contains around 5,000 records.

Filling the cache with 100 MB of records takes around 2.8 seconds. Thereafter, the benchmark does a random lookup with an 80% hit probability. In case of a cache miss, it inserts a new random record, evicting old records in LRU order to make room for the new one. For 100,000 iterations, the cache returns around 4,800 “thumbnails” per second, with an aggregate read/write throughput of around 93 MB/sec. At a 90% hit rate, performance rises to around 7,100 records/sec. (Writes are expensive once the cache is full, due to the need to evict entries, which requires updating the main cache table as well as an index.)

Repeating the test with a 1 GB cache produces identical timings, so (within limits) performance remains constant for large databases.

Overall, performance is largely limited by bandwidth to disk: with a 7,200 rpm spinning disk, we measured around one third of the performance we saw with an SSD.

Recovering from errors

The overall design of the thumbnailer delivers good performance when things work. However, our implementation has to deal with the unexpected, such as network requests that do not return responses, GStreamer pipelines that crash, request overload, and so on. What follows is a partial list of steps we took to ensure that things behave sensibly, particularly on a battery-powered device.

Retry strategy

The failure cache provides an effective way to stop the service from endlessly trying to create thumbnails that, in an earlier attempt, returned an error.

For remote images, we know that, if the server has (authoritatively) told us that it has no artwork for a particular artist or album, it is unlikely that artwork will appear any time soon. However, the server may be updated with more artwork periodically. To deal with this, we add an expiry time of one week to the entries in the failure cache. That way, we do not try to retrieve the same image again until at least one week has passed (and only if we receive a request for a thumbnail for that image again later).

As opposed to authoritative answers from the image server (“I do not have artwork for this artist.”), we can also encounter transient failures. For example, the server may currently be down, or there may be some other network-related issue. In this case, we remember the time of the failure and do not try to contact the remote server again for two hours. This conserves bandwidth and battery power.

The device may also be disconnected from the network, in which case any attempt to retrieve a remote image is doomed. Our implementation returns failure immediately on a cache miss for a remote image if no network is present or the device is in flight mode. (We do not add an entry to the failure cache in this case.)

For local files, we know that, if an attempt to get a thumbnail for a particular file has failed, future attempts will fail as well. This means that the only way for the problem to get fixed is by modifying or replacing the actual media file. To deal with this, we add the inode number, modification time, and inode modification time to the key for local images. If a user replaces, say, a music file with a new one that contains artwork, we automatically pick up the new version of the file because its key has changed; the old version will eventually fall out of the cache.
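
A sketch of such a key for local files (the exact layout is illustrative):

#include <sys/stat.h>
#include <string>

// Build a cache key that changes whenever the file is modified or replaced.
std::string cache_key(std::string const& path)
{
    struct stat st;
    if (stat(path.c_str(), &st) != 0)
        return "";                               // Treat as a lookup failure.
    return path + '\0' +
           std::to_string(st.st_ino) + '\0' +    // Inode number.
           std::to_string(st.st_mtime) + '\0' +  // Modification time.
           std::to_string(st.st_ctime);          // Inode change time.
}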

Download and extraction failures

We monitor downloads and extractions for timely completion. (Timeouts for downloads and extractions can be configured separately.) If the server does not respond within 10 seconds, we abandon the attempt and treat it as a transient network error. Similarly, the vs-thumb processes that extract images from audio and video files can hang. We monitor these processes and kill them if they do not produce a result within 10 seconds.
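
The watchdog logic amounts to something like this. (Shown synchronously with QProcess for brevity; the real service monitors asynchronously, and the vs-thumb arguments here are illustrative.)

// Sketch: give the extractor subprocess a fixed time budget.
QString srcUri = "file:///home/user/Videos/clip.mp4";  // Illustrative.
QString outFile = "/tmp/thumb.jpg";                    // Illustrative.

QProcess extractor;
extractor.start("vs-thumb", QStringList() << srcUri << outFile);
if (!extractor.waitForFinished(10000))  // 10-second budget.
{
    extractor.kill();                   // Terminate the hung pipeline...
    extractor.waitForFinished();        // ...and reap it.
    // From here on, this is treated like any other extraction failure.
}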

Database corruption

Assuming an error-free implementation of leveldb, database corruption is impossible. However, in practice, an errant command could scribble over the database files. If leveldb detects that the database is corrupted, the recovery strategy is simple: we delete the on-disk cache and start again from scratch. Because the cache contents are ephemeral anyway, this is fine (other than slower operation until the working set of thumbnails makes it into the cache again).

Dealing with backlog

The asynchronous API provided by the service allows an application to submit an unlimited number of requests. Lots of requests happen if, for example, the user has inserted a flash card with thousands of photos into the device and then requests a gallery view for the collection. If the service’s client-side API blindly forwards requests via DBus, this causes a problem because DBus terminates the connection once there are more than around 400 outstanding requests.

To deal with this, we limit the number of outstanding requests to 200 and send another request via DBus only when an earlier request completes. Additional requests are queued in memory. Because this happens on the client side, the number of outstanding requests is limited only by the amount of memory that is available to the client.

A related problem arises if a client submits many requests for a thumbnail for the same image. This happens when, for example, the user looks at a list of tracks: tracks that belong to the same album have the same artwork. If artwork needs to be retrieved from the remote server, naively forwarding cache misses for each thumbnail to the server would end up re-downloading the same image several times.

We deal with this by maintaining an in-memory map of all remote download requests that are currently in progress. If phase 1 reports a cache miss, before initiating a download, we add the key for the remote image to the map and remove it again once the download completes. If further requests for the same image encounter a cache miss while that download is still in progress, they find its key in the map and are held until the download completes. We then schedule the held requests as usual and create their thumbnails from the image that was cached by the first request.
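
In essence, the bookkeeping might look like this (a simplified, illustrative sketch):

#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

struct Request;  // One pending client request.

std::mutex mtx;
std::unordered_map<std::string, std::vector<Request*>> in_flight;

// Returns true if the caller should start the download; false if an
// identical download is already in progress and the request was queued.
bool start_or_join(std::string const& key, Request* req)
{
    std::lock_guard<std::mutex> lock(mtx);
    auto [it, inserted] = in_flight.try_emplace(key);
    it->second.push_back(req);
    return inserted;
}

// When the download completes, collect the held requests; re-running
// phase 1 for each of them now hits the cache.
std::vector<Request*> download_done(std::string const& key)
{
    std::lock_guard<std::mutex> lock(mtx);
    auto node = in_flight.extract(key);
    return node ? std::move(node.mapped()) : std::vector<Request*>{};
}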

Security

The thumbnailer runs with normal user privileges. We use AppArmor’s aa_query_label() function to verify that the calling client has read access to a file it wants a thumbnail for. This prevents one application from accessing thumbnails produced by a different application, unless both applications can read the original file. In addition, we place the entire service under an AppArmor profile to ensure that it can write only to its own cache directory.

Conclusion

Overall, we are very pleased with the design and performance of the thumbnailer. Each component has a clearly defined role with a clean interface, which made it easy for us to experiment and to refine the design as we went along. The design is extensible, so we can support additional media types or remote data sources without disturbing the existing code.

We used threads sparingly and only where we saw worthwhile concurrency opportunities. Using asynchronous interfaces for long-running operations kept resource usage to a minimum and allowed us to take advantage of I/O interleaving. In turn, this extracts the best possible performance from the hardware.

The thumbnailer now runs on Ubuntu Touch and is used by the gallery, camera, and music apps, as well as for all scopes that display media thumbnails.

This article was originally published on Michi Henning's blog.

Read more
Daniel Holbach

Announcing UbuContest 2015

Have you read the news already? Canonical, the Ubucon Germany 2015 team, and the UbuContest 2015 team are happy to announce the first UbuContest! Contestants from all over the world have until September 18, 2015 to build and publish their apps and scopes using the Ubuntu SDK and Ubuntu platform. The competition has already started, so register your competition entry today! You don't have to create a new project; submit what you have and improve it over the next two months.

But we know it's not all about shiny new apps and scopes! A great platform also needs content, great design, testing, documentation, bug management, developer support, interesting blog posts, technology demonstrations and all of the other incredible things our community does every day. So we give you, our community members, the opportunity to nominate other community members for prizes!

We are proud to present five dedicated categories:

  1. Best Team Entry: A team of up to three developers may register up to two apps/scopes they are developing. The jury will assign points in categories including "Creativity", "Functionality", "Design", "Technical Level" and "Convergence". The top three entries with the most points win.

  2. Best Individual Entry: A lone developer may register up to two apps/scopes he or she is developing. The rest of the rules are identical to the "Best Team Entry" category.

  3. Outstanding Technical Contribution: Members of the general public may nominate candidates who, in their opinion, have done something "exceptional" on a technical level. The nominated candidate with the most jury votes wins.

  4. Outstanding Non-Technical Contribution: Members of the general public may nominate candidates who, in their opinion, have done something exceptional, but non-technical, to bring the Ubuntu platform forward. So, for example, you can nominate a friend who has reported and commented on all those phone-related bugs on Launchpad. Or nominate a member of your local community who did translations for Core Apps. Or nominate someone who has contributed documentation, written awesome blog articles, etc. The nominated candidate with the most jury votes wins.

  5. Convergence Hero: The "Best Team Entry" or "Best Individual Entry" contribution with the highest number of "Convergence" points wins. The winner in this category will probably surprise us in ways we have yet to imagine.

Our community judging panel members Laura Cowen, Carla Sella, Simos Xenitellis, Sujeevan Vijayakumaran and Michael Zanetti will select the winners in each category. Successful winners will be awarded items from a huge pile of prizes, including travel subsidies for the first-placed winners to attend Ubucon Germany 2015 in Berlin, four Ubuntu Phones sponsored by bq and Meizu, t-shirts, and bundles of items from the official Ubuntu Shop.

We wish all the contestants good luck!

Go to ubucontest.eu or ubucon.de/2015/contest for more information, including how to register and nominate folks. You can also follow us on Twitter @ubucontest, or contact us via e-mail at contest@ubucon.de.


Read more