Canonical Voices


8 Free Online Tech Courses

From Introduction to Linux to web development, CIO magazine covers eight online courses that are completely free.

Read more
Ben Howard

Snappy Launches

When we launched Snappy, we introduced it on Microsoft Azure [1], Google’s GCE [2], Amazon’s AWS [3], and our KVM images [4]. Immediately our developers were asking, “Where are the Vagrant images?” We launched those yesterday [5].

The final remaining question was “where are the images for <insert hypervisor>?” We had inquiries about VirtualBox and VMware Desktop/Fusion, and interest in VMware Air, Citrix XenServer, and others.

OVA to the rescue

OVA is an industry standard for cross-hypervisor image support. The OVA spec [6] allows you to import a single image to:

  • VMware products
    • ESXi
    • Desktop
    • Fusion
    • vSphere
  • VirtualBox
  • Citrix XenServer
  • Microsoft SCVMM
  • Red Hat Enterprise Virtualization
  • SUSE Studio
  • Oracle VM

Okay, so where can I get the OVA images?

The latest OVA image is available here [7]. From there, follow your hypervisor’s instructions for importing OVA images.

Or if you want a short URL,



Read more

This post provides the background for a deliberate and important decision in the design of the service that people often wonder about: while the service does support full versions in tag and branch names (as in “v1.2” or “v1.2.3”), the URL must contain only the major version, which gets mapped to the best matching version in the repository.

As will be detailed, there are multiple reasons for that behavior. The critical one is ensuring that all packages in a build tree that depend on the same API of a given dependency (different majors mean different APIs) may use the exact same version of that dependency. Without that, an application might easily end up with multiple copies of a dependency, unnecessarily and perhaps incorrectly.

Consider this example:

  • Application A depends on packages B and C
  • Package B depends on D 3.0.1
  • Package C depends on D 3.0.2

Under that scenario, when someone executes go get on application A, two independent copies of D would be embedded in the binary. This happens because both B and C have exact control over the version in use. When everybody can pick their own preferred version, it’s easy to end up with multiple copies.

The current implementation solves that problem by requiring that both B and C necessarily depend on the major version which defines the API version they were coded against. So the scenario becomes:

  • Application A depends on packages B and C
  • Package B depends on D 3.*
  • Package C depends on D 3.*

With that approach, when someone runs go get to fetch the application, it gets the newest version of D that is still compatible with both B and C (it might be 3.0.3, 3.1, etc.), and both use that one version. While by default this picks up the most recent version, the dependency might also be moved back to 3.0.2 or 3.0.1 without touching the code. So the approach in fact empowers the person assembling the binary to experiment with specific versions, and gives package authors a framework where the default behavior tends to remain sane.
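The “best matching version” selection described above can be sketched in a few lines of Go. This is a simplified illustration, not the service’s actual implementation; the helper name and the tag list are invented for the example:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// bestMatch picks the newest available version whose major component
// matches the requested major, mimicking the "v3 -> newest 3.x" mapping
// described above. Tags are "vMAJOR.MINOR.PATCH" strings.
func bestMatch(major int, tags []string) (string, bool) {
	type ver struct {
		parts [3]int
		raw   string
	}
	var candidates []ver
	for _, t := range tags {
		fields := strings.Split(strings.TrimPrefix(t, "v"), ".")
		v := ver{raw: t}
		for i := 0; i < len(fields) && i < 3; i++ {
			n, err := strconv.Atoi(fields[i])
			if err != nil {
				continue
			}
			v.parts[i] = n
		}
		if v.parts[0] == major {
			candidates = append(candidates, v)
		}
	}
	if len(candidates) == 0 {
		return "", false
	}
	// Sort ascending by (major, minor, patch); the last entry wins.
	sort.Slice(candidates, func(a, b int) bool {
		x, y := candidates[a].parts, candidates[b].parts
		if x[0] != y[0] {
			return x[0] < y[0]
		}
		if x[1] != y[1] {
			return x[1] < y[1]
		}
		return x[2] < y[2]
	})
	return candidates[len(candidates)-1].raw, true
}

func main() {
	tags := []string{"v3.0.1", "v3.0.2", "v3.1.0", "v2.7.0"}
	best, ok := bestMatch(3, tags)
	fmt.Println(best, ok) // prints "v3.1.0 true"
}
```

In the scenario above, both B and C asking for major 3 resolve to the same newest compatible tag, which is exactly what prevents duplicate copies of D.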

This is the most important reason why the service works like this, but there are others. For example, to encode the micro version of a dependency in import paths, the import paths of dependent code must be patched on every single minor release of the package (both internal and external to the package itself), and the code must be repositioned in the local system to please the go tool. This is rather inconvenient in practice.

It’s worth noting that the issues above describe the problem in terms of minor and patch versions, but the problem exists and is intensified when using individual source code revision numbers to refer to import paths, as it would be equivalent in this context to having a minor version on every single commit.

Finally, when you do want exact control over what builds, godep may be used as a complement to the service. That partnership offers exact reproducibility via godep, while the service gives people stable APIs they can rely upon over longer periods. A good match.

Read more
Ben Howard

I am pleased to announce initial Vagrant images [1, 2]. These images are bit-for-bit the same as the KVM images, but include a cloud-init configuration that allows Snappy to work within the Vagrant workflow.

Vagrant enables a cross-platform developer experience on Mac OS, Windows, or Linux [3].

Note: due to the way that Snappy works, shared file systems within Vagrant are not possible at this time. We are working on enabling shared file system support, but it will take us a little while.

If you want to use the Vagrant packaged in the Ubuntu archives, run the following in a terminal:

  • sudo apt-get -y install vagrant
  • cd <WORKSPACE>
  • vagrant box add snappy
  • vagrant init snappy
  • vagrant up
  • vagrant ssh
If you use Vagrant from [4] (i.e. on Windows or Mac, or to install the latest Vagrant), then you can run:
  • vagrant init ubuntu/ubuntu-core-devel-amd64
  • vagrant up
  • vagrant ssh

These images are a work in progress. If you encounter any issues, please report them to "" or ping me (utlemming) on



Read more

The Indian smartphone market grew 82% from a year ago and 27% over the preceding quarter, making it the second consecutive quarter of more than 80% year-on-year shipment growth for smartphones.

There were 23.3 million smartphone handsets shipped in the reporting quarter, comprising 32.1% of the overall mobile phone market, which touched 72.5 million units in the September quarter of 2014, recording 9% growth from a year ago and a 15% rise from the preceding quarter.


Read more

Improvements on

Early last year the service was introduced with the goal of encouraging Go developers to establish strategies that enable existing software to keep working while package APIs evolve. After the initial discussions and experimentation that went into defining the (simple) design and feature set of the service, it’s great to see the approach proving reasonable in practice, with steady growth in usage. Meanwhile, the service has been up and unchanged for several months while we learned more about which areas needed improvement.

Now it’s time to release some of these improvements:

Source code links

Thanks to Gary Burd, the service was improved to support custom source code links, which means all packages can now properly reference, for any given package version, the exact location of functions, methods, structs, etc. For example, function names in the documentation are clickable, and redirect to the correct source code line in GitHub.

Unstable releases

As detailed in the documentation, a major version must not have any breaking changes done to it so that dependent packages can rely on the exposed API once it goes live. Often, though, there’s a need to experiment with the upcoming changes before they are ready for prime time, and while small packages can afford to have that testing done locally, it’s usual for non-trivial software to have external validation with experienced developers before the release goes fully public.

To support that scenario properly, the service now allows the version string in exposed branch and tag names to be suffixed with “-unstable”. For example:

Such unstable versions are hidden from the version list in the package page, except for the specific version being looked at, and their use in released software is also explicitly discouraged via a warning message.

For the package to work properly during development, any imports (both internal and external to the package) must be modified to import the unstable version. While doing that by hand is easy, thanks to Roger Peppe’s govers there’s a very convenient way to do that.

For example, to use mgo.v2-unstable, run:


and to go back:


Repositories with no master branch

Some people have opted to omit the traditional “master” branch altogether and have only versioned branches and tags. Unfortunately, the service did not accept such repositories as valid. This has been fixed.

These changes are all live right now at

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20150113 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:


Status: Vivid Development Kernel

Both the master and master-next branches of our Vivid kernel have been
rebased to the v3.18.2 upstream stable kernel. This has also been
uploaded to the archive, i.e. 3.18.0-9.10. Please test and let us
know your results. We are also starting to track the v3.19 kernel on
our unstable branch and have pushed preliminary packages to our ppa.
Important upcoming dates:
Thurs Jan 22 – Vivid Alpha 2 (~1 week away)
Thurs Feb 5 – 14.04.2 Point Release (~3 weeks away)
Thurs Feb 26 – Beta 1 Freeze (~6 weeks away)

Status: CVEs

The current CVE status can be reviewed at the following link:

Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, as of today:

  • Lucid – Kernel prep week.
  • Precise – Kernel prep week.
  • Trusty – Kernel prep week.
  • Utopic – Kernel prep week.

    Details of currently opened tracking bugs:


    For SRUs, the SRU report is a good source of information:



    cycle: 09-Jan through 31-Jan
    09-Jan Last day for kernel commits for this cycle
    11-Jan – 17-Jan Kernel prep week.
    18-Jan – 31-Jan Bug verification; Regression testing; Release

Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more

Amazon EC2 was the most reliable compute service and Google Cloud Storage was the most reliable storage service.

The Laguna Beach, California, company tracks the status of more than 30 clouds, from AWS to Zettagrid:

Service provider                  Outages  Downtime
Amazon EC2                        20       2.41 Hours
Amazon S3                         23       2.69 Hours
Google Compute Engine             72       4.41 Hours
Google Cloud Storage              8        14.23 Minutes
Microsoft Azure Virtual Machines  92       40 Hours
Microsoft Azure Object Storage    141      10.97 Hours

Read More:

Read more
Nicholas Skaggs

PSA: Community Requests

As you plan your Ubuntu-related activities this year, I wanted to highlight an opportunity for you to request materials and funds to help make your plans a reality. The funds are donations made by other Ubuntu enthusiasts to support Ubuntu, and specifically to enable community requests. In other words, if you need help traveling to a conference to support Ubuntu, planning a local event, holding a hackathon, etc., the community donations fund can help.

Check out the funding page for more information on how to apply and the requirements. In short, if you are an Ubuntu member and want to do something to further Ubuntu, you can request materials and funding to help. The Global Jam is less than a month away; is your LoCo ready? Flavors, are you trying to plan events or hold other activities? I'd encourage all of you to submit requests if money or materials can help enable or enhance your efforts to spread Ubuntu. Here's to sharing the joy of Ubuntu this year!

Read more

Fatal Dream

The other day I dreamed that I died.

Well, the dream wasn't really about me dying... I died during the dream, as one detail among the events. It was a plot point, let's say, not the heart of the story.

I want to stress that it wasn't a nightmare, but a normal dream. Well, normal...

The story was that there was a virus loose, or a curse, or something like that, which spread by contact, and if you didn't infect someone else within a certain amount of time, you died. I remember someone infecting me, and after I infected a few people, I got careless and let the time run out. I realized it, I realized I was going to die... and at the moment of death there was a kind of "ffffssssup!" and nothing more; I was still there (I was aware of myself, I knew where I was and what was happening), only I was no longer attached to my body. The story got even weirder: for some reason I don't remember, I wanted to communicate with someone who was alive, so I learned and practiced (with the help of a couple of others) moving objects "in the physical world", until at one point someone summoned us to go somewhere, and I don't remember anything after that.

Anyway, what stands out is that it's the first time I've died during a dream, and it wasn't traumatic at all :p

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20150106 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

Status: Vivid Development Kernel

Both the master and master-next branches of our Vivid kernel have been
rebased to the v3.18.1 upstream stable kernel. We have also uploaded
our first 3.18-based kernel to the archive (3.18.0-8.9). Please test and let us
know your results. We are also starting to track the v3.19 kernel on
our unstable branch.
Important upcoming dates:
Fri Jan 9 – 14.04.2 Kernel Freeze (~3 days away)
Thurs Jan 22 – Vivid Alpha 2 (~2 weeks away)
Thurs Feb 5 – 14.04.2 Point Release (~4 weeks away)
Thurs Feb 26 – Beta 1 Freeze (~7 weeks away)

Status: CVEs

The current CVE status can be reviewed at the following link:

Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, as of today:

  • Lucid – Verification & Testing
  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • Utopic – Verification & Testing

    Details of currently opened tracking bugs:


    For SRUs, the SRU report is a good source of information:



    cycle: 12-Dec through 10-Jan
    12-Dec Last day for kernel commits for this cycle
    14-Dec – 20-Dec Kernel prep week.
    21-Dec – 10-Jan Bug verification; Regression testing; Release

Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Michael Hall

Whenever a user downloads Ubuntu from our website, they are asked if they would like to make a donation, and if so how they want their money used. When the “Community” option is chosen, that money is made available to members of our community to use in ways that they feel will benefit Ubuntu.

I’m a little late getting this report published, but it’s finally done. You can read the report here:

We pretty consistently spend less than we get in each quarter, which means we have money sitting around that could be used by the community. If you want to travel to an event, would like us to sponsor an event, need hardware for development or testing, or anything else that you feel will make Ubuntu the project and the community better, please go and fill out the request form.


Read more

Auld Lang Syne

we’ll tak’ a cup o’ kindness yet,
for auld lang syne.

– Die Roten Rosen, Auld Lang Syne

Eike already greeted the Year of Our Lady of Discord 3181 four days ago, but I’d like to take this opportunity to have a look at the state of the LibreOffice project — the bug tracker status that is.

By the end of 2014:


And a special “Thank You!” goes out to everyone who created one of the over 100 Easy Hacks written for LibreOffice in 2014, and to everyone who helped, mentored, or reviewed patches by new contributors to the LibreOffice project. Easy Hacks are a good way for someone curious about the LibreOffice code to get started in the project with the help of more experienced developers. If you are interested, you can find more information on Easy Hacks on the TDF wiki. Note that there are also Easy Hacks on Design-related topics and on topics related to QA.

If “I should contribute to LibreOffice once in 2015” wasn't part of your New Year's resolutions yet, you are invited to add it, as Easy Hacks might convince you that it's worthwhile and … easy.

Read more

[Original] Preparing for Ubuntu Phone Development Training





Students can follow the article “Ubuntu SDK Installation” to install their own Ubuntu system and SDK, then use the article “Creating your first Ubuntu for phone application” to verify that the installed environment is correct. This kind of installation usually requires a multi-boot setup on the computer, or a virtual machine (the emulator may not perform well inside a virtual machine).

If you want to make a Live USB drive dedicated to Ubuntu phone development,

please refer to the article “How to make an Ubuntu SDK Live USB drive” to create a bootable Live USB drive. The drive can be plugged directly into a computer's USB port to boot the Ubuntu system. It already contains the complete, ready-to-use SDK, so no additional software needs to be installed before developing.

a) Enable hardware virtualization in the BIOS; this makes the emulator run faster
b) In the BIOS, set the boot order so the USB drive boots first, or press the F12 key at startup and choose to boot Ubuntu from the USB drive

After the Ubuntu system boots, the Ubuntu SDK is already fully installed and developers can start developing right away. We recommend the article “Creating your first Ubuntu for phone application” to verify that the environment is set up correctly.



Developers unfamiliar with the Ubuntu phone can first watch the video “How to use the Ubuntu phone” to get to know it. For a deeper understanding of the Ubuntu SDK, watch the video “How to use the Ubuntu SDK (video)”.




Read the article “Tutorial: developing a Flickr application with the Ubuntu SDK”, and watch the video “QML development for Ubuntu phone applications (video)”. Slides: “Ubuntu application development”.

The tutorial source code is at: bzr branch lp:~liu-xiao-guo/debiantrial/flickr7

DeveloperNews RSS reader

First, read the articles “Creating an Ubuntu application from scratch: a small RSS reader” and “How to use conditional layouts in Ubuntu”. The video is “Developing Qt Quick QML applications on the Ubuntu platform (video)”.

The tutorial source code is at: bzr branch lp:~liu-xiao-guo/debiantrial/developernews4


4) Scope development

You can first watch the video “An introduction to Ubuntu Scopes and the development workflow” to learn about the Scope development workflow on Ubuntu OS.

The tutorial source code is at: bzr branch lp:~liu-xiao-guo/debiantrial/dianpianclient8





Author: UbuntuTouch, posted 2015-01-04 15:36:54

Read more


We create a simple QML application with the Ubuntu SDK:

import QtQuick 2.0
import Ubuntu.Components 1.1

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    id: main
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "com.ubuntu.developer.liu-xiao-guo.foregrounddetect"

    /*
       This property enables the application to change orientation
       when the device is rotated. The default is false.
    */
    //automaticOrientation: true

    // Removes the old toolbar and enables new features of the new header.
    useDeprecatedToolbar: false

    Page {

        Connections {
            target: Qt.application
            onActiveChanged: {
                // Log whether the application is now in the foreground
                console.log("active: " + Qt.application.active);
            }
        }
    }
}

The complete source code of the application is at: bzr branch lp:~liu-xiao-guo/debiantrial/foregrounddetect

Author: UbuntuTouch, posted 2015-01-05 10:56:25

Read more



Author: UbuntuTouch, posted 2015-01-13 15:08:31

Read more

[Original] How to make an Ubuntu SDK Live USB drive

For developers who want to develop Ubuntu phone applications or Scopes, but don't want to buy a new computer to install Ubuntu or reinstall Ubuntu on their own hard drive, a Live USB drive of the Ubuntu system is worth considering. The USB drive contains the following:

  • The Ubuntu Kylin 14.10 operating system
  • The Ubuntu SDK (including the pre-installed SDK, emulator, and build environment)

With this Live USB drive, developers don't need to install anything; just plug it into the computer's USB port. While the computer is booting, select our prepared USB drive as the boot device (press the “F12” key during startup). During boot, choose “Try Ubuntu Kylin without installing”.

Although this is an Ubuntu OS boot drive, it can also save the projects we create during development (stored in the Home directory) as well as some settings (such as saved Wi-Fi passwords).

When choosing a USB drive, it's best to pick a USB 3.0 drive and plug it into one of the computer's USB 3.0 ports; these are usually marked in blue. We recommend a good-quality, faster drive, which makes the system boot and run more smoothly. We are currently testing with a SanDisk CZ80, with good results. The USB drive needs 16 GB of storage.

To make the emulator run more smoothly and avoid a black screen, we need to enable hardware virtualization in the computer's BIOS: go into the BIOS settings and enable VT-x/AMD-V. You can refer to the article “Ubuntu SDK Installation” to check whether your computer supports virtualization.

If you would rather install an Ubuntu system on your own computer and develop on it, the article “Ubuntu SDK Installation” walks through installing the Ubuntu SDK step by step.

1) How to make the Live USB drive on an Ubuntu system



  • kylin-live-20150133.iso (md5sum 13cd61270bf98eb462dc0497df8eee33)
  • casper-rw-20150113.tar.bz2 (md5sum 8c69f94a03481275bf66aa883b69ae1b)
  • (a short README file)


In the Dash, type “usb” and launch the “Startup Disk Creator”.




Then run the bundled script as shown below, passing the USB drive's mount path as the argument.



liuxg@liuxg:~/usb$ ./ /media/liuxg/BD52-7153/


2) How to make the boot drive on Windows

Download the creation tool; it works much like the Linux tool.

When selecting the “Persistent file”, its size should be a non-zero value. When filling in “Step 2”, do not paste the copied string into that input box, otherwise the input box in “Step 3” will be grayed out. Instead, click the “Browse” button and enter the image path as follows:

After that, copy the casper-rw file to the root directory of the USB drive.

Note: if you only want to use the English version of Ubuntu, the following steps are not needed. If you want Chinese support, also copy the file to the root directory of the USB drive. After booting Ubuntu from the USB drive, run the following commands in a shell:

$ cd /cdrom/

$ sudo ./



We can plug our Live USB drive into a computer and use the article “Creating your first Ubuntu for phone application” to verify that we have a fully working Ubuntu SDK.

When starting the emulator, if a password is required, use the default password “0000”. If you need to change this password, go to “System Settings” inside the Ubuntu SDK emulator.

For application developers, the hotkey combination “Ctrl + Space” has a special use in Qt Creator. In Ubuntu, however, “Ctrl + Space” is used to switch between Chinese and English input methods. We suggest referring to the article “How to install the Sogou input method on Ubuntu OS with Qt Creator support” to redefine the key combination.

Known issues



We have found that the Lenovo E455 fails to boot; we currently suspect this is related to the AMD graphics driver, and the problem is still under investigation. If you run into this issue, install the 14.04 LTS release on the system along with the corresponding ubuntu-sdk package to study Ubuntu phone development; the basic concepts are all the same.

Note: if you plan to work on Ubuntu phone development for an extended time, we recommend installing an Ubuntu system on your computer, preferably Utopic (14.10), rather than learning in the Live environment: first, to guard against data loss; second, for a faster and smoother experience.

Author: UbuntuTouch, posted 2015-01-22 15:35:55

Read more
Colin Ian King

During idle moments in the final few weeks of 2014 I have been adding some more stressors and features to stress-ng, as well as tidying up the code and fixing some small bugs that crept in during the last development spin. Stress-ng aims to stress a machine with various simple torture tests to trip overheating and kernel race conditions.

The mmap stressor now has the '--mmap-file' option to use synchronous file-backed memory mapping instead of the default anonymous mapping, and the '--mmap-async' option enables asynchronous file mapping if desired.

For socket stressing, the '--sock-domain unix' option now allows AF_UNIX (aka AF_LOCAL) sockets to be used. This complements the existing AF_INET and AF_INET6 IPv4 and IPv6 protocols available with this stress test.

The CPU stressor now includes mixed integer and floating point methods, covering 32 and 64 bit integer mixes with the float, double and long double floating point types. The generated object code contains a nice mix of operations that should exercise various functional units in the CPU. For example, when running on a hyper-threaded CPU one notices a performance hit because these CPU stressor methods heavily contend on the CPU's math functional blocks.
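To make the idea concrete, here is a small illustrative sketch in Go (not taken from stress-ng itself, which is written in C) of a loop that interleaves 64-bit integer and floating point multiply/add operations; running one such loop per sibling hyper-thread contends for the shared math functional units. The constants and iteration count are arbitrary:

```go
package main

import "fmt"

// mixedStress interleaves 64-bit integer and float64 multiply/add
// operations in one loop, loosely mirroring the mixed cpu stressor
// methods described above.
func mixedStress(iters int) (uint64, float64) {
	i64 := uint64(0x12345678)
	f64 := 1.0
	for n := 0; n < iters; n++ {
		i64 = i64*31 + uint64(n) // integer multiply/add
		f64 = f64*1.000001 + 0.5 // floating point multiply/add
	}
	return i64, f64
}

func main() {
	i, f := mixedStress(1000000)
	// Print the accumulators so the loop is not optimized away.
	fmt.Println(i, f)
}
```

Because the integer and floating point updates are independent, a superscalar core can issue them in parallel; two hyper-threads running the same loop, however, compete for the same execution ports, which is the performance hit noted above.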

File-based locking has been extended with the new lockf stressor. This stresses multiple locking and unlocking operations on portions of a small file, and the default blocking mode can be turned into a CPU-consuming rapid polling retry with the '--lockf-nonblock' option.

The dup(2) system call is also now stressed with the new dup stressor. This repeatedly dup's a file descriptor opened on /dev/zero until all the free file slots are full, and then closes them. It is very much like the open stressor.

The fcntl(2) F_SETLEASE command is stress tested with the new lease stressor. This has a parent process that rapidly locks and unlocks a file based lease and 1 or more child processes try to open this file and cause lease breaking signal notifications to the parent.

For x86 CPUs, the cache stressor includes two new cache specific options. The '--cache-fence' option forces write serialization on each store operation, while the '--cache-flush' option forces flush cache on each store operation. The code has been specifically written to not incur any overhead if these options are not enabled or are not available on non-x86 CPUs.

This release also includes the stress-ng project mascot: a humble CPU being tortured and stressed by a rather angry flame.

For more information, visit the stress-ng project page, or check out the stress-ng manual.

Read more

This video introduces online accounts on Ubuntu OS. Online accounts can be used in Web, QML, and Scope development. For more information, please see

Author: UbuntuTouch, posted 2014-12-25 9:40:46

Read more






In this example we choose “Empty Scope”, so we don't have to delete many files. Let's run the Scope we just created on the Desktop; there is nothing special about it yet. Use the hotkey Ctrl + R, or the green run button at the bottom left of the SDK, to run the Scope:


This is the most basic Scope, with nothing special in it, because it is a very simple Empty Scope.


Since the “Empty Scope” template does not come with Qt support, in this section we show how to add Qt to the project. We want to use Qt to parse the data we receive. First, open the CMakeLists.txt file under “src” and add the following lines:

find_package(Qt5Core REQUIRED)     
find_package(Qt5Xml REQUIRED)      



# Build a shared library containing our scope code.
# This will be the actual plugin that is loaded.
  scope SHARED

qt5_use_modules(scope Core Network) 

# Link against the object library and our external library dependencies


As you can see, we added the Qt Core and Network modules. At the same time, open the "tests/unit/CMakeLists.txt" file and add “qt5_use_modules(scope-unit-tests Core Network)”:

# Our test executable.
# It includes the object code from the scope

# Link against the scope, and all of our test lib dependencies

qt5_use_modules(scope-unit-tests Core Network)

# Register the test with CTest





        "title": "顺丰",
	"pinyin": "shunfeng",
        "url": ""
        "title": "全峰",
	"pinyin": "quanfeng",
        "url": ""
        "title": "申通",
	"pinyin": "shentong",
        "url": ""
        "title": "EMS",
	"pinyin": "ems",
        "url": ""
        "title": "圆通",
	"pinyin": "yuantong",
        "url": ""
        "title": "中通",
	"pinyin": "zhongtong",
        "url": ""
        "title": "韵达",
	"pinyin": "yunda",
        "url": ""
        "title": "天天",
	"pinyin": "tiantian",
        "url": ""
        "title": "汇通",
	"pinyin": "huitong",
        "url": ""
        "title": "德邦",
	"pinyin": "debang",
        "url": ""
        "title": "宅急送",
 	"pinyin": "zhaijisong",
       	"url": ""

Here we use a JSON file, “departments.json”, to store this information, and place it in the project's “data” directory. The “pinyin” field will be used as the department id in the Scope. As shown at the beginning of this article, we want to design this Scope as a department Scope, so that we can query every courier company. Sample output from our experimental API looks like this:

{"nu":"592833849048","companytype":"shunfeng","com":"shunfeng","updatetime":"2014-12-23 18:13:16","signname":"","condition":"F00","status":"200","codenumber":"592833849048","signedtime":"","data":[{"time":"2014-11-29 13:19:43","location":"","context":"已签收,感谢使用顺丰,期待再次为您服务","ftime":"2014-11-29 13:19:43"},{"time":"2014-11-29 13:19:43","location":"","context":"在官网\"运单资料&签收图\", 可查看签收人信息","ftime":"2014-11-29 13:19:43"},{"time":"2014-11-29 11:14:23","location":"","context":"正在派送途中,请您准备签收(派件人:孙连杰,电话:13810320784)","ftime":"2014-11-29 11:14:23"},{"time":"2014-11-29 10:00:30","location":"","context":"快件到达 北京北苑集散中心","ftime":"2014-11-29 10:00:30"},{"time":"2014-11-29 08:45:55","location":"","context":"快件在 北京顺义集散中心, 正转运至 北京北苑集散中心","ftime":"2014-11-29 08:45:55"},{"time":"2014-11-29 06:29:29","location":"","context":"快件在 北京集散中心, 正转运至 北京顺义集散中心","ftime":"2014-11-29 06:29:29"},{"time":"2014-11-28 20:46:26","location":"","context":"快件在 厦门总集散中心, 正转运至 北京集散中心","ftime":"2014-11-28 20:46:26"},{"time":"2014-11-28 19:32:18","location":"","context":"快件在 厦门集美集散中心, 正转运至 厦门总集散中心","ftime":"2014-11-28 19:32:18"},{"time":"2014-11-28 17:26:37","location":"厦门莲岳服务点","context":"[厦门莲岳服务点]快件在 厦门莲岳服务点, 正转运至 厦门集美集散中心","ftime":"2014-11-28 17:26:37"}],"state":"3","departure":"厦门市","addressee":"","destination":"北京市","message":"ok","ischeck":"1","pickuptime":""}





if [ $# -eq 0 ]; then
    mkdir -p $DEST
    cp -r data/departments.json $DEST/
    cp -r data/renderer $DEST/
    cp -r data/images $DEST/
    echo "Setup complete."
    exit 0;
fi




bzr branch lp:~liu-xiao-guo/debiantrial/mailcheck1




sc::SearchQueryBase::UPtr Scope::search(const sc::CannedQuery &query,
                                        const sc::SearchMetadata &metadata) {
    const QString scopePath = QString::fromStdString(scope_directory());
    // Boilerplate construction of Query
    return sc::SearchQueryBase::UPtr(new Query(query, metadata, scopePath, config_));
}

#include <QString>



   Query(const unity::scopes::CannedQuery &query,
          const unity::scopes::SearchMetadata &metadata,  QString scopePath,
          api::Config::Ptr config);


QString g_scopePath;
QString g_rootDepartmentId;
QMap<QString, std::string> g_renders;
QString g_userAgent;
QString g_imageDefault;
QString g_imageError;
QMap<QString, QString> g_depts;
QString g_curScopeId;
static QMap<QString, QString> g_deptLayouts;


#define LOAD_RENDERER(which) g_renders.insert(which, getRenderer(g_scopePath, which))

Query::Query( const sc::CannedQuery &query, const sc::SearchMetadata &metadata,
             QString scopePath, Config::Ptr config ) :
    sc::SearchQueryBase( query, metadata ), client_( config ) {
    g_scopePath = scopePath;
    g_userAgent = QString("%1 (Ubuntu)").arg(SCOPE_PACKAGE);
    g_imageDefault = QString("file://%1/images/%2").arg(scopePath).arg(IMG_DEFAULT);
    g_imageError = QString("file://%1/images/%2").arg(scopePath).arg(IMG_ERROR);

    // Load all of the predefined renderers. You can comment out the renderers
    // you don't use.
    LOAD_RENDERER( "journal" );
    LOAD_RENDERER( "wide-art" );
    LOAD_RENDERER( "hgrid" );
    LOAD_RENDERER( "carousel" );
    LOAD_RENDERER( "large" );
}

std::string Query::getRenderer( QString scopePath, QString name ) {
    QString renderer = readFile( QString("%1/renderer/%2.json").arg(scopePath).arg(name) );
    return renderer.toStdString();
}

QString Query::readFile(QString path) {
    QFile file(path);
    file.open(QIODevice::ReadOnly | QIODevice::Text);
    QString data = file.readAll();
    // qDebug() << "JSON file: " << data;
    return data;
}

We parse “departments.json” with the following method, using the “pinyin” field as the department id for later queries. The department id can be any string, as long as each one is distinct. We store the resulting url data in a global variable called g_depts.

DepartmentList Query::getDepartments(QJsonArray data) {
    qDebug() << "entering getDepartments";

    DepartmentList depts;

    // Clear the previous departments since the URL may change according to settings
    qDebug() << "m_depts is being cleared....!";

    int index = 0;
    FOREACH_JSON( json, data ) {
        auto feed = (*json).toObject();
        QString title = feed["title"].toString();
//        qDebug() << "title: " << title;

        QString url = feed["url"].toString();
//        qDebug() << "url: " << url;

        QString pinyin = feed["pinyin"].toString();
//        qDebug() << "pinyin: " << pinyin;

        // This is the default layout otherwise it is defined in the json file
        QString layout = SURFACING_LAYOUT;

        if ( feed.contains( "layout" ) ) {
            layout = feed[ "layout" ].toString();
        }

        g_depts.insert( pinyin, url );
        g_deptLayouts.insert( pinyin, layout );

        CannedQuery query( SCOPENAME.toStdString() );
        query.set_department_id( url.toStdString() );
        query.set_query_string( url.toStdString() );

        Department::SPtr dept( Department::create(
                               pinyin.toStdString(), query, title.toStdString() ) );

        depts.push_back( dept );
    }

    // Dump the departments. The map has been sorted out
    QMapIterator<QString, QString> i(g_depts);
    while (i.hasNext()) {
        i.next();
        qDebug() << "scope id: " << i.key() << ": " << i.value();
    }

    qDebug() << "Going to dump the department layouts";

    QMapIterator<QString, QString> j( g_deptLayouts );
    while (j.hasNext()) {
        j.next();
        qDebug() << "scope id: " << j.key() << ": " << j.value();
    }

    return depts;
}


# Install the scope ini file
  FILES "com.ubuntu.developer.liu-xiao-guo.mailcheck_mailcheck.ini"

# Install the scope images

    DIRECTORY "images"

    DIRECTORY "renderer"





bzr branch lp:~liu-xiao-guo/debiantrial/mailcheck2



void Query::search(sc::SearchReplyProxy const& reply) {
    CategoryRenderer renderer(g_renders.value("journal", ""));
    auto search = reply->register_category(
                "search", RESULTS.toStdString(), "", renderer);

    CannedQuery cannedQuery = SearchQueryBase::query();

    QString deptId = QString::fromStdString(cannedQuery.department_id());
    qDebug() << "deptId: " << deptId;

    qDebug() << "m_rootDepartmentId: " << g_rootDepartmentId;
    QString url;

    qDebug() << "m_curScopeId: " << g_curScopeId;

    if ( !deptId.isEmpty() ) {
        g_curScopeId = deptId;
    }

    if ( deptId.isEmpty() && !g_rootDepartmentId.isEmpty()
         && g_curScopeId == g_rootDepartmentId ) {

        QMapIterator<QString, QString> i(g_depts);
        qDebug() << "m_depts count: "  << g_depts.count();

        qDebug() << "Going to set the surfacing content";

        const CannedQuery &query(sc::SearchQueryBase::query());
        // Trim the query string of whitespace
        string query_string = alg::trim_copy(query.query_string());
        QString queryString = QString::fromStdString(query_string);

        if ( queryString.isEmpty() ) {
            // No search term yet: surface a sample tracking number
            url = QString(g_depts[g_rootDepartmentId]).arg(592833849048);
        } else {
            url = QString(g_depts[g_rootDepartmentId]).arg(queryString);
        }
    } else {
        QString queryString = QString::fromStdString(cannedQuery.query_string());
        qDebug() << "queryString: " << queryString;

        // Dump the departments. The map has already been sorted
        QMapIterator<QString, QString> i(g_depts);
        qDebug() << "m_depts count: "  << g_depts.count();

        while (i.hasNext()) {
            i.next();  // advance the Java-style iterator before reading key()/value()
            qDebug() << "scope id: " << i.key() << ": " << i.value();
        }

        url = g_depts[g_curScopeId].arg(queryString);
    }

    qDebug() << "url: "  << url;
    qDebug() << "m_curScopeId: " << g_curScopeId;

    try {
        QByteArray data = get(reply, QUrl(url));
        getMailInfo(data, reply);
    } catch (domain_error &e ) {
        cerr << e.what() << endl;
    }
}
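Stripped of the Qt and scopes types, the URL-selection logic in `search()` can be sketched in plain standard C++. All names below are illustrative (not from the scope's API), `std::map` stands in for `QMap`, and a crude `%1` substitution stands in for `QString::arg()`:

```cpp
#include <cassert>
#include <map>
#include <string>

// Replace the first "%1" in a pattern, mimicking QString::arg().
static std::string substitute(std::string pattern, const std::string& arg) {
    auto pos = pattern.find("%1");
    if (pos != std::string::npos) pattern.replace(pos, 2, arg);
    return pattern;
}

// Sketch of Query::search()'s branching: pick the URL pattern either for
// the root department (surfacing mode) or for the selected department.
std::string selectUrl(const std::map<std::string, std::string>& depts,
                      const std::string& rootDeptId,
                      std::string& curScopeId,
                      const std::string& deptId,
                      const std::string& query) {
    if (!deptId.empty())
        curScopeId = deptId;  // the user navigated into a department

    // At the root with no department chosen: surface default content,
    // falling back to the sample tracking number from the original code
    if (deptId.empty() && !rootDeptId.empty() && curScopeId == rootDeptId) {
        const std::string arg = query.empty() ? "592833849048" : query;
        return substitute(depts.at(rootDeptId), arg);
    }

    // Otherwise query the currently selected department's feed
    return substitute(depts.at(curScopeId), query);
}
```

The point of the sketch is the two-stage decision: department navigation updates the current scope id first, and only then is the query string folded into the matching URL pattern.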

void Query::getMailInfo(QByteArray &data, SearchReplyProxy const& reply) {
    QJsonParseError e;
    QJsonDocument document = QJsonDocument::fromJson(data, &e);
    if (e.error != QJsonParseError::NoError) {
        throw QString("Failed to parse response: %1").arg(e.errorString());
    }

    // This creates a big picture at the top
    CategoryRenderer rssCAR(CAR_GRID);
    auto catCARR = reply->register_category("A", "", "", rssCAR);
    CategorisedResult res_car(catCARR);
    QString defaultImage1 = "file://" + g_scopePath + "/images/" + g_curScopeId + ".jpg";
    qDebug() << "defaultImage1: " << defaultImage1;
    res_car["largepic"] = defaultImage1.toStdString();
    res_car["art2"] = res_car["largepic"];

    QJsonObject obj = document.object();

    qDebug() << "***********************\r\n";

    if ( obj.contains("data") ) {
        qDebug() << "it has data!";

        QJsonValue data1 = obj.value("data");
        QJsonArray results = data1.toArray();

        qDebug() << "g_curScopeId: " << g_curScopeId;
        QString layout = g_deptLayouts.value( g_curScopeId );
        std::string renderTemplate;

        if (g_renders.contains( layout )) {
            qDebug() << "it has layout: " << layout;
            renderTemplate = g_renders.value( layout, "" );
            // qDebug() << "renderTemplate: " << QString::fromStdString(renderTemplate);
        } else {
            qDebug() << "it does not have layout!";
            renderTemplate = g_renders.value( "journal" );
            // qDebug() << "renderTemplate: " << QString::fromStdString(renderTemplate);
        }

        CategoryRenderer grid(renderTemplate);
        std::string categoryId = "root";
        std::string categoryTitle = " "; // #1330899 workaround
        std::string icon = "";
        auto tracking = reply->register_category(categoryId, categoryTitle, icon, grid);

        FOREACH_JSON(result, results) {
            QJsonObject o = (*result).toObject();

            QString time = o.value("time").toString();
//            qDebug() << "time: " << time;

            QString context = o.value("context").toString();
//            qDebug() << "context: " << context;

            QString link = "";
            QString defaultImage = "file://" + g_scopePath + "/images/" + g_curScopeId + ".jpg";

            CategorisedResult result(tracking);

            SET_RESULT("uri", link);
            SET_RESULT("image", defaultImage);
            //            SET_RESULT("video", video);
            SET_RESULT("title", time);
            //            SET_RESULT("subtitle", context);
            SET_RESULT("summary", context);
            //            SET_RESULT("full_summary", fullSummary);

            if (!reply->push(result)) break;
        }
    }
}

QByteArray Query::get(sc::SearchReplyProxy const& reply, QUrl url) const {
    QNetworkRequest request(url);
    QByteArray data = makeRequest(reply, request);
    return data;
}

void Query::run(sc::SearchReplyProxy const& reply) {
    try {
        // Start by getting information about the query
        const CannedQuery &query(sc::SearchQueryBase::query());

        // Trim the query string of whitespace
        string query_string = alg::trim_copy(query.query_string());
        QString queryString = QString::fromStdString(query_string);

        qDebug() << "queryString: " << queryString;

        // Only push the departments when the query string is empty
        if ( queryString.length() == 0 ) {
            qDebug() << "it is going to push the departments...!";
            pushDepartments( reply );
        }

        search( reply );
    } catch ( domain_error &e ) {
        // Handle exceptions thrown by the client API
        cerr << e.what() << endl;
        reply->error( current_exception() );
    }
}

QByteArray Query::makeRequest(SearchReplyProxy const& reply, QNetworkRequest &request) const {
    int argc = 1;
    char *argv = const_cast<char*>("rss-scope");
    QCoreApplication *app = new QCoreApplication( argc, &argv );

    QNetworkAccessManager manager;
    QByteArray response;
    QNetworkDiskCache *cache = new QNetworkDiskCache();
    QString cachePath = g_scopePath + "/cache";
    cache->setCacheDirectory( cachePath );
    //qDebug() << "Cache dir: " << cachePath;

    request.setRawHeader( "User-Agent", g_userAgent.toStdString().c_str() );
    request.setRawHeader( "Content-Type", "application/rss+xml, text/xml" );
    request.setAttribute( QNetworkRequest::CacheLoadControlAttribute, QNetworkRequest::PreferCache );

    // Quit the local event loop once the reply has finished
    QObject::connect(&manager, SIGNAL(finished(QNetworkReply*)), app, SLOT(quit()));
    QObject::connect(&manager, &QNetworkAccessManager::finished,
                     [this, &reply, &response](QNetworkReply *msg) {
        if (msg->error() != QNetworkReply::NoError) {
            qCritical() << "Failed to get data: " << msg->error();
            pushError( reply, NO_CONNECTION );
        } else {
            response = msg->readAll();
        }
        msg->deleteLater();
    });

    manager.setCache( cache );   // the manager takes ownership of the cache
    manager.get( request );
    app->exec();                 // block here until finished() quits the loop

    delete app;
    return response;
}
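`makeRequest()` turns Qt's asynchronous networking into a blocking call by spinning an event loop until the `finished()` signal quits it. The same wait-for-completion shape can be sketched with only the standard library, a promise/future pair playing the role of the event loop (all names here are hypothetical, not part of the scope's code):

```cpp
#include <future>
#include <string>
#include <thread>

// A worker thread stands in for QNetworkAccessManager: it "fetches" the
// URL and fulfils the promise, while the caller blocks on the future the
// way makeRequest() blocks in app->exec() until finished() fires.
std::string fetchBlocking(const std::string& url) {
    std::promise<std::string> done;
    std::future<std::string> result = done.get_future();

    std::thread worker([&done, url] {
        // Real network I/O would happen here; we just echo the URL back.
        done.set_value("response-for:" + url);
    });

    std::string response = result.get();  // block until the "reply" arrives
    worker.join();
    return response;
}
```

The design trade-off is the same in both versions: the scope's `search()` call is synchronous, so somewhere the asynchronous reply has to be waited on before a `QByteArray` can be returned.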

[ScopeConfig]
# DisplayName means "Express delivery tracking"
DisplayName = 快递查询
Description = This is a Mailcheck scope
Art = screenshot.png
Author = Firstname Lastname
Icon = icon.png
# SearchHint means "Please enter a tracking number"
SearchHint = 请输入单号

[Appearance]
PageHeader.Logo = logo.png



{
    "schema-version": 1,
    "template": {
        "category-layout": "vertical-journal",
        "card-layout": "horizontal",
        "card-size": "medium",
        "collapsed-rows": 0
    },
    "components": {
        "art": "image",
        "title": "title",
        "subtitle": "subtitle",
        "summary": "summary"
    }
}


bzr branch lp:~liu-xiao-guo/debiantrial/mailcheckfinal

Or via: git clone

Author: UbuntuTouch, published 2014-12-23 16:07:06

Read more