Canonical Voices


I’m on my way home from Düsseldorf, where I attended the LinuxCon Europe and Linux Plumbers conferences. I was quite surprised how huge LinuxCon was: about 1,500 people attended, certainly many more than last year in New Orleans.

Containers (in both LXC and Docker flavors) are the Big Thing everybody talks about and works with these days; there was hardly a presentation where they weren’t mentioned, and what felt like half of the presentations were about either improving these technologies or using them to solve problems. Some people and companies really take LXC to the max and try to do everything in containers, including tasks which in the past you would only have considered full VMs for, like untrusted third-party tenants. For example, there was an interesting talk about how to secure networking for containers, and pretty much everyone now uses Docker or LXC to deploy workloads and run CI tests. There are projects like “fleet” which manage systemd jobs across an entire cluster of containers (a distributed task scheduler), or ones which auto-build packages from each commit of a project.

Another common topic was the trend towards building and shipping complete (read-only) system images, atomic updates, and all that goodness. The central thing here was certainly “Stateless systems, factory reset, and golden images”, which analyzed the common requirements and proposed how to implement this with various package systems and scenarios. In my opinion this is certainly the way to go: our current solution on Ubuntu Touch (i.e. Ubuntu’s system-image) is still far too limited and static, and it doesn’t extend to desktops/servers/cloud workloads at all. It’s also a lot of work to implement this properly, so it’s certainly understandable that we took that shortcut for prototyping and for the relatively limited Touch phone environment.

At Plumbers my main occupations were the highly interesting LXC track, to see what’s coming in the container world, and the systemd hackfest. At the latter I was again mostly listening (after all, I’m still learning most of the internals there…) but was able to work on some cleanups and improvements, like getting rid of some of Debian’s patches and properly running the test suite. It was also great to sync up again with David Zeuthen about the future of udisks and some proposed new features. It looks like I’m the de-facto maintainer now, so I’ll need to spend some time soon to review, include, and clean up some much-requested little features and fixes.

All in all, a great week: meeting some fellows of the FOSS world again, getting to know a lot of new interesting people and projects, and re-learning to drink beer in the evening (I hardly drink any at home :-P).

If you are interested you can also see my raw notes, but beware that they are mostly just scribbles.

Now, off to next week’s Canonical meeting in Washington, DC!

Read more

Many developers may not be running Ubuntu as their operating system. In that case, you need to install VirtualBox on your own OS (OS X or Windows) and install Ubuntu and the Ubuntu SDK inside it. To make this easier, we have prepared an image containing Ubuntu Utopic (14.10) with the Ubuntu SDK already installed, so you can download and set up the SDK in one go. The installation steps are described below.


Download VirtualBox



3) Download the Ubuntu virtual machine (a minimal Ubuntu 14.10 desktop image with the Ubuntu SDK already installed in it)



After the whole SDK is installed, you can follow the article “How to install Ubuntu OS in VirtualBox” to set up a Chinese input method and file sharing, and the article “Ubuntu SDK installation” to install the “armhf” and “i386” chroots. Installing the chroots may take quite some time, so please be patient. Once the installation is complete, we can move on to development.

Author: UbuntuTouch, posted 2014-10-17 9:50:48

Read more
Nicholas Skaggs

The final images of what will become utopic are here! Yes, in just one short week utopic unicorn will be released into the world. Celebrate this exciting release and be among the first to run utopic by helping us test!

We need your help and test results, both positive and negative. Please head over to the milestone on the isotracker, select your favorite flavor, and perform the needed tests against the images.

If you've never submitted test results to the ISO tracker, check out the handy links at the top of the tracker page detailing how to perform an image test, as well as a little about how the QATracker itself works. If you still aren't sure or get stuck, feel free to contact the QA community or me for help.

Thank you for helping to make Ubuntu better! Happy testing!

Read more
Robbie Williamson

The following is an update on Ubuntu’s response to the latest Internet emergency security issue, POODLE (CVE-2014-3566), in combination with an SSLv3 downgrade vulnerability.

Vulnerability Summary

“SSL 3.0 is an obsolete and insecure protocol. While for most practical purposes it has been replaced by its successors TLS 1.0, TLS 1.1, and TLS 1.2, many TLS implementations remain backwards­ compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. The protocol handshake provides for authenticated version negotiation, so normally the latest protocol version common to the client and the server will be used.”

A vulnerability was discovered that affects the protocol negotiation between browsers and HTTP servers, where a man-in-the-middle (MITM) attacker is able to trigger a protocol downgrade (i.e., force a downgrade to SSLv3; CVE to be assigned).  Additionally, a new attack was discovered against the CBC block cipher used in SSLv3 (POODLE, CVE-2014-3566).  Because of this new weakness in the CBC block cipher and the known weaknesses in the RC4 stream cipher (both used with SSLv3), attackers who successfully downgrade the victim’s connection to SSLv3 can now exploit the weaknesses of these ciphers to ascertain the plaintext of portions of the connection through brute force attacks.  For example, an attacker who is able to manipulate the encrypted connection is able to steal HTTP cookies.  Note that the protocol downgrade vulnerability exists in web browsers and is not implemented in the SSL libraries.  Therefore, the downgrade attack is currently known to exist only for HTTP.

OpenSSL will be updated to guard against illegal protocol negotiation downgrades (TLS_FALLBACK_SCSV).  When the server and client are updated to use TLS_FALLBACK_SCSV, the protocol cannot be downgraded to below the highest protocol that is supported between the two (so if the client and the server both support TLS 1.2, SSLv3 cannot be used even if the server offers SSLv3).

The recommended course of action is ultimately for sites to disable SSLv3 on their servers, and for browsers to disable SSLv3 by default since the SSLv3 protocol is known to be broken.  However, it will take time for sites to disable SSLv3, and some sites will choose not to, in order to support legacy browsers (eg, IE6).  As a result, immediately disabling SSLv3 in Ubuntu in the openssl libraries, in servers or in browsers, will break sites that still rely on SSLv3.
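While distributions wait for sites and browsers to migrate, individual applications can opt out of SSLv3 on their own today. As an illustration only (this is not part of Ubuntu's advisory), Python's standard `ssl` module exposes an `OP_NO_SSLv3` option that forbids the protocol per connection context:

```python
import ssl

# Create a context that negotiates the highest protocol version both
# sides support (PROTOCOL_SSLv23 despite its historical name).
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)

# Explicitly refuse SSLv2 and SSLv3, so a MITM cannot downgrade the
# handshake below TLS 1.0 even if the server still offers SSLv3.
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3

# The downgrade-protection bits are now set on the context.
print(bool(ctx.options & ssl.OP_NO_SSLv3))  # → True
```

Server software offers equivalent knobs (e.g. Apache's `SSLProtocol` and nginx's `ssl_protocols` directives), which is what the "disable SSLv3 on their servers" recommendation above refers to.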

Ubuntu’s Response:

Unfortunately, this issue cannot be addressed in a single USN because this is a vulnerability in a protocol, and the Internet must respond accordingly (ie SSLv3 must be disabled everywhere).  Ubuntu’s response provides a path forward to transition users towards safe defaults:

  • Add TLS_FALLBACK_SCSV to openssl in a USN:  In progress, upstream openssl is bundling this patch with other fixes that we will incorporate
  • Follow Google’s lead regarding chromium and chromium content api (as used in oxide):
    • Add TLS_FALLBACK_SCSV support to chromium and oxide:  Done – Added by Google months ago.
    • Disable fallback to SSLv3 in next major version:  In Progress
    • Disable SSLv3 in future version:  In Progress
  • Follow Mozilla’s lead regarding Mozilla products:
    • Disable SSLv3 by default in Firefox 34:  In Progress – due Nov 25
    • Add TLS_FALLBACK_SCSV support in Firefox 35:  In Progress

Ubuntu currently will not:

  • Disable SSLv3 in the OpenSSL libraries at this time, so as not to break compatibility where it is needed
  • Disable SSLv3 in Apache, nginx, etc, so as not to break compatibility where it is needed
  • Preempt Google’s and Mozilla’s plans.  The timing of their response is critical to giving sites an opportunity to migrate away from SSLv3 to minimize regressions

For more information on Ubuntu security notices that affect the current supported releases of Ubuntu, or to report a security vulnerability in an Ubuntu package, please visit


Read more

In previous articles, I described how to use the U1db and SQLite offline storage APIs to store application state. In this article, I will show how to use Qt.labs.settings to store application state. For a more detailed introduction, please refer to the link.

First, we create a minimal application from the “App with Simple UI” template and modify “main.qml” as follows:

import QtQuick 2.0
import Ubuntu.Components 1.1
import Qt.labs.settings 1.0

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "com.ubuntu.developer.liu-xiao-guo.settings"

    /*
     This property enables the application to change orientation
     when the device is rotated. The default is false.
    */
    //automaticOrientation: true

    // Removes the old toolbar and enables new features of the new header.
    useDeprecatedToolbar: false

    Page {

        Column {
            anchors.fill: parent
            anchors.centerIn: parent

            Label {
                text: "Please input a string below:"
                fontSize: "large"
            }

            TextField {
                id: myTextField
                text: settings.input
                placeholderText: "please input a string"

                onTextChanged: {
                    settings.input = text
                }
            }

            Button {
                text: "Get category"
                onClicked: {
                    console.log("settings category:" + settings.category);
                }
            }
        }

        Settings {
            id: settings
            property string input: "unknown"
        }

        Component.onDestruction: {
            settings.input = myTextField.text
        }
    }
}

The complete source code of this test is at: bzr branch lp:~liu-xiao-guo/debiantrial/settingsqml
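For readers outside QML: the lifecycle shown above (load a persisted value at startup, write it back when it changes or on shutdown) can be sketched in plain Python with configparser. This is only an analogy to Qt.labs.settings, and the file name and section used here are made up:

```python
import configparser
import os
import tempfile

# Hypothetical settings file standing in for the Settings element's backing store.
SETTINGS_FILE = os.path.join(tempfile.gettempdir(), "settings_demo.ini")

def load_input(default="unknown"):
    # Mirrors `property string input: "unknown"`: return the stored value,
    # falling back to the default when nothing has been saved yet.
    cfg = configparser.ConfigParser()
    cfg.read(SETTINGS_FILE)
    return cfg.get("app", "input", fallback=default)

def save_input(value):
    # Mirrors Component.onDestruction: persist the current text on exit.
    cfg = configparser.ConfigParser()
    cfg["app"] = {"input": value}
    with open(SETTINGS_FILE, "w") as f:
        cfg.write(f)

save_input("hello")
print(load_input())  # → hello
```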

Author: UbuntuTouch, posted 2014-10-16 15:18:13

Read more



If you have built UIs with Qt, you will be very familiar with QHBoxLayout, QVBoxLayout, and QGridLayout, the three most important and most frequently used layout managers. So how does QML control and manage UI layout? This article introduces the basics.

First of all, QML does allow you to hard-code position values directly in the code, but this makes it hard to adapt to UI changes and hard to maintain. We therefore recommend not writing raw coordinates, and instead using the three positioners Row, Column and Grid, or laying out with anchors.


The Row element in QML arranges its children in a single row, without overlap. Its spacing property defines the distance between children. For example, the following code produces the effect shown in the figure:

Row {
    spacing: 2
    Rectangle { color: "red"; width: 50; height: 50 }
    Rectangle { color: "green"; width: 20; height: 50 }
    Rectangle { color: "blue"; width: 50; height: 20 }
}



The Column element in QML arranges its children in a single column, without overlap. Its spacing property defines the distance between children. For example, the following code produces the effect shown in the figure:

Column {
    spacing: 2
    Rectangle { color: "red"; width: 50; height: 50 }
    Rectangle { color: "green"; width: 20; height: 50 }
    Rectangle { color: "blue"; width: 50; height: 20 }
}



The Grid element in QML arranges its children evenly in a grid, without overlap; each child is placed at the (0,0) position, i.e. the top-left corner, of its grid cell. Grid's rows and columns properties define the number of rows and columns; the number of columns defaults to 4. We can also use Grid's spacing property to define the distance between grid cells (note that the same spacing is used both horizontally and vertically). For example, the following code produces the effect shown in the figure:

Grid {
    columns: 3
    spacing: 2
    Rectangle { color: "red"; width: 50; height: 50 }
    Rectangle { color: "green"; width: 20; height: 50 }
    Rectangle { color: "blue"; width: 50; height: 20 }
    Rectangle { color: "cyan"; width: 50; height: 50 }
    Rectangle { color: "magenta"; width: 10; height: 10 }
}



We can also mix Grid, Row and Column. For example, the following code produces the effect shown in the figure:

Column {
    spacing: 2
    Rectangle { color: "red"; width: 50; height: 50 }
    Row {
        spacing: 2
        Rectangle { color: "yellow"; width: 50; height: 50 }
        Rectangle { color: "black"; width: 20; height: 50 }
        Rectangle { color: "blue"; width: 50; height: 20 }
        Rectangle { color: "green"; width: 20; height: 50 }
    }
}



Every item can be thought of as having 7 invisible “anchor lines”: left, horizontalCenter, right, top, verticalCenter, baseline, and bottom, as shown below: Anchor1.png

The baseline is the line on which text sits; it is not marked in the figure above, and for an item with no text the baseline is the same as its top. In addition, the anchoring system provides margins and offsets. A margin is the space an item keeps between itself and the outside, while offsets shift the layout relative to the center anchor lines, as shown below:


Using the QML anchoring system, we can define relations between the anchor lines of different items. For example:

Rectangle { id: rect1; ... }
Rectangle { id: rect2; anchors.left: rect1.right; anchors.leftMargin: 5; ... }


We can also use multiple anchors:

Rectangle { id: rect1; ... }
Rectangle { id: rect2; anchors.left: rect1.right; anchors.top: rect1.bottom; ... }


By specifying multiple horizontal or vertical anchors, we can also control an item's size. For example:

Rectangle { id: rect1; x: 0; ... }
Rectangle { id: rect2; anchors.left: rect1.right; anchors.right: rect3.left; ... }
Rectangle { id: rect3; x: 150; ... }


Note: for performance reasons, an item may only be anchored to its siblings and its direct parent. For example, the following anchor definition is invalid:

Item {
    id: group1
    Rectangle { id: rect1; ... }
}
Item {
    id: group2
    Rectangle { id: rect2; anchors.left: rect1.right; ... } // invalid anchor!
}
Author: UbuntuTouch, posted 2014-10-16 9:41:16

Read more
David Callé

Scopes come with a very flexible customization system. From picking the text color to rearranging how results are laid out, a scope can easily look like a generic RSS reader, a music library or even a store front.

In this new article, you will learn how to make your scope shine by customizing its results, changing its colors, adding a logo and adapting its layout to present your data in the best possible way.


Read more
Michael Hall

This is a guest post from Will Cooke, the new Desktop Team manager at Canonical. It’s being posted here while we work to get a blog set up, which is where you can find out more about Unity 8 and how to get involved with it.


Understandably, most of the Ubuntu news recently has focused on phones. There is a lot of excitement and anticipation building around the imminent release of the first devices.  However, the Ubuntu desktop has not been dormant during this time.  A lot of thought and planning has been given to what the desktop will become in the future: who will use it, and what they will use it for.  All the work which is going into the phone will be directly applicable to the desktop as well, since they will use the same code.  All the apps, the UI tweaks, everything which makes applications secure and stable will apply directly to the desktop too.  The plan is to have the single converged operating system ready for use on the desktop by 16.04.

The plan

We learned some lessons during the early development of Unity 7. Here’s what happened:

  • 11.04: New Unity as default
  • 11.10: New Unity version
  • 12.04: Unity in First LTS

What we’ve decided to do this time is to keep the same stable Unity 7 desktop as the default, while offering users who want to opt in an option to use the Unity 8 desktop. As development continues, the Unity 8 desktop will get better and better.  It will benefit from a lot of the advances which have come about through the development of the phone OS, and from continual improvements as releases happen.

  • 14.04 LTS: Unity 7 default / Unity 8 option for the first time
  • 14.10: Unity 7 default / Unity 8 new rev as an option
  • 15.04: Unity 7 default / Unity 8 new rev as an option
  • 15.10: Potentially Unity 8 default / Unity 7 as an option
  • 16.04 LTS: Unity 8 default / Unity 7 as an option

As you can see, this gives us a full 2 cycles (in addition to the one we’ve already done) to really nail Unity 8 with the level of quality that people expect. So what do we have?

How will we deliver Unity 8 with better quality than 7?

Continuous integration is the best way for us to achieve and maintain the highest quality possible.  We have put a lot of effort into automating as much of the testing as we can; the best testing is that which is performed easily.  Before every commit the changes get reviewed and approved – this is the first line of defense against bugs.  Every merge request triggers a run of the tests, the second line of defense against bugs and regressions – if a change broke something, we find out about it before it gets into the build.

The CI process builds everything in a “silo”, a self-contained and controlled environment where we find out if everything works together before finally landing in the image.

And finally, we have a large number of tests which run against those images. This really is a “belt and braces” approach to software quality, and it all happens automatically.  As you can see, we are taking the quality of our software very seriously.

What about Unity 7?

Unity 7 and Compiz have a team dedicated to maintenance and bug fixes, so their quality continues to improve with every release.  For example: windows switching workspaces when a monitor gets unplugged is fixed; mice with 6 buttons now work; and support for the new version of Metacity (in case you want to use the GNOME 2 desktop) has been added (incidentally, a lot of that work was done by a community contributor – thanks Alberts!).

Unity 7 is the desktop environment for a lot of software developers, devops gurus, cloud platform managers and millions of users who rely on it to help them with their everyday computing.  We don’t want to stop you being able to get work done.  This is why we continue to maintain Unity 7 while we develop Unity 8.  If you want to take Unity 8 for a spin and see how it’s coming along, you can; if you want to get your work done, we’re making that experience better for you every day.  Best of all, both of these options are available to you with no detriment to the other.

Things that we’re getting in the new Ubuntu Desktop

  1. Applications decoupled from the OS updates.  Traditionally a given release of Ubuntu has shipped with the versions of the applications available at the time of release.  Important updates and security fixes are back-ported to older releases where required, but generally you had to wait for the next release to get the latest and greatest set of applications.  The new desktop packaging system means that application developers can push updates out when they are ready and the user can benefit right away.
  2. Application isolation.  Traditionally applications can access anything the user can access; photos, documents, hardware devices, etc.  On other platforms this has led to data being stolen or rendered otherwise unusable.  Isolation means that without explicit permission any Click packaged application is prevented from accessing data you don’t want it to access.
  3. A full SDK for writing Ubuntu apps.  The SDK which many people are already using to write apps for the phone will allow you to write apps for the desktop as well.  In fact, your apps will be write once run anywhere – you don’t need to write a “desktop” app or a “phone” app, just an Ubuntu app.

What we have now

The easiest way to try out the Unity 8 desktop preview is to use the daily Ubuntu Desktop Next live image.  This will allow you to boot into a Unity 8 session without touching your current installation.  An easy 10-step way to write this image to a USB stick is:

  1. Download the ISO
  2. Insert your USB stick in the knowledge that it’s going to get wiped
  3. Open the “Disks” application
  4. Choose your USB stick and click on the cog icon on the righthand side
  5. Choose “Restore Disk Image”
  6. Browse to and select the ISO you downloaded in #1
  7. Click “Start restoring”
  8. Wait
  9. Boot and select “Try Ubuntu….”
  10. Done *

* Please note – there is currently a bug affecting the Unity 8 greeter which means you are not automatically logged in when you boot the live image.  To log in you need to:

  1. Switch to vt1 (ctrl-alt-f1)
  2. type “passwd” and press enter
  3. press enter again to set the current password to blank
  4. enter a new password twice
  5. Check that the password has been successfully changed
  6. Switch back to vt7 (ctrl-alt-f7)
  7. Enter the new password to login


Here are some screenshots showing what Unity 8 currently looks like on the desktop:


The team

The people working on the new desktop are made up of a few different disciplines.  We have a team dedicated to Unity 7 maintenance and bug fixes who are also responsible for Unity 8 on the desktop and feed in a lot of support to the main Unity 8 & Mir teams. We have the Ubuntu Desktop team, who are responsible for many aspects of the underlying technologies used, such as GNOME libraries, settings and printing, as well as the key desktop applications such as LibreOffice and Chromium.  The Ubuntu desktop team has some of the longest serving members of the Ubuntu family, with some people having been here for the best part of ten years.

How you can help

We need to log all the bugs which need to be fixed in order to make Unity 8 the best desktop there is.  Firstly, we need people to test the images and log bugs.  If developers want to help fix those bugs, so much the better.  Right now we are focusing on identifying where the work done for the phone doesn’t work as expected on the desktop.  Once those bugs are logged and fixed we can rely on the CI system described above to make sure that they stay fixed.

Link to daily ISOs:


IRC:  #ubuntu-desktop on Freenode

Read more

In some previous articles, we introduced how to use the Qt and C++ APIs to create a scope; those were all basic scopes. In this article, we introduce the department scope and how to develop one. A department scope allows categorized browsing and searching, as found in many scopes. More about scopes can be found at the link. The final UI of our scope looks like this:


1) What is a department scope?





First, open the Ubuntu SDK to create a basic application: choose the menu “New file or Project” (or press “Ctrl+N”) and select the “Unity Scope” template.

We name our application “dianping” and, at the same time, choose “Empty scope” as the template type:



Under the project's “src” directory there are two directories: api and scope. The code under api is mainly for querying our web service and receiving JSON or XML data. In this project we will not use the client class from that directory; interested developers can try to keep their client and scope code separate.


find_package(Qt5Network REQUIRED)
find_package(Qt5Core REQUIRED)     
find_package(Qt5Xml REQUIRED)      



# Build a shared library containing our scope code.
# This will be the actual plugin that is loaded.
add_library(
  scope SHARED
  # (source list elided)
)

qt5_use_modules(scope Core Xml Network)

# Link against the object library and our external library dependencies

As you can see, we added the Qt Core, Xml and Network modules. At the same time, open "tests/unit/CMakeLists.txt" and add “qt5_use_modules(scope-unit-tests Core Xml Network)”:

# Our test executable.
# It includes the object code from the scope

# Link against the scope, and all of our test lib dependencies

qt5_use_modules(scope-unit-tests Core Xml Network)

# Register the test with CTest



class QCoreApplication; // added

namespace scope {
class Scope: public unity::scopes::ScopeBase {
    void start(std::string const&) override;
    void stop() override;
    void run(); // added
    unity::scopes::PreviewQueryBase::UPtr preview(const unity::scopes::Result&,
                                                  const unity::scopes::ActionMetadata&) override;
    unity::scopes::SearchQueryBase::UPtr search(
            unity::scopes::CannedQuery const& q,
            unity::scopes::SearchMetadata const&) override;

    api::Config::Ptr config_;
    QCoreApplication *app; // added
};
}  // namespace scope


#include <QCoreApplication> // added


void Scope::stop() {
    /* The stop method should release any resources, such as network connections where applicable */
    delete app;
}

void Scope::run()
{
    int zero = 0;
    app = new QCoreApplication(zero, nullptr);
}




  • This class defines “start”, “stop” and “run” to run the scope. Most developers will not need to modify most of its implementation; in our example we will not change it at all
  • It also implements two other methods: search and preview. Generally we do not need to modify these two methods themselves, but the classes they construct must be implemented in the corresponding files
Note: you can get a deeper understanding of the API by studying the Scope API header files. More detailed descriptions can be found at the link.


sc::SearchQueryBase::UPtr Scope::search(const sc::CannedQuery &query,
                                        const sc::SearchMetadata &metadata) {

    const QString scopePath = QString::fromStdString(scope_directory());
    const QString cachePath = QString::fromStdString(cache_directory());

    // Boilerplate construction of Query
    return sc::SearchQueryBase::UPtr(new Query(query, metadata, scopePath, cachePath, config_));
}


Query::Query(const sc::CannedQuery &query, const sc::SearchMetadata &metadata, QString const& scopeDir,
        QString const& cacheDir, Config::Ptr config) :
        sc::SearchQueryBase( query, metadata ),
        m_scopeDir( scopeDir ),
        m_cacheDir( cacheDir ) {
    // (further member initialization elided in the original)
    qDebug() << "CacheDir: " << m_cacheDir;
    qDebug() << "ScopeDir " <<  m_scopeDir;

    qDebug() << m_urlRSS;
}


class Query: public unity::scopes::SearchQueryBase {
    // (public interface elided in the original)

    QString m_scopeDir;
    QString m_cacheDir;
};

bzr branch lp:~liu-xiao-guo/debiantrial/dianpingdept1




  • Get the query string entered by the user
  • Send a request to the web service
  • Generate the search results (this differs from scope to scope)
  • Create the result categories (for example different layouts – grid/carousel)
  • Bind the results to the appropriate category to produce the desired UI
  • Push the categories so they are displayed to the end user

Essentially all of the code lives in the “run” method. Here we added a QCoreApplication member, mainly so that we can use the signal/slot mechanism. Next we modify “run” to perform the search. The dianping API requires the request URL to be signed; for convenience, I defined the following helper method.

QString Query::getUrl(QString addr, QMap<QString, QString> map) {
    QCryptographicHash generator(QCryptographicHash::Sha1);

    QString temp;
    QMapIterator<QString, QString> i(map);
    while (i.hasNext()) {
        i.next();
        // qDebug() << i.key() << ": " << i.value();
        // (concatenation of the key/value pairs into "temp" elided in the original)
    }

    qDebug() << temp;

    qDebug() << "UTF-8: " << temp.toUtf8();
    // (feeding temp into the hash generator elided in the original)

    QString sign = generator.result().toHex().toUpper();

    QString url;

    i.toFront();
    while (i.hasNext()) {
        i.next();
        // qDebug() << i.key() << ": " << i.value();
        // (appending the parameters and signature to "url" elided in the original)
    }

    qDebug() << "Final url: " << url;
    return url;
}
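The general shape of such request signing (sort the parameters, concatenate the key/value pairs together with a secret, SHA-1 the result, append the uppercase hex digest to the URL) can be sketched in Python. This is an illustration of the pattern only: the exact concatenation rules of the dianping API are not shown above, and the base URL, `appkey` and `secret` values here are made up:

```python
import hashlib
from urllib.parse import urlencode

def sign_url(base, params, secret):
    # Sort parameters by key so the signature is deterministic.
    ordered = sorted(params.items())
    # Wrap the concatenated "keyvalue" pairs with the secret and hash them,
    # mirroring the structure of the Query::getUrl helper.
    payload = secret + "".join(k + v for k, v in ordered) + secret
    sign = hashlib.sha1(payload.encode("utf-8")).hexdigest().upper()
    # Append the parameters and the signature to the base URL.
    return base + "?" + urlencode(ordered) + "&sign=" + sign

url = sign_url("http://api.example.com/v1/deployment",
               {"format": "xml", "appkey": "demo"}, secret="s3cr3t")
print(url)
```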


Query::Query(const sc::CannedQuery &query, const sc::SearchMetadata &metadata, QString const& scopeDir,
        QString const& cacheDir, Config::Ptr config) :
        sc::SearchQueryBase( query, metadata ),
        m_scopeDir( scopeDir ),
        m_cacheDir( cacheDir ) {
    // m_limit( 0 ),
    qDebug() << "CacheDir: " << m_cacheDir;
    qDebug() << "ScopeDir " <<  m_scopeDir;

    QMap<QString,QString> map;
    map["format"] = "xml";

    m_urlRSS = getUrl(DEPARTMENTS, map);
    qDebug() << "m_urlRSS: " << m_urlRSS;
}


const QString DEPARTMENTS = "";

We can print it out to the Application Output window:

m_urlRSS:  ""



void Query::run(sc::SearchReplyProxy const& reply) {
    qDebug() <<  "Run is started .............................!";

    // Create an instance of disk cache and set cache directory
    m_diskCache = new QNetworkDiskCache();

    QEventLoop loop;

    QNetworkAccessManager managerDepts;
    QObject::connect(&managerDepts, SIGNAL(finished(QNetworkReply*)), &loop, SLOT(quit()));
    QObject::connect(&managerDepts, &QNetworkAccessManager::finished,
                     [reply,this](QNetworkReply *msg){
        if( msg->error() != QNetworkReply::NoError ){
            qWarning() << "failed to retrieve raw data, error:" << msg->error();
        }
        QByteArray data = msg->readAll();

        // qDebug() << "XML data is: " << data;

        QString deptUrl = rssDepartments( data, reply );

        CannedQuery cannedQuery = query();
        QString deptId = qstr(cannedQuery.department_id());
        qDebug() << "department id: " << deptId;

        if (!query().department_id().empty()){ // needs departments support
            qDebug() << "it is not empty xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx!";
            deptUrl = m_depts[deptId];
            qDebug() << "depatUrl: " << deptUrl;
        } else {
            qDebug() << "It is empty ===================================!";
        }

        if ( deptUrl.isEmpty() )
            return;
        // (fetching the selected department's results elided in the original)
    });
    // (issuing the departments request and running the event loop elided in the original)
}


QString Query::rssDepartments( QByteArray &data, unity::scopes::SearchReplyProxy const& reply ) {
    QDomElement docElem;
    QDomDocument xmldoc;
    DepartmentList rss_depts;
    QString firstname = "";

    CannedQuery myquery( SCOPE_NAME );
    myquery.set_department_id( TOP_DEPT_NAME );

    Department::SPtr topDept;

    if ( !xmldoc.setContent(data) ) {
        qWarning() << "Error importing data";
        return firstname;
    }

    docElem = xmldoc.firstChildElement("results");
    if (docElem.isNull()) {
        qWarning() << "Error in data," << "results" << " not found";
        return firstname;
    }

    docElem = docElem.firstChildElement("categories");
    if ( docElem.isNull() ) {
        qWarning() << "Error in data," << "categories" << " not found";
        return firstname;
    }

    docElem = docElem.firstChildElement("category");

    // Clear the previous departments since the URL may change according to settings
    m_depts.clear();

    int index = 0;
    while ( !docElem.isNull() ) {

        QString category = docElem.attribute("name","");
        qDebug() << "category: " << category;

        if ( !category.isEmpty() ) {
            QString url = getDeptUrl(category);

            QString deptId = QString::number(index);

            if (firstname.isEmpty()) {
                // Create the url here
                firstname = url;
                topDept = move(unity::scopes::Department::create( "",
                                                                  myquery, category.toStdString()));
            } else {
                Department::SPtr aDept = move( unity::scopes::Department::create( deptId.toStdString(),
                                              myquery, category.toStdString() ) );
                rss_depts.insert( rss_depts.end(), aDept );
            }

            m_depts.insert( QString::number(index), url );
            index++;
        }

        docElem = docElem.nextSiblingElement("category");
    }

    // Dump the departments
    QMapIterator<QString, QString> i(m_depts);
    while (i.hasNext()) {
        i.next();
        qDebug() << i.key() << ": " << i.value();
    }

    topDept->set_subdepartments( rss_depts );

    try {
        reply->register_departments( topDept );
    } catch (std::exception const& e) {
        qWarning() << "Error happened: " << e.what();
    }

    return firstname;
}
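For readers less familiar with QtXml, the traversal above (results → categories → category, collecting each category's name attribute, with the first category becoming the top department) can be sketched with Python's xml.etree. The XML snippet below is made up, but follows the shape the code expects:

```python
import xml.etree.ElementTree as ET

# Hypothetical response in the shape rssDepartments() walks:
# <results><categories><category name="..."/>...</categories></results>
data = """
<results>
  <categories>
    <category name="Food"/>
    <category name="Hotels"/>
    <category name="Movies"/>
  </categories>
</results>
"""

root = ET.fromstring(data)
# Collect every category's "name" attribute, like the while loop over
# nextSiblingElement("category") does in the C++ code.
categories = [c.get("name") for c in root.find("categories").findall("category")]

# The first category plays the role of the top department; the rest
# become its subdepartments.
top, subdepartments = categories[0], categories[1:]
print(top)             # → Food
print(subdepartments)  # → ['Hotels', 'Movies']
```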


bzr branch lp:~liu-xiao-guo/debiantrial/dianpingdept2


Obviously we still cannot see anything yet, because we have not searched within any department. Next, following the article “How to get location information in an Ubuntu Scope”, we can obtain the location information we need. On the phone, location comes from the network or GPS; on the desktop this is not yet supported. With the location in hand, we query dianping for results near that location.


void Query::run(sc::SearchReplyProxy const& reply) {
    qDebug() <<  "Run is started .............................!";

    // Initialize the scopes

    // Get the current location of the search
    auto metadata = search_metadata();
    if ( metadata.has_location() ) {
        qDebug() << "Location is supported!";
        auto location = metadata.location();

        if ( location.has_altitude()) {
            cerr << "altitude: " << location.altitude() << endl;
            cerr << "longitude: " << location.longitude() << endl;
            cerr << "latitude: " << location.latitude() << endl;
            auto latitude = std::to_string(location.latitude());
            auto longitude = std::to_string(location.longitude());
            m_longitude = QString::fromStdString(longitude);
            m_latitude = QString::fromStdString(latitude);
        }

        if ( m_longitude.isEmpty() ) {
            m_longitude = DEFAULT_LONGITUDE;
        }
        if ( m_latitude.isEmpty() ) {
            m_latitude = DEFAULT_LATITUDE;
        }

        qDebug() << "m_longitude1: " << m_longitude;
        qDebug() << "m_latitude1: " << m_latitude;
    } else {
        qDebug() << "Location is not supported!";
        m_longitude = DEFAULT_LONGITUDE;
        m_latitude = DEFAULT_LATITUDE;
    }

    // Create an instance of disk cache and set cache directory
    m_diskCache = new QNetworkDiskCache();

    QEventLoop loop;

    QNetworkAccessManager managerDepts;
    QObject::connect(&managerDepts, SIGNAL(finished(QNetworkReply*)), &loop, SLOT(quit()));
    QObject::connect(&managerDepts, &QNetworkAccessManager::finished,
                     [reply,this](QNetworkReply *msg){
        if( msg->error() != QNetworkReply::NoError ){
            qWarning() << "failed to retrieve raw data, error:" << msg->error();
        }
        QByteArray data = msg->readAll();

        // qDebug() << "XML data is: " << data;

        QString deptUrl = rssDepartments( data, reply );

        CannedQuery cannedQuery = query();
        QString deptId = qstr(cannedQuery.department_id());
        qDebug() << "department id: " << deptId;

        if (!query().department_id().empty()){ // needs departments support
            qDebug() << "it is not empty xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx!";
            deptUrl = m_depts[deptId];
            qDebug() << "depatUrl: " << deptUrl;
        } else {
            qDebug() << "It is empty ===================================!";
        }

        if ( deptUrl.isEmpty() )
            return;

        QEventLoop loop;
        QNetworkAccessManager managerRSS;
        QObject::connect( &managerRSS, SIGNAL(finished(QNetworkReply*)), &loop, SLOT(quit()));
        QObject::connect( &managerRSS, &QNetworkAccessManager::finished,
                         [reply,this](QNetworkReply *msg ){
            if( msg->error() != QNetworkReply::NoError ){
                qWarning() << "failed to retrieve specific dept raw data, error:" << msg->error();
                rssError( reply, ERROR_Connection );
                return;
            }

            QByteArray data = msg->readAll();
            if( query().query_string().empty() ){
                rssImporter( data, reply, CATEGORY_HEADER );
            } else {
                rssImporter( data, reply, CATEGORY_SEARCH );
            }
        });

        managerRSS.setCache( m_diskCache );
        managerRSS.get( QNetworkRequest( QUrl(deptUrl)) );
        loop.exec();
    });
    // (issuing the departments request and running the outer event loop elided in the original)
}



void Query::rssImporter(QByteArray &data, unity::scopes::SearchReplyProxy const& reply, QString title) {
    QDomElement docElem;
    QDomDocument xmldoc;
    CannedQuery cannedQuery = query();
    QString query = qstr( cannedQuery.query_string() );

    if ( !xmldoc.setContent( data ) ) {
        qWarning() << "Error importing data";
        return;
    }

    docElem = xmldoc.documentElement();
    // Find the result element
    docElem = docElem.firstChildElement("businesses");
    if (docElem.isNull()) {
        qWarning() << "Error in data: \"businesses\" element not found";
        return;
    }
    CategoryRenderer rdrGrid(CR_GRID);
    CategoryRenderer rdrCarousel(CR_CAROUSEL);

    auto carousel = reply->register_category("dianpingcarousel", title.toStdString(), "", rdrCarousel);
    auto grid = reply->register_category("dianpinggrid", "", "", rdrGrid);
    bool isgrid = false;

    docElem = docElem.firstChildElement("business");

    while (!docElem.isNull()) {
        QString business_id = docElem.firstChildElement("business_id").text();
        // qDebug() << "business_id: " << business_id;

        QString name = docElem.firstChildElement("name").text();
        // qDebug() << "name: "  << name;

        // Let's get rid of the test info in the string
        name = removeTestInfo(name);

        QString branch_name = docElem.firstChildElement("branch_name").text();
        // qDebug() << "branch_name: " << branch_name;

        QString address = docElem.firstChildElement("address").text();
        // qDebug() << "address: " << address;

        QString telephone = docElem.firstChildElement("telephone").text();
        // qDebug() << "telephone: " << telephone;

        QString city = docElem.firstChildElement("city").text();
        // qDebug() << "city: " << city;

        QString photo_url = docElem.firstChildElement("photo_url").text();
        // qDebug() << "photo_url: " << photo_url;

        QString s_photo_url = docElem.firstChildElement("s_photo_url").text();
        // qDebug() << "s_photo_url: " << s_photo_url;

        QString rating_s_img_uri = docElem.firstChildElement("rating_s_img_uri").text();
        // qDebug() << "rating_s_img_uri: " << rating_s_img_uri;

        QString business_url = docElem.firstChildElement("business_url").text();
        // qDebug() << "business_url: " << business_url;

        QDomElement deals = docElem.firstChildElement("deals");
        QDomElement deal = deals.firstChildElement("deal");
        QString summary = deal.firstChildElement("description").text();
        // qDebug() << "Summary: " << summary;

        // Filter: when a search term is present, skip results that don't match
        if ( !query.isEmpty() &&
             !name.contains( query, Qt::CaseInsensitive ) &&
             !summary.contains( query, Qt::CaseInsensitive ) &&
             !address.contains( query, Qt::CaseInsensitive ) ) {
            qDebug() << "result skipped";
            docElem = docElem.nextSiblingElement("business");
            continue;
        }

        // For each result, alternate between the carousel and the grid category
        const std::shared_ptr<const Category> *top;

        if ( isgrid ) {
            top = &grid;
            isgrid = false;
        } else {
            isgrid = true;
            top = &carousel;
        }

        CategorisedResult catres(*top);

        catres["subtitle"] = address.toStdString();
        catres["summary"] = summary.toStdString();
        catres["fulldesc"] = summary.toStdString();
        catres["art2"] = s_photo_url.toStdString();
        catres["address"] = Variant(address.toStdString());
        catres["telephone"] = Variant(telephone.toStdString());

        // Push the categorised result to the client
        if (!reply->push(catres)) {
            break; // false from push() means the search was cancelled
        }

        docElem = docElem.nextSiblingElement("business");
    }

    qDebug() << "parsing ended";
}



In this example, we create two JSON objects. They are raw strings, shown below, each with two fields: template and components. template defines which layout is used to display the search results; here we choose "grid" with a small card-size. components lets us map predefined fields onto the data we want to show; here we add "title" and "art".

std::string CR_GRID = R"(
    {
        "schema-version" : 1,
        "template" : {
            "category-layout" : "grid",
            "card-size": "small"
        },
        "components" : {
            "title" : "title",
            "art" : {
                "field": "art",
                "aspect-ratio": 1.6,
                "fill-mode": "fit"
            }
        }
    }
)";

More about the CategoryRenderer class can be found in the docs.

We create a CategoryRenderer for each JSON object and register both with the reply object:

CategoryRenderer rdrGrid(CR_GRID);
CategoryRenderer rdrCarousel(CR_CAROUSEL);

QString title = queryString + "美味";

auto carousel = reply->register_category("dianpingcarousel", title.toStdString(), "", rdrCarousel);
auto grid = reply->register_category("dianpinggrid", "", "", rdrGrid);


bzr branch lp:~liu-xiao-guo/debiantrial/dianpingdept3

Run our Scope, and we see the following screen:






  • Define the widgets needed for the preview
  • Map each widget to a field in the search result data
  • Define a varying number of layout columns (depending on screen size)
  • Assign the widgets to the different columns of the layout
  • Push the layout's widgets to the reply instance




Preview Widgets

These are a set of predefined widgets, each with a type, and we create them according to that type. You can find the list of Preview Widgets and the fields each provides here.


  • header: has title and subtitle fields
  • image: has a source field indicating where the art is fetched from
  • text: has a text field
  • action: shows a button such as "Open"; when the user taps it, the contained URI is opened


PreviewWidget w_header("headerId", "header");


bzr branch lp:~liu-xiao-guo/debiantrial/dianpingdept4





Here we want to add a setting to the dianping Scope. For example, I would like more search results instead of at most 20 each time. Following the article "How to define and read settings variables in an Ubuntu Scope", we can make the limit configurable. First, add a function to the Query class:

// The following function is used to retrieve the settings for the scope
void Query::initScope()
{
    qDebug() << "Going to retrieve the settings!";

    unity::scopes::VariantMap config = settings();  // The settings method is provided by the base class
    if (config.empty())
        qDebug() << "CONFIG EMPTY!";

    m_limit = config["limit"].get_double();
    cerr << "limit: " << m_limit << endl;
}


void Query::run(sc::SearchReplyProxy const& reply) {
    qDebug() << "Run is started!";

    initScope();  // read the settings before running the query

    // Initialize the scope


type = number
defaultValue = 20
displayName = 搜寻条数



  FILES "${CMAKE_BINARY_DIR}/src/com.ubuntu.developer.liu-xiao-guo.dianping_dianping-settings.ini"

Run "Run CMake" once, and the newly added .ini file shows up in the project. Rerun our Scope, use the settings icon (the gear-shaped one) at the top right of the Scope to change the limit value, and see what effect it has.


We can also replace the logo and icon files under the "data" directory to make our Scope look more like a branded Scope. The final source code can be downloaded from:

bzr branch lp:~liu-xiao-guo/debiantrial/dianpingdept5


Posted by UbuntuTouch on 2014-10-14 16:01:44. Original link.

Read more
Greg Lutostanski


  • Review ACTION points from previous meeting

ACTION: all to review blueprint work items before next weeks meeting

  • U Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair


Final Freeze 9 days out
  • Check on FTBFS packages — seems like there has been good progress
  • Make sure blueprint work items are up to date; if resources are needed, now is the time to ask.
  • Release bugs, no high priority ones, juju mirs and openstack bits are being worked.
  • kickinz1 brought up two bcache bugs (LP #1377130 and LP #1377142) to the kernel team for help.
Meeting Actions


Agree on next meeting date and time

Next meeting will be on Tuesday, Oct 14th at 16:00 UTC in #ubuntu-meeting.


Read more

In this article we explain how to add settings to our Ubuntu Scope. Sometimes we want settings to change what is displayed, or to redefine the search. For more on Scope development, see the website:


First open the SDK and choose the "Unity Scope" template. We name the project "settingscope":

Next, choose "Empty scope". This creates our most basic scope.









  FILES "${CMAKE_BINARY_DIR}/src/com.ubuntu.developer.liu-xiao-guo.settingscope_settingscope-settings.ini"


type = string
defaultValue = London
displayName = Location

type = list
defaultValue = 1
displayName = Distance Unit
displayName[de] = Entfernungseinheit
displayValues = Kilometers;Miles
displayValues[de] = Kilometer;Meilen

type = number
defaultValue = 23
displayName = Age

type = boolean
defaultValue = true
displayName = Enabled

# Setting without a default value
type = string
displayName = Color

type = number
defaultValue = 20
displayName = 搜寻条数
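Each block above lives under a group header that names the setting id; that id is the key later read from settings() in initScope(). A minimal hedged sketch (the group names were elided in the original listing; [location] and [limit] match the keys the code reads, the others would follow the same pattern):

```ini
[location]
type = string
defaultValue = London
displayName = Location

[limit]
type = number
defaultValue = 20
displayName = 搜寻条数
```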



void Query::run(sc::SearchReplyProxy const& reply) {

    // Read the settings
    initScope();

    try {
        // Start by getting information about the query
        const sc::CannedQuery &query(sc::SearchQueryBase::query());

        // Trim the query string of whitespace
        string query_string = alg::trim_copy(query.query_string());

        Client::ResultList results;
        if (query_string.empty()) {
            // If the string is empty, pick a default
            // (client_.query() stands in for the client call elided in the original)
            results = client_.query("default");
        } else {
            // Otherwise, use the search string
            results = client_.query(query_string);
        }

        // Register a category
        auto cat = reply->register_category("results", "Results", "",
                                            sc::CategoryRenderer());

        for (const auto &result : results) {
            sc::CategorisedResult res(cat);

            cerr << "it comes here: " << m_limit << endl;

            // We must have a URI
            res.set_uri(result.uri);

            // res.set_title(result.title);
            res.set_title( m_location );
            res["subtitle"] = std::to_string(m_limit);

            // Set the rest of the attributes: art, description, etc.
            res["description"] = result.description;

            // Push the result
            if (!reply->push(res)) {
                // If we fail to push, it means the query has been cancelled,
                // so don't continue
                return;
            }
        }
    } catch (domain_error &e) {
        // Handle exceptions being thrown by the client API
        cerr << e.what() << endl;
    }
}

void Query::initScope()
{
    unity::scopes::VariantMap config = settings();  // The settings method is provided by the base class
    if (config.empty())
        cerr << "CONFIG EMPTY!" << endl;

    m_location = config["location"].get_string();     // Prints "London" unless the user changed the value
    cerr << "location: " << m_location << endl;

    m_limit = config["limit"].get_double();
    cerr << "limit: " << m_limit << endl;
}



We can also watch the settings change in the Application Output window:


bzr branch lp:~liu-xiao-guo/debiantrial/settingscope

Posted by UbuntuTouch on 2014-10-14 13:12:28. Original link.

Read more
David Callé

A scope is a tailored view for a set of data, that can use custom layouts, display and branding options. From RSS news feeds to weather data and search engine results, the flexibility of scopes allows you to provide a simple, recognizable and consistent experience with the rest of the OS.

Scopes can also integrate with system-wide user accounts (email, social networks…), split your content into categories and aggregate into each others (for example, a “shopping” scope aggregating results from several store scopes).


In this tutorial, you will learn how to write a scope in C++ for SoundCloud, using the Ubuntu SDK. Read…

Read more
Luca Paulina

A few weeks ago we launched ‘Machine view’ for Juju, a feature designed to allow users to easily visualise and manage the machines running in their cloud environments. In this post I want to share with you some of the challenges we faced and the solutions we designed in the process of creating it.

A little bit about Juju…
For those of you that are unfamiliar with Juju, a brief introduction. Juju is a software tool that allows you to design, build and manage application services running in the cloud. You can use Juju through the command-line or via a GUI and our team is responsible for the user experience of Juju in the GUI.

First came ‘Service View’
In the past we have primarily focused on Juju’s ‘Service view’ – a virtual canvas that enables users to design and connect the components of their given cloud environment.


This view is fantastic for modelling the concepts and relationships that define an application environment. However, each component or service block can have anything from one unit to hundreds or thousands of units, depending on the scale of the environment, and before machine view, units meant machines.

The goal of machine view was to surface these units and enable users to visualise, manage and optimise their use of machines in the cloud.

‘Machine view’: design challenges
There were a number of challenges we needed to address in terms of layout and functionality:

  • The scalability of the solution
  • The glanceability of the data
  • The ability to customise and sort the information
  • The ability to easily place and move units
  • The ability to track changes
  • The ability to deploy easily to the cloud

I’ll briefly go into each one of these topics below.

Scalability: Environments can be made up of a couple of machines or thousands. This means that giving the user a clear, light and accessible layout was incredibly important – we had to make sure the design looked and worked great at both ends of the spectrum.

Machine view


Glanceability: Users need simple comparative information to help choose the right machine at a glance. We designed and tested hundreds of different ways of displaying the same data and eventually ended up with an extremely cut back listing which was clean and balanced.

A tour of the many incarnations and iterations of Machine view…

The ability to sort and customise: As it was possible and probable that users would scale environments to thousands of machines, we needed to provide the ability to sort and customise the views. Users can use the menus at the top of each column to hide information from view and customise the data they want visible at a glance. As users become more familiar with their machines they could turn off extra information for a denser view of their machines. Users are also given basic sorting options to help them find and explore their machines in different ways.


The ability to easily place and move units: Machine view is built around the concept of manual placement – the ability to co-locate (put more than one) items on a single machine or to define specific types of machines for specific tasks. (As opposed to automatic placement, where each unit is given a machine of the pre-determined specification). We wanted to enable users to create the most optimised machine configurations for their applications.

Drag and drop was a key interaction that we wanted to exploit for this interface because it would simplify the process of manually placing units by a significant amount. The three column layout aided the use of drag and drop, where users are able to pick up units that need placing on the left hand side and drag them to a machine in the middle column or a container in the third column. The headers also change to reveal drop zones allowing users to create new machines and containers in one fluid action keeping all of the primary interactions in view and accessible at all times.

Drag and drop in action on machine view

The ability to track changes: We also wanted to expose the changes that were being made throughout user’s environments as they were going along and allow them to commit batches of changes altogether. Deciding which changes were exposed and the design of the uncommitted notification was difficult, we had to make sure the notifications were not viewed as repetitive, that they were identifiable and that it could be used throughout the interface.



The ability to deploy easily to the cloud: Before machine view it was impossible for someone to design their entire environment before sending it to the cloud. The deployment bar is a new, ever-present canvas element that rationalises all of the changes made into a neat listing; it is also where users can deploy or commit those changes. Look for more information about the deployment bar in another post.

The change log exposed

The deployment summary

We hope that machine view will really help Juju users by increasing the level of control and flexibility they have over their cloud infrastructure.

This project wouldn’t have been possible without the diligent help from the Juju GUI development team. Please take a look and let us know what you think. Find out more about Juju, Machine View or take it for a spin.

Read more

mgo r2014.10.12

A new release of the mgo MongoDB driver for Go is out, packed with contributions and features. But before jumping into the change list, there’s a note in the release of MongoDB 2.7.7 a few days ago that is worth celebrating:

New Tools!
– The MongoDB tools have been completely re-written in Go
– Moved to a new repository:
– Have their own JIRA project:

So far this is part of an unstable release of the MongoDB server, but it implies that if the experiment works out every MongoDB server release will be carrying client tools developed in Go and leveraging the mgo driver. This extends the collaboration with MongoDB Inc. (mgo is already in use in the MMS product), and some of the features in release r2014.10.12 were made to support that work.

The specific changes available in this release are presented below. These changes do not introduce compatibility issues, and most of them are new features.

Fix in txn package

The bug would be visible as an invariant being broken, and the transaction application logic would panic until the txn metadata was cleaned up. The bug does not cause any data loss nor incorrect transactions to be silently applied. More stress tests were added to prevent that kind of issue in the future.

Debug information contributed by the juju team at Canonical.

MONGODB-X509 auth support

The MONGODB-X509 authentication mechanism, which allows authentication via SSL client certificates, is now supported.

Feature contributed by Gabriel Russel.

SCRAM-SHA-1 auth support

The MongoDB server is changing the default authentication protocol to SCRAM-SHA-1. This release of mgo defaults to authenticating over SCRAM-SHA-1 if the server supports it (2.7.7 and later).

Feature requested by Cailin Nelson.

GSSAPI auth on Windows too

The driver can now authenticate with the GSSAPI (Kerberos) mechanism on Windows using the standard operating system support (SSPI). The GSSAPI support on Linux remains via the cyrus-sasl library.

Feature contributed by Valeri Karpov.

Struct document ids on txn package

The txn package can now handle documents that use struct value keys.

Feature contributed by Jesse Meek.

Improved text index support

The EnsureIndex family of functions may now conveniently define text indexes via the usual shorthand syntax ("$text:field"), and Sort can use equivalent syntax ("$textScore:field") to inject the text indexing score.

Feature contributed by Las Zenow.

Support for BSON’s deprecated DBPointer

Although the BSON specification defines DBPointer as deprecated, some ancient applications still depend on it. To enable the migration of these applications to Go, the type is now supported.

Feature contributed by Mike O’Brien.

Generic Getter/Setter document types

The Getter/Setter interfaces are now respected when unmarshaling documents on any type. Previously they would only be respected on maps and structs.

Feature requested by Thomas Bouldin.

Improvements on aggregation pipelines

The Pipe.Iter method will now return aggregation results using cursors when possible (MongoDB 2.6+), and there are also new methods to tweak the aggregation behavior: Pipe.AllowDiskUse, Pipe.Batch, and Pipe.Explain.

Features requested by Roman Konz.

Decoding into custom bson.D types

Unmarshaling will now work for types that are slices of bson.DocElem in an equivalent way to bson.D.

Feature requested by Daniel Gottlieb.

Indexes and CollectionNames via commands

The Indexes and CollectionNames methods will both attempt to use the new command-based protocol, and fallback to the old method if that doesn’t work.

GridFS default chunk size

The default GridFS chunk size changed from 256k to 255k, to ensure that the total document size won’t go over 256k with the additional metadata. Going over 256k would force the reservation of a 512k block when using the power-of-two allocation schema.

Performance of bson.Raw decoding

Unmarshaling data into a bson.Raw will now bypass the decoding process and record the provided data directly into the bson.Raw value. This significantly improves the performance of dumping raw data during iteration.

Benchmarks contributed by Kyle Erf.

Performance of seeking to end of GridFile

Seeking to the end of a GridFile will now not read any data. This enables a client to find the size of the file using only the io.ReadSeeker interface with low overhead.

Improvement contributed by Roger Peppe.

Added Query.SetMaxScan method

The SetMaxScan method constrains the server to only scan the specified number of documents when fulfilling the query.

Improvement contributed by Abhishek Kona.

Added GridFile.SetUploadDate method

The SetUploadDate method allows changing the upload date at file writing time.

Read more

New MAAS features in 1.7.0

MAAS 1.7.0 is close to its release date, which is set to coincide with Ubuntu 14.10’s release.

The development team has been hard at work and knocked out some amazing new features and improvements. Let me take you through some of them!

UI-based boot image imports

Previously, MAAS used to require admins to configure (well, hand-hack) a yaml file on each cluster controller that specified precisely which OSes, releases and architectures to import. This has all been replaced with a very smooth new UI that lets you simply click and go.

New image import configuration page


The different images available are driven by a “simplestreams” data feed maintained by Canonical. What you see here is a representation of what’s available and supported.

Any previously-imported images also show on this page, and you can see how much space they are taking up, and how many nodes got deployed using each image. All the imported images are automatically synced across the cluster controllers.


Once a new selection is clicked, “Apply changes” kicks off the import. You can see that the progress is tracked right here.

(There’s a little more work left for us to do to track the percentage downloaded.)

Robustness and event logs

MAAS now monitors nodes as they are deploying and lets you know exactly what’s going on by showing you an event log that contains all the important events during the deployment cycle.


You can see here that this node has been allocated to a user and started up.

Previously, MAAS would have said “okay, over to you, I don’t care any more” at this point, which was pretty useless when things start going wrong (and it’s not just hardware that goes wrong, preseeds often fail).

So now, the node’s status shows “Deploying” and you can see the new event log at the bottom of the node page that shows these actions starting to take place.

After a while, more events arrive and are logged:


And eventually it’s completely deployed and ready to use:


You’ll notice how quick this process is nowadays.  Awesome!

More network support

MAAS has nascent support for tracking networks/subnets and attached devices. Changes in this release add a couple of neat things: Cluster interfaces automatically have their networks registered in the Networks tab ("master-eth0" in the image), and any node network interfaces known to be attached to any of these networks are automatically linked (see the "attached nodes" column).  This makes even less work for admins to set up things, and easier for users to rely on networking constraints when allocating nodes over the API.


Power monitoring

MAAS is now tracking whether the power is applied or not to your nodes, right in the node listing.  Black means off, green means on, and red means there was an error trying to find out.


Bugs squashed!

With well over 100 bugs squashed, this will be a well-received release.  I’ll post again when it’s out.

Read more
Michael Hall

So it’s finally happened, one of my first Ubuntu SDK apps has reached an official 1.0 release. And I think we all know what that means. Yup, it’s time to scrap the code and start over.

It’s a well established mantra, codified by Fred Brooks, in software development that you will end up throwing away the first attempt at a new project. The releases between 0.1 and 0.9 are a written history of your education about the problem, the tools, or the language you are learning. And learn I did, I wrote a whole series of posts about my adventures in writing uReadIt. Now it’s time to put all of that learning to good use.

Oftentimes projects spend an extremely long time in this 0.x stage, getting ever closer but never reaching that 1.0 release.  This isn’t because they think 1.0 should wait until the codebase is perfect; I don’t think anybody expects 1.0 to be perfect. 1.0 isn’t the milestone of success, it’s the crossing of the Rubicon, the point where drastic change becomes inevitable. It’s the milestone where the old code, with all its faults, dies, and out of it is born a new codebase.

So now I’m going to start on uReadIt 2.0, starting fresh, with the latest Ubuntu UI Toolkit and platform APIs. It won’t be just a feature-for-feature rewrite either, I plan to make this a great Reddit client for both the phone and desktop user. To that end, I plan to add the following:

  • A full Javascript library for interacting with the Reddit API
  • User account support, which additionally will allow:
    • Posting articles & comments
    • Reading messages in your inbox
    • Upvoting and downvoting articles and comments
  • Convergence from the start, so it’s usable on the desktop as well
  • Re-introduce link sharing via Content-Hub
  • Take advantage of new features in the UITK such as UbuntuListView filtering & pull-to-refresh, and left/right swipe gestures on ListItems

Another change, which I talked about in a previous post, will be to the license of the application. Where uReadIt 1.0 is GPLv3, the next release will be under a BSD license.

Read more

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power!. This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored the entire knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: Essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt specific "daily" image with nova image-create. I wrote a little script to demonstrate and automate this, the only parameter that it gets is the name of the pristine image to base on. This was tested on Canonical's Bootstack cloud, so it might need some adjustments on other clouds.

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./ ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./ ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack.

Finally, let's run the packages from above, using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left, both due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Read more



First, we open the SDK and choose the “Unity Scope” template:

Next, we choose “Empty scope”. With this we have created our most basic scope.




DisplayName = Scopetest Scope
Description = This is a Scopetest scope
Art = screenshot.png
Author = Firstname Lastname
Icon = icon.png


PageHeader.Logo = logo.png
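Note that for the scope to receive any location metadata in the first place, location access has to be enabled in the scope's .ini file as well. To the best of my knowledge this is done with the LocationDataNeeded key (treat the exact key name as an assumption and verify it against the unity-scopes documentation):

```ini
; added to the scope's .ini file, alongside DisplayName etc. above
LocationDataNeeded = true
```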


#include <unity/scopes/SearchMetadata.h> // added


void Query::run(sc::SearchReplyProxy const& reply) {
    try {
        cerr << "starting to get the location" << endl;

        auto metadata = search_metadata();
        if (metadata.has_location()) {
            cerr << "it has location data" << endl;

            auto location = metadata.location();

            if (location.has_country_code())
                cerr << "country code: " << location.country_code() << endl;

            if (location.has_area_code())
                cerr << "area code: " << location.area_code() << endl;

            if (location.has_city())
                cerr << "city: " << location.city() << endl;

            if (location.has_country_name())
                cerr << "country name: " << location.country_name() << endl;

            if (location.has_altitude()) {
                cerr << "altitude: " << location.altitude() << endl;
                cerr << "longitude: " << location.longitude() << endl;
                cerr << "latitude: " << location.latitude() << endl;
            }

            if (location.has_horizontal_accuracy())
                cerr << "horizontal accuracy: " << location.horizontal_accuracy() << endl;

            if (location.has_region_code())
                cerr << "region code: " << location.region_code() << endl;

            if (location.has_region_name())
                cerr << "region name: " << location.region_name() << endl;

            if (location.has_zip_postal_code())
                cerr << "zip postal code: " << location.zip_postal_code() << endl;
        }
    } catch (std::exception const& e) {
        // log and continue; the scope should not fail just because location data is unavailable
        cerr << "failed to get location data: " << e.what() << endl;
    }

    // ... the rest of the query (creating categories and pushing results) follows here
}





The complete source code for this example can be obtained with:

  bzr branch lp:~liu-xiao-guo/debiantrial/scope

By UbuntuTouch, posted 2014-10-10 13:07:50

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20141007 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:


Status: Utopic Development Kernel

The Utopic kernel has been rebased to the v3.16.4 upstream stable
kernel. This is available for testing as of the 3.16.0-21.28 upload to
the archive. Please test and let us know your results.
Also, Utopic Kernel Freeze is this Thurs Oct 9. Any patches submitted
after kernel freeze are subject to our Ubuntu kernel SRU policy. I sent
a friendly reminder about this to the Ubuntu kernel-team mailing list
yesterday as well.
Important upcoming dates:
Thurs Oct 9 – Utopic Kernel Freeze (~2 days away)
Thurs Oct 16 – Utopic Final Freeze (~1 week away)
Thurs Oct 23 – Utopic 14.10 Release (~2 weeks away)

Status: CVE’s

The current CVE status can be reviewed at the following link:

Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 30):

  • Lucid – Testing
  • Precise – Testing
  • Trusty – Testing

    Current opened tracking bugs details:


    For SRUs, SRU report is a good source of information:



    cycle: 19-Sep through 11-Oct
    19-Sep Last day for kernel commits for this cycle
    21-Sep – 27-Sep Kernel prep week.
    28-Sep – 04-Oct Bug verification & Regression testing.
    05-Oct – 08-Oct Regression testing & Release to -updates.

Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Michael Hall

Ubuntu Mauritius Community

But it isn’t perfect. And that, in my opinion, is okay. I’m not perfect, and neither are you, but you are still wonderful too.

I was asked, not too long ago, what I hated about the community. The truth, then and now, is that I don’t hate anything about it. There is a lot I don’t like about what happens, of course, but nothing that I hate. I make an effort to understand people, to “grok” them if I may borrow the word from Heinlein. When you understand somebody, or in this case a community of somebodies, you understand the whole of them, the good and the bad. Now understanding the bad parts doesn’t make them any less bad, but it does provide opportunities for correcting or removing them that you don’t get otherwise.

You reap what you sow

People will usually respond in kind to the way they are treated. I try to treat everybody I interact with respectfully, kindly, and rationally, and I’ve found that I am treated that way back. But if somebody is prone to arrogance or cruelty or passion, they will find far more of that treatment given back to them than the positive kinds. They are quite often shocked when this happens. But when you are a source of negativity you drive away people who are looking for something positive, and attract people who are looking for something negative. It’s not absolute, nice people will have some unhappy followers, and grumpy people will have some delightful ones, but on average you will be surrounded by people who behave like you.

Don’t get even, get better

An eye for an eye makes the whole world blind, as the old saying goes. When somebody is rude or disrespectful to us, it’s easy to give in to the desire to be rude and disrespectful back. When somebody calls us out on something, especially in public, we want to call them out on their own problems to show everybody that they are just as bad. This might feel good in the short term, but it causes long term harm to both the person who does it and the community they are a part of. This ties into what I wrote above, because even if you aren’t naturally a negative person, if you respond to negativity with more of the same, you’ll ultimately share the same fate. Instead use that negativity as fuel to drive you forward in a positive way, respond with coolness, thoughtfulness and introspection and not only will you disarm the person who started it, you’ll attract far more of the kind of people and interactions that you want.

Know your audience

Your audience isn’t the person or people you are talking to. Your audience is the people who hear you. A common defense of Linus’ berating of kernel contributors is that he only does it to people he knows can take it. This defense is almost always countered, quite properly, by somebody pointing out that his actions are seen by far more than just their intended recipient. Whenever you interact with any member of your community in a public space, such as a forum or mailing list, treat it as if you were interacting with every member, because you are. Again, if you perpetuate negativity in your community, you will foster negativity in your community, either directly in response to you or indirectly by driving away those who are more positive in nature. Linus’ actions might be seen as a joke, or as necessary “tough love” to get the job done, but the LKML has a reputation of being inhospitable to potential contributors in no small part because of them. You can gather a large number of negative, or negativity-accepting, people into a community and get a lot of work done, but it’s easier and in my opinion better to have a large number of positive people doing it.

Monoculture is dangerous

I think all of us in the open source community know this, and most of us have said it at least once to somebody else. As noted security researcher Bruce Schneier says, “monoculture is bad; embrace diversity or die along with everyone else.” But it’s not just dangerous for software and agriculture, it’s dangerous for communities too. Communities need, desperately need, diversity, and not just for the immediate benefits that various opinions and perspectives bring. Including minorities in your community will point out flaws you didn’t know existed, because they didn’t affect anyone else; but a distro-specific bug in upstream is still a bug, and a minority-specific flaw in your community is still a flaw. Communities that are almost all male, or white, or western, aren’t necessarily bad because of their monoculture, but they should certainly consider themselves vulnerable and deficient because of it. Bringing in diversity will strengthen them, and adding a minority contributor will ultimately benefit a project more than adding another member of the majority. When somebody from a minority tells you there is a problem in your community that you didn’t see, don’t try to defend it by pointing out that it doesn’t affect you; instead treat it like you would a normal bug report from somebody on different hardware than yours.

Good people are human too

The appendix is a funny organ. Most of the time it’s just there, innocuous or maybe even slightly helpful. But every so often one happens to, for whatever reason, explode and try to kill the rest of the body. People in a community do this too.  I’ve seen a number of people that were good or even great contributors who, for whatever reason, had to explode and they threatened to take down anything they were a part of when it happened. But these people were no more malevolent than your appendix is, they aren’t bad, even if they do need to be removed in order to avoid lasting harm to the rest of the body. Sometimes, once whatever caused their eruption has passed, these people can come back to being a constructive part of your community.

Love the whole, not the parts

When you look at it, all of it, the open source community is a marvel of collaboration, of friendship and family. Yes, family. I know I’m not alone in feeling this way about people I may not have ever met in person. And just like family you love them during the good and the bad. There are some annoying, obnoxious people in our family. There are good people who are sometimes annoying and obnoxious. But neither of those truths changes the fact that we are still a part of an amazing, inspiring, wonderful community of open source contributors and enthusiasts.

Read more