Canonical Voices

UbuntuTouch

For many developers, you may not be running Ubuntu as your operating system. In that case you need to install VirtualBox on your own OS (OS X or Windows) and install Ubuntu and the Ubuntu SDK inside it. To make this easier, we have prepared an image that contains Ubuntu Utopic (14.10) with the Ubuntu SDK already installed, so you can download and set everything up in one go. The installation steps are described below.

1) Download the latest VirtualBox from https://www.virtualbox.org/wiki/Downloads

Download VirtualBox


Note: when downloading VirtualBox, make sure you pick the build that matches your host system.


2) Double-click the downloaded VirtualBox installer and install it

3) Download the Ubuntu virtual machine (a minimal Ubuntu 14.10 desktop with the Ubuntu SDK already installed in it)

4) Once the download finishes, double-click the downloaded file "ubuntu+sdk.ova" to import it into VirtualBox, then run it


Note: the username and password needed throughout the installation are "ubuntu/ubuntu"

After the SDK is installed, you can follow the article "How to install Ubuntu OS under VirtualBox" to set up a Chinese input method and file sharing, and the article "Ubuntu SDK installation" to install the "armhf" and "i386" chroots. Building the chroots can take quite a while, so please be patient. Once the whole process is finished, we are ready to move on to development.

Author: UbuntuTouch, posted 2014-10-17 9:50:48 (original link)

Read more
Nicholas Skaggs

The final images of what will become utopic are here! Yes, in just one short week utopic unicorn will be released into the world. Celebrate this exciting release and be among the first to run utopic by helping us test!

We need your help and test results, both positive and negative. Please head over to the milestone on the isotracker, select your favorite flavor, and perform the needed tests against the images.

If you've never submitted test results for the iso tracker, check out the handy links on top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the QA community or me for help.

Thank you for helping to make ubuntu better! Happy Testing!

Read more
Robbie Williamson

The following is an update on Ubuntu’s response to the latest Internet emergency security issue, POODLE (CVE-2014-3566), in combination with an SSLv3 downgrade vulnerability.

Vulnerability Summary

“SSL 3.0 is an obsolete and insecure protocol. While for most practical purposes it has been replaced by its successors TLS 1.0, TLS 1.1, and TLS 1.2, many TLS implementations remain backwards­ compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. The protocol handshake provides for authenticated version negotiation, so normally the latest protocol version common to the client and the server will be used.” -https://www.openssl.org/~bodo/ssl-poodle.pdf

A vulnerability was discovered that affects the protocol negotiation between browsers and HTTP servers, where a man-in-the-middle (MITM) attacker is able to trigger a protocol downgrade (ie, force downgrade to SSLv3, CVE to be assigned).  Additionally, a new attack was discovered against the CBC block cipher used in SSLv3 (POODLE, CVE-2014-3566).  Because of this new weakness in the CBC block cipher and the known weaknesses in the RC4 stream cipher (both used with SSLv3), attackers who successfully downgrade the victim’s connection to SSLv3 can now exploit the weaknesses of these ciphers to ascertain the plaintext of portions of the connection through brute force attacks.  For example, an attacker who is able to manipulate the encrypted connection is able to steal HTTP cookies.  Note, the protocol downgrade vulnerability exists in web browsers and is not implemented in the ssl libraries.  Therefore, the downgrade attack is currently known to exist only for HTTP.

OpenSSL will be updated to guard against illegal protocol negotiation downgrades (TLS_FALLBACK_SCSV).  When the server and client are updated to use TLS_FALLBACK_SCSV, the protocol cannot be downgraded to below the highest protocol that is supported between the two (so if the client and the server both support TLS 1.2, SSLv3 cannot be used even if the server offers SSLv3).

The recommended course of action is ultimately for sites to disable SSLv3 on their servers, and for browsers to disable SSLv3 by default since the SSLv3 protocol is known to be broken.  However, it will take time for sites to disable SSLv3, and some sites will choose not to, in order to support legacy browsers (eg, IE6).  As a result, immediately disabling SSLv3 in Ubuntu in the openssl libraries, in servers or in browsers, will break sites that still rely on SSLv3.
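For application authors who link against OpenSSL directly, the same recommendation can be applied per application without waiting for a distribution-wide change. The following is a minimal sketch (not part of the USN or this post) showing how a program can refuse SSLv3 for its own connections; the function name is illustrative only:

#include <openssl/ssl.h>

/* Minimal sketch, not from the advisory: build an OpenSSL context that
 * negotiates the highest protocol version both ends support, with SSLv2
 * and SSLv3 explicitly disabled (TLS 1.0 and later remain available).
 * A real program would also call SSL_library_init() during start-up. */
SSL_CTX *make_tls_only_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());
    if (ctx == NULL)
        return NULL;

    SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
    return ctx;
}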

Ubuntu’s Response:

Unfortunately, this issue cannot be addressed in a single USN because this is a vulnerability in a protocol, and the Internet must respond accordingly (ie SSLv3 must be disabled everywhere).  Ubuntu’s response provides a path forward to transition users towards safe defaults:

  • Add TLS_FALLBACK_SCSV to openssl in a USN:  In progress, upstream openssl is bundling this patch with other fixes that we will incorporate
  • Follow Google’s lead regarding chromium and chromium content api (as used in oxide):
    • Add TLS_FALLBACK_SCSV support to chromium and oxide:  Done – Added by Google months ago.
    • Disable fallback to SSLv3 in next major version:  In Progress
    • Disable SSLv3 in future version:  In Progress
  • Follow Mozilla’s lead regarding Mozilla products:
    • Disable SSLv3 by default in Firefox 34:  In Progress – due Nov 25
    • Add TLS_FALLBACK_SCSV support in Firefox 35:  In Progress

Ubuntu currently will not:

  • Disable SSLv3 in the OpenSSL libraries at this time, so as not to break compatibility where it is needed
  • Disable SSLv3 in Apache, nginx, etc, so as not to break compatibility where it is needed
  • Preempt Google’s and Mozilla’s plans.  The timing of their response is critical to giving sites an opportunity to migrate away from SSLv3 to minimize regressions

For more information on Ubuntu security notices that affect the current supported releases of Ubuntu, or to report a security vulnerability in an Ubuntu package, please visit http://www.ubuntu.com/usn/.

 

Read more
UbuntuTouch

In a previous article I described how to use the U1db and SQLite offline storage APIs to store some of an application's state. In this article I will show how to use Qt.labs.settings to store application state. For a more detailed introduction, please see the link.


First, we create a minimal "App with Simple UI" template application and modify the file "main.qml" as follows:

import QtQuick 2.0
import Ubuntu.Components 1.1
import Qt.labs.settings 1.0

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "com.ubuntu.developer.liu-xiao-guo.settings"

    /*
     This property enables the application to change orientation
     when the device is rotated. The default is false.
    */
    //automaticOrientation: true

    // Removes the old toolbar and enables new features of the new header.
    useDeprecatedToolbar: false

    width: units.gu(50)
    height: units.gu(75)

    Page {
        title: i18n.tr("Simple")

        Column {
            anchors.fill: parent


            Label {
                text: "Please input a string below:"
                fontSize: "large"
            }

            TextField {
                id: myTextField
                text: settings.input
                placeholderText: "please input a string"

                onTextChanged: {
                    settings.input = text
                }
            }

            Button {
                text: "Get category"
                onClicked: {
                    console.log("settings category:" + settings.category);
                }
            }
        }

        Settings {
            id: settings
            property string input: "unknown"
        }

        Component.onDestruction: {
            settings.input = myTextField.text
        }
    }
}

Remember that we must import Qt.labs.settings here. We first bind the value of myTextField to the input property of settings, and when the application exits we store the value like this:


        Component.onDestruction: {
            settings.input = myTextField.text
        }

In our application we also use the following approach: every time myTextField changes, we save the value right away. Which method you choose depends on what your final application actually needs.

            TextField {
                id: myTextField
                text: settings.input
                placeholderText: "please input a string"

                onTextChanged: {
                    settings.input = text
                }
            }

Run the application: change the value in myTextField and quit. The next time the app starts you will see that the previously entered value is read back and placed in myTextField.



The complete source code for this test is at: bzr branch lp:~liu-xiao-guo/debiantrial/settingsqml

Author: UbuntuTouch, posted 2014-10-16 15:18:13 (original link)

Read more
UbuntuTouch

Essential QML basics: UI layout management


Overview

If you have built UIs with Qt you will be familiar with QHBoxLayout, QVBoxLayout and QGridLayout, the three most important and most frequently used layout managers. So how are UI layouts controlled and managed in QML? This article introduces the basics.

First of all, QML does allow you to hard-code position values directly in your code, but this makes it hard to adapt the UI to changes and hard to maintain. We therefore recommend not writing raw coordinates and instead using the three positioners Row, Column and Grid, or laying items out with anchors.

Row

The Row element in QML arranges its children in a single row without overlapping. Its spacing property defines the distance between children. For example, the following code produces the effect shown in the figure:

Row {
    spacing: 2
    Rectangle { color: "red"; width: 50; height: 50 }
    Rectangle { color: "green"; width: 20; height: 50 }
    Rectangle { color: "blue"; width: 50; height: 20 }
}

Row.png




Column

The Column element in QML arranges its children in a single column without overlapping. Its spacing property defines the distance between children. For example, the following code produces the effect shown in the figure:

Column {
    spacing: 2
    Rectangle { color: "red"; width: 50; height: 50 }
    Rectangle { color: "green"; width: 20; height: 50 }
    Rectangle { color: "blue"; width: 50; height: 20 }
}

Column.png





Grid

The Grid element in QML lays its children out evenly in a grid without overlapping; each child is placed at the (0, 0) position, i.e. the top-left corner, of its grid cell. The rows and columns properties define the number of rows and columns, and the number of columns defaults to 4. The spacing property defines the distance between grid cells; note that the same spacing applies both horizontally and vertically. For example, the following code produces the effect shown in the figure:

Grid {
    columns: 3
    spacing: 2
    Rectangle { color: "red"; width: 50; height: 50 }
    Rectangle { color: "green"; width: 20; height: 50 }
    Rectangle { color: "blue"; width: 50; height: 20 }
    Rectangle { color: "cyan"; width: 50; height: 50 }
    Rectangle { color: "magenta"; width: 10; height: 10 }
}

Grid.png




Combining them

We can also mix Grid, Row and Column. For example, the following code produces the effect shown in the figure:

Column {
    spacing: 2
    Rectangle { color: "red"; width: 50; height: 50 }
    Row {
        spacing: 2
        Rectangle { color: "yellow"; width: 50; height: 50 }
        Rectangle { color: "black"; width: 20; height: 50 }
        Rectangle { color: "blue"; width: 50; height: 20 }
    }
    Rectangle { color: "green"; width: 20; height: 50 }
}

Combine.png






Anchor

Every item can be thought of as having 7 invisible "anchor lines": left, horizontalCenter, right, top, verticalCenter, baseline and bottom, as shown in the figure below: Anchor1.png

The baseline is the line on which text sits; it is not marked in the figure above, and for an item with no text the baseline is at the same position as the top. Beyond these lines, the anchoring system also provides margins and offsets: a margin is the amount of space left between an item and its surroundings, while offsets shift the layout along the center anchor lines. As shown in the figure below:

Anchor2.png





With the QML anchoring system we can define relationships between the anchor lines of different items. For example:

Rectangle { id: rect1; ... }
Rectangle { id: rect2; anchors.left: rect1.right; anchors.leftMargin: 5; ... }

Result: Anchor3.png




We can also use multiple anchors:

Rectangle { id: rect1; ... }
Rectangle { id: rect2; anchors.left: rect1.right; anchors.top: rect1.bottom; ... }

Result: Anchor4.png





By defining multiple horizontal or vertical anchors we can also control an item's size, for example:

Rectangle { id: rect1; x: 0; ... }
Rectangle { id: rect2; anchors.left: rect1.right; anchors.right: rect3.left; ... }
Rectangle { id: rect3; x: 150; ... }

Result: Anchor5.png


Note: for efficiency reasons, an item may only be anchored to its siblings and its direct parent. For example, the following definition is invalid:

Item {
    id: group1
    Rectangle { id: rect1; ... }
}
Item {
    id: group2
    Rectangle { id: rect2; anchors.left: rect1.right; ... } // invalid anchor!
}
This article is based on the original post "QML入门必备基础知识之——UI布局管理" (Essential QML basics: UI layout management).
Author: UbuntuTouch, posted 2014-10-16 9:41:16 (original link)

Read more
David Callé

Scopes come with a very flexible customization system. From picking the text color to rearranging how results are laid out, a scope can easily look like a generic RSS reader, a music library or even a store front.

In this new article, you will learn how to make your scope shine by customizing its results, changing its colors, adding a logo and adapting its layout to present your data in the best possible way. Read…


Read more
Michael Hall

This is a guest post from Will Cooke, the new Desktop Team manager at Canonical. It’s being posted here while we work to get a blog set up on unity.ubuntu.com, which is where you can find out more about Unity 8 and how to get involved with it.

Intro

Understandably, most of the Ubuntu news recently has focused around phones. There is a lot of excitement and anticipation building around the imminent release of the first devices.  However, the Ubuntu Desktop has not been dormant during this time.  A lot of thought and planning has been given to what the desktop will become in the future; who will use it and what will they use it for.  All the work which is going in to the phone will be directly applicable to the desktop as well, since they will use the same code.  All the apps, the UI tweaks, everything which makes applications secure and stable will all directly apply to the desktop as well.  The plan is to have the single converged operating system ready for use on the desktop by 16.04.

The plan

We learned some lessons during the early development of Unity 7. Here’s what happened:

  • 11.04: New Unity as default
  • 11.10: New Unity version
  • 12.04: Unity in First LTS

What we’ve decided to do this time is to keep the same, stable Unity 7 desktop as the default while offering users who want to opt in to Unity 8 the option to use that desktop. As development continues the Unity 8 desktop will get better and better.  It will benefit from a lot of the advances which have come about through the development of the phone OS and will benefit from continual improvements as the releases happen.

  • 14.04 LTS: Unity 7 default / Unity 8 option for the first time
  • 14.10: Unity 7 default / Unity 8 new rev as an option
  • 15.04: Unity 7 default / Unity 8 new rev as an option
  • 15.10: Potentially Unity 8 default / Unity 7 as an option
  • 16.04 LTS: Unity 8 default / Unity 7 as an option

As you can see, this gives us a full 2 cycles (in addition to the one we’ve already done) to really nail Unity 8 with the level of quality that people expect. So what do we have?

How will we deliver Unity 8 with better quality than 7?

Continuous Integration is the best way for us to achieve and maintain the highest quality possible.  We have put a lot of effort into automating as much of the testing as we can; the best testing is that which is performed easily.  Before every commit the changes get reviewed and approved – this is the first line of defense against bugs.  Every merge request triggers a run of the tests, the second line of defense against bugs and regressions – if a change broke something we find out about it before it gets into the build.

The CI process builds everything in a “silo”, a self contained & controlled environment where we find out if everything works together before finally landing in the image.

And finally, we have a large number of tests which run against those images. This really is a “belt and braces” approach to software quality and it all happens automatically.  As you can see, we are taking the quality of our software very seriously.

What about Unity 7?

Unity 7 and Compiz have a team dedicated to maintenance and bug fixes, so their quality continues to improve with every release.  For example: windows no longer switch workspaces when a monitor gets unplugged, mice with 6 buttons now work, and support for the new version of Metacity (in case you want to use the GNOME 2 desktop) has been added (incidentally, a lot of that work was done by a community contributor – thanks Alberts!).

Unity 7 is the desktop environment for a lot of software developers, devops gurus, cloud platform managers and millions of users who rely on it to help them with their everyday computing.  We don’t want to stop you being able to get work done.  This is why we continue to maintain Unity 7 while we develop Unity 8.  If you want to take Unity 8 for a spin and see how it’s coming along then you can; if you want to get your work done, we’re making that experience better for you every day.  Best of all, both of these options are available to you with no detriment to the other.

Things that we’re getting in the new Ubuntu Desktop

  1. Applications decoupled from the OS updates.  Traditionally a given release of Ubuntu has shipped with the versions of the applications available at the time of release.  Important updates and security fixes are back-ported to older releases where required, but generally you had to wait for the next release to get the latest and greatest set of applications.  The new desktop packaging system means that application developers can push updates out when they are ready and the user can benefit right away.
  2. Application isolation.  Traditionally applications can access anything the user can access; photos, documents, hardware devices, etc.  On other platforms this has led to data being stolen or rendered otherwise unusable.  Isolation means that without explicit permission any Click packaged application is prevented from accessing data you don’t want it to access.
  3. A full SDK for writing Ubuntu apps.  The SDK which many people are already using to write apps for the phone will allow you to write apps for the desktop as well.  In fact, your apps will be write once run anywhere – you don’t need to write a “desktop” app or a “phone” app, just an Ubuntu app.

What we have now

The easiest way to try out the Unity 8 Desktop Preview is to use the daily Ubuntu Desktop Next live image:   http://cdimage.ubuntu.com/ubuntu-desktop-next/daily-live/current/   This will allow you to boot into a Unity 8 session without touching your current installation.  An easy 10 step way to write this image to a USB stick is:

  1. Download the ISO
  2. Insert your USB stick in the knowledge that it’s going to get wiped
  3. Open the “Disks” application
  4. Choose your USB stick and click on the cog icon on the righthand side
  5. Choose “Restore Disk Image”
  6. Browse to and select the ISO you downloaded in #1
  7. Click “Start restoring”
  8. Wait
  9. Boot and select “Try Ubuntu….”
  10. Done *

* Please note – there is currently a bug affecting the Unity 8 greeter which means you are not automatically logged in when you boot the live image.  To log in you need to:

  1. Switch to vt1 (ctrl-alt-f1)
  2. type “passwd” and press enter
  3. press enter again to set the current password to blank
  4. enter a new password twice
  5. Check that the password has been successfully changed
  6. Switch back to vt7 (ctrl-alt-f7)
  7. Enter the new password to login

 

Here are some screenshots showing what Unity 8 currently looks like on the desktop:


The team

The people working on the new desktop are made up of a few different disciplines.  We have a team dedicated to Unity 7 maintenance and bug fixes who are also responsible for Unity 8 on the desktop and feed in a lot of support to the main Unity 8 & Mir teams. We have the Ubuntu Desktop team who are responsible for many aspects of the underlying technologies used such as GNOME libraries, settings, printing etc as well as the key desktop applications such as Libreoffice and Chromium.  The Ubuntu desktop team has some of the longest serving members of the Ubuntu family, with some people having been here for the best part of ten years.

How you can help

We need to log all the bugs which need to be fixed in order to make Unity 8 the best desktop there is.  Firstly, we need people to test the images and log bugs.  If developers want to help fix those bugs, so much the better.  Right now we are focusing on identifying where the work done for the phone doesn’t work as expected on the desktop.  Once those bugs are logged and fixed we can rely on the CI system described above to make sure that they stay fixed.

Link to daily ISOs:  http://cdimage.ubuntu.com/ubuntu-desktop-next/daily-live/current/

Bugs:  https://bugs.launchpad.net/ubuntu/+source/unity8-desktop-session

IRC:  #ubuntu-desktop on Freenode

Read more
UbuntuTouch

In some earlier articles we introduced how to create a scope using the Qt and C++ APIs; those were all fairly basic scopes. In this article we will introduce the department scope and learn how to develop one. A department scope lets you search by category within the scope. More information about scopes can be found at http://developer.ubuntu.com/scopes/. The final UI of our scope looks like this:


      


1) What is a department scope?


First of all, frankly, it is hard to find an exact Chinese term for it, so let's just call it a "department" scope. In the left screenshot above, there is a drop-down arrow immediately to the right of "美食" (food). Tapping it opens the menu shown in the middle screenshot. In other words, we can search each Dianping category separately instead of listing results from every area together. For example, if I am looking for something to eat, I only want restaurant-related information, not results about beauty or entertainment. By analysing the Dianping API endpoint

http://api.dianping.com/v1/business/find_businesses?appkey=3562917596&sign=16B7FAB0AE9C04F356C9B1BE3BB3B77829F83EDA&category=美食&city=上海&latitude=31.18268013000488&longitude=121.42769622802734&sort=1&limit=20&offset_type=1&out_offset_type=1&platform=2

we can use "category" as our department, so that each area can be searched separately. We can also retrieve all of Dianping's categories through the API endpoint:

http://api.dianping.com/v1/metadata/get_categories_with_businesses

The details of this API endpoint can be found at the link.

2) Creating a basic scope

First, open the Ubuntu SDK to create a minimal application. Choose the menu "New file or Project" or press "Ctrl+N", and select the "Unity Scope" template.



We give the application the name "dianping" and select "Empty scope" as the template type:

  
This creates a most basic scope. We can run it, although it does not do much yet. To make sure we can run and see our scope, click "Projects" and, under "Run", set the "Run Configuration" to "dianping".



3) Adding Qt support

Under the project's "src" directory there are two directories: api and scope. The code under api is mainly there to access our web service and obtain JSON or XML data. In this project we are not going to use the client class from that directory; interested developers can try keeping their own client code separate from the scope code.

First open the CMakeLists.txt file in "src" and add the following lines:

add_definitions(-DQT_NO_KEYWORDS)
find_package(Qt5Network REQUIRED)
find_package(Qt5Core REQUIRED)     
find_package(Qt5Xml REQUIRED)      

include_directories(${Qt5Core_INCLUDE_DIRS})    
include_directories(${Qt5Network_INCLUDE_DIRS})
include_directories(${Qt5Xml_INCLUDE_DIRS})    

....

# Build a shared library containing our scope code.
# This will be the actual plugin that is loaded.
add_library(
  scope SHARED
  $<TARGET_OBJECTS:scope-static>
)

qt5_use_modules(scope Core Xml Network) 

# Link against the object library and our external library dependencies
target_link_libraries(
  scope
  ${SCOPE_LDFLAGS}
  ${Boost_LIBRARIES}
)

As you can see, we have added the Qt Core, Xml and Network libraries. We also open the "tests/unit/CMakeLists.txt" file and add "qt5_use_modules(scope-unit-tests Core Xml Network)":

# Our test executable.
# It includes the object code from the scope
add_executable(
  scope-unit-tests
  scope/test-scope.cpp
  $<TARGET_OBJECTS:scope-static>
)

# Link against the scope, and all of our test lib dependencies
target_link_libraries(
  scope-unit-tests
  ${GTEST_BOTH_LIBRARIES}
  ${GMOCK_LIBRARIES}
  ${SCOPE_LDFLAGS}
  ${TEST_LDFLAGS}
  ${Boost_LIBRARIES}
)

qt5_use_modules(scope-unit-tests Core Xml Network)

# Register the test with CTest
add_test(
  scope-unit-tests
  scope-unit-tests
)

Rebuild the project and fix any remaining compile errors.

We also need to modify scope.cpp. Here we add a "QCoreApplication" member, mainly so that we can use the signal/slot mechanism and create a Qt application. Modify scope.h to add the QCoreApplication member app together with a forward declaration of the class, and also add a "run" method.

class QCoreApplication; // added

namespace scope {
class Scope: public unity::scopes::ScopeBase {
public:
    void start(std::string const&) override;
    void stop() override;
    void run(); // added
    unity::scopes::PreviewQueryBase::UPtr preview(const unity::scopes::Result&,
                                                  const unity::scopes::ActionMetadata&) override;
    unity::scopes::SearchQueryBase::UPtr search(
            unity::scopes::CannedQuery const& q,
            unity::scopes::SearchMetadata const&) override;

protected:
    api::Config::Ptr config_;
    QCoreApplication *app; //added
};

Then open scope.cpp and make the following changes:

#include <QCoreApplication> // added

...

void Scope::stop() {
    /* The stop method should release any resources, such as network connections where applicable */
    delete app;
}

void Scope::run()
{
    int zero = 0;
    app = new QCoreApplication(zero, nullptr);
}

In this way each of our scopes is effectively also a running Qt application. Rebuild the scope and run it on the desktop. At this point we have added basic Qt support to the skeleton; in the sections below we complete the remaining parts step by step.

4) Code walkthrough

src/scope/scope.cpp


This file defines a class derived from unity::scopes::ScopeBase. It provides the entry-point interface that clients use to interact with the scope.
  • This class defines "start", "stop" and "run" to run the scope. Most developers do not need to change most of this class's implementation; in our example we will not change it at all
  • It also implements two other methods: search and preview. Generally we do not need to modify these two methods either, but the functions they call must be implemented in the corresponding files
Note: you can get to know the API better by studying the Scope API header files. More detailed descriptions are available at http://developer.ubuntu.com/api/scopes/sdk-14.10/.

In the previous section we already did most of the work on this class, and for most scopes not much needs to change. For our scope we want to use a cache to buffer the data, which makes the scope feel more fluid. We modify the search function as follows:

sc::SearchQueryBase::UPtr Scope::search(const sc::CannedQuery &query,
                                        const sc::SearchMetadata &metadata) {

    const QString scopePath = QString::fromStdString(scope_directory());
    const QString cachePath =QString::fromStdString(cache_directory());

    // Boilerplate construction of Query
    return sc::SearchQueryBase::UPtr(new Query(query, metadata, scopePath,cachePath, config_));
}

We also have to modify the constructor of the Query class so that the code compiles:

Query::Query(const sc::CannedQuery &query, const sc::SearchMetadata &metadata, QString const& scopeDir,
        QString const& cacheDir, Config::Ptr config) :
        sc::SearchQueryBase( query, metadata ),
        m_scopeDir( scopeDir ),
        m_cacheDir( cacheDir ),
        client_(config)
{
    qDebug() << "CacheDir: " << m_cacheDir;
    qDebug() << "ScopeDir " <<  m_scopeDir;

    qDebug() << m_urlRSS;
}

Of course, remember to add the member variables m_scopeDir and m_cacheDir to the Query header file:

class Query: public unity::scopes::SearchQueryBase {

....

private:
    QString m_scopeDir;
    QString m_cacheDir;
    ....
};

Rebuild the scope. If you run into any problems at this point, you can download my source code:

bzr branch lp:~liu-xiao-guo/debiantrial/dianpingdept1

You can use it as the basis for the exercises that follow.

src/scope/query.cpp


This file defines a class derived from unity::scopes::SearchQueryBase.
This class produces the query results for the query string supplied by the user; the results may be JSON or XML based. The class processes the returned data so that it can be displayed.

  • Get the query string entered by the user
  • Send the request to the web service
  • Generate the search results (this differs from scope to scope)
  • Create the result categories (for example different layouts: grid/carousel)
  • Bind the results to the appropriate categories to get the UI we want
  • Push the categories so they are shown to the end user
  • Almost all of the code lives in the "run" method. Here we add a "QCoreApplication" variable, mainly so that we can use the signal/slot mechanism.
Next we modify "run" to perform the search. The Dianping API requires the request URL to be signed, so for convenience I defined the following helper method.

QString Query::getUrl(QString addr, QMap<QString, QString> map) {
    QCryptographicHash generator(QCryptographicHash::Sha1);

    QString temp;
    temp.append(appkey);
    QMapIterator<QString, QString> i(map);
    while (i.hasNext()) {
        i.next();
        // qDebug() << i.key() << ": " << i.value();
        temp.append(i.key()).append(i.value());
    }

    temp.append(secret);

    qDebug() << temp;

    qDebug() << "UTF-8: " << temp.toUtf8();

    generator.addData(temp.toUtf8());
    QString sign = generator.result().toHex().toUpper();

    QString url;
    url.append(addr);
    url.append("appkey=");
    url.append(appkey);

    url.append("&");
    url.append("sign=");
    url.append(sign);

    i.toFront();
    while (i.hasNext()) {
        i.next();
        // qDebug() << i.key() << ": " << i.value();
        url.append("&").append(i.key()).append("=").append(i.value());
    }

    qDebug() << "Final url: " << url;
    return url;
}

The "appkey" and "secret" used here are two QString constants that developers need to apply for on the Dianping website. The addr argument is the leading part of the request URL, for example http://api.dianping.com/v1/metadata/get_categories_with_businesses, and the map holds the request parameters as key/value pairs. We use this method to build the URL for our departments, as follows:

Query::Query(const sc::CannedQuery &query, const sc::SearchMetadata &metadata, QString const& scopeDir,
        QString const& cacheDir, Config::Ptr config) :
        sc::SearchQueryBase( query, metadata ),
        m_scopeDir( scopeDir ),
        m_cacheDir( cacheDir ),
        // m_limit( 0 ),
        client_(config)
{
    qDebug() << "CacheDir: " << m_cacheDir;
    qDebug() << "ScopeDir " <<  m_scopeDir;

    QMap<QString,QString> map;
    map["format"] = "xml";

    m_urlRSS = getUrl(DEPARTMENTS,  map);
    qDebug() << "m_urlRSS: " << m_urlRSS;
}

We can add the code above to the constructor of the Query class. DEPARTMENTS is defined as follows:

const QString DEPARTMENTS = "http://api.dianping.com/v1/metadata/get_categories_with_businesses?";

We can print the resulting URL to the Application Output window:

m_urlRSS:  "http://api.dianping.com/v1/metadata/get_categories_with_businesses?appkey=3562917596&sign=4BAF8DD42A36538E17207A1C10F819571B00BF6E&format=xml"

If we paste the resulting URL into a browser, we see the following:




Next we need to fetch the XML data above over the network and parse it. To obtain the departments we need, we modify the "run" method as follows:

void Query::run(sc::SearchReplyProxy const& reply) {
    qDebug() <<  "Run is started .............................!";

    // Create an instance of disk cache and set cache directory
    m_diskCache = new QNetworkDiskCache();
    m_diskCache->setCacheDirectory(m_cacheDir);

    QEventLoop loop;

    QNetworkAccessManager managerDepts;
    QObject::connect(&managerDepts, SIGNAL(finished(QNetworkReply*)), &loop, SLOT(quit()));
    QObject::connect(&managerDepts, &QNetworkAccessManager::finished,
                     [reply,this](QNetworkReply *msg){
        if( msg->error()!= QNetworkReply::NoError ){
            qWarning() << "failed to retrieve raw data, error:" << msg->error();
            rssError(reply,ERROR_Connection);
            return;
        }
        QByteArray data = msg->readAll();

        // qDebug() << "XML data is: " << data.data();

        QString deptUrl = rssDepartments( data, reply );

        CannedQuery cannedQuery = query();
        QString deptId = qstr(cannedQuery.department_id());
        qDebug() << "department id: " << deptId;

        if (!query().department_id().empty()){ // needs departments support
            qDebug() << "it is not empty xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx!";
            deptUrl = m_depts[deptId];
            qDebug() << "depatUrl: " << deptUrl;
        } else {
            qDebug() << "It is empty ===================================!";
        }

        if ( deptUrl.isEmpty() )
            return;
   });
    managerDepts.setCache(m_diskCache);
    managerDepts.get(QNetworkRequest(QUrl(m_urlRSS)));
    loop.exec();
}

This is actually quite simple: we issue a request for m_urlRSS and pass the result to rssDepartments, which parses the returned XML. Each department is identified by a department_id, a string that is unique among the departments. rssDepartments is implemented as follows:

QString Query::rssDepartments( QByteArray &data, unity::scopes::SearchReplyProxy const& reply ) {
    QDomElement docElem;
    QDomDocument xmldoc;
    DepartmentList rss_depts;
    QString firstname = "";

    CannedQuery myquery( SCOPE_NAME );
    myquery.set_department_id( TOP_DEPT_NAME );

    Department::SPtr topDept;

    if ( !xmldoc.setContent(data) ) {
        qWarning()<<"Error importing data";
        return firstname;
    }

    docElem = xmldoc.firstChildElement("results");
    if (docElem.isNull()) {
        qWarning() << "Error in data," << "results" << " not found";
        return firstname;
    }

    docElem = docElem.firstChildElement("categories");
    if ( docElem.isNull() ) {
        qWarning() << "Error in data," << "categories" << " not found";
        return firstname;
    }

    docElem = docElem.firstChildElement("category");

    // Clear the previous departments since the URL may change according to settings
    m_depts.clear();

    int index = 0;
    while ( !docElem.isNull() ) {

        QString category = docElem.attribute("name","");
        qDebug() << "category: " << category;

        if ( !category.isEmpty() ) {
            QString url = getDeptUrl(category);

            QString deptId = QString::number(index);

            if (firstname.isEmpty()) {
                //Create the url here
                firstname = url;
                topDept = move(unity::scopes::Department::create( "",
                                                                  myquery, category.toStdString()));
            } else {
                Department::SPtr aDept = move( unity::scopes::Department::create( deptId.toStdString(),
                                              myquery, category.toStdString() ) );
                rss_depts.insert( rss_depts.end(), aDept );
            }

            m_depts.insert( QString::number(index), url );
            index++;
        }

        docElem = docElem.nextSiblingElement("category");
    }

    // Dump the deparmemts
    QMapIterator<QString, QString> i(m_depts);
    while (i.hasNext()) {
        i.next();
         qDebug() << i.key() << ": " << i.value();
    }

    topDept->set_subdepartments( rss_depts );

     try {
        reply->register_departments( topDept );
    } catch (std::exception const& e) {
        qWarning() << "Error happened: " << e.what();
    }

    return firstname;
}

This method parses the data and creates the corresponding departments. The complete code is at

bzr branch lp:~liu-xiao-guo/debiantrial/dianpingdept2

Run the scope and you can see the generated departments.
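The getDeptUrl() helper called inside rssDepartments() above is not listed in this post; it lives in the branch. As a rough sketch of the idea, it can be built on top of getUrl() using the find_businesses parameters shown earlier; the literal values below are simply the example values from that URL and would normally come from settings or location data:

const QString FIND_BUSINESSES = "http://api.dianping.com/v1/business/find_businesses?";

// Hypothetical sketch of getDeptUrl(); the real implementation is in the
// dianpingdept branches. It signs a find_businesses request for the given
// category via getUrl().
QString Query::getDeptUrl(QString category) {
    QMap<QString, QString> map;
    map["category"] = category;
    map["city"] = "上海";                    // example values taken from the URL shown earlier
    map["latitude"] = "31.18268013000488";
    map["longitude"] = "121.42769622802734";
    map["sort"] = "1";
    map["limit"] = "20";
    map["offset_type"] = "1";
    map["out_offset_type"] = "1";
    map["platform"] = "2";

    return getUrl(FIND_BUSINESSES, map);
}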



Obviously we do not see any results yet, because we are not yet searching within a department. Next, following the article "How to obtain location information in an Ubuntu Scope", we set up retrieval of the location we need. On the phone the location can be obtained via the network or GPS; on the desktop this is not supported yet. With the location in hand, we query Dianping for places nearby.

We now modify "run" further to query within the selected department:

void Query::run(sc::SearchReplyProxy const& reply) {
    qDebug() <<  "Run is started .............................!";

    // Initialize the scopes
    initScope();

    // Get the current location of the search
    auto metadata = search_metadata();
    if ( metadata.has_location() ) {
        qDebug() << "Location is supported!";
        auto location = metadata.location();

        if ( location.has_altitude()) {
            cerr << "altitude: " << location.altitude() << endl;
            cerr << "longitude: " << location.longitude() << endl;
            cerr << "latitude: " << location.latitude() << endl;
            auto latitude = std::to_string(location.latitude());
            auto longitude = std::to_string(location.longitude());
            m_longitude = QString::fromStdString(longitude);
            m_latitude = QString::fromStdString(latitude);
        }

        if ( m_longitude.isEmpty() ) {
            m_longitude = DEFAULT_LONGITUDE;
        }
        if ( m_latitude.isEmpty() ) {
            m_latitude = DEFAULT_LATITUDE;
        }

        qDebug() << "m_longitude1: " << m_longitude;
        qDebug() << "m_latitude1: " << m_latitude;
    } else {
        qDebug() << "Location is not supported!";
        m_longitude = DEFAULT_LONGITUDE;
        m_latitude = DEFAULT_LATITUDE;
    }

    // Create an instance of disk cache and set cache directory
    m_diskCache = new QNetworkDiskCache();
    m_diskCache->setCacheDirectory(m_cacheDir);

    QEventLoop loop;

    QNetworkAccessManager managerDepts;
    QObject::connect(&managerDepts, SIGNAL(finished(QNetworkReply*)), &loop, SLOT(quit()));
    QObject::connect(&managerDepts, &QNetworkAccessManager::finished,
                     [reply,this](QNetworkReply *msg){
        if( msg->error()!= QNetworkReply::NoError ){
            qWarning() << "failed to retrieve raw data, error:" << msg->error();
            rssError(reply,ERROR_Connection);
            return;
        }
        QByteArray data = msg->readAll();

        // qDebug() << "XML data is: " << data.data();

        QString deptUrl = rssDepartments( data, reply );

        CannedQuery cannedQuery = query();
        QString deptId = qstr(cannedQuery.department_id());
        qDebug() << "department id: " << deptId;

        if (!query().department_id().empty()){ // needs departments support
            qDebug() << "it is not empty xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx!";
            deptUrl = m_depts[deptId];
            qDebug() << "depatUrl: " << deptUrl;
        } else {
            qDebug() << "It is empty ===================================!";
        }

        if ( deptUrl.isEmpty() )
            return;

        QEventLoop loop;
        QNetworkAccessManager managerRSS;
        QObject::connect( &managerRSS, SIGNAL(finished(QNetworkReply*)), &loop, SLOT(quit()));
        QObject::connect( &managerRSS, &QNetworkAccessManager::finished,
                         [reply,this](QNetworkReply *msg ){
            if( msg->error() != QNetworkReply::NoError ){
                qWarning() << "failed to retrieve specific dept raw data, error:" <<msg->error();
                rssError( reply, ERROR_Connection );
                return;
            }

            QByteArray data = msg->readAll();
            if( query().query_string().empty() ){
                rssImporter( data, reply, CATEGORY_HEADER );
            } else {
                rssImporter( data, reply, CATEGORY_SEARCH );
            }

        });
        managerRSS.setCache( m_diskCache );
        managerRSS.get( QNetworkRequest( QUrl(deptUrl)) );
        loop.exec();

    });
    managerDepts.setCache(m_diskCache);
    managerDepts.get(QNetworkRequest(QUrl(m_urlRSS)));
    loop.exec();
}

As you can see above, we define another QEventLoop. Here we issue a new request to the deptUrl we just obtained and pass the returned data to the rssImporter function for parsing.

void Query::rssImporter(QByteArray &data, unity::scopes::SearchReplyProxy const& reply, QString title) {
    QDomElement docElem;
    QDomDocument xmldoc;
    CannedQuery cannedQuery = query();
    QString query = qstr( cannedQuery.query_string() );

    if ( !xmldoc.setContent( data ) ) {
        qWarning()<<"Error importing data";
        return;
    }

    docElem = xmldoc.documentElement();
    //find result
    docElem = docElem.firstChildElement("businesses");
    if (docElem.isNull()) {
        qWarning()<<"Error in data,"<< "result" <<" not found";
        return;
    }

    CategoryRenderer rdrGrid(CR_GRID);
    CategoryRenderer rdrCarousel(CR_CAROUSEL);

    auto carousel = reply->register_category("dianpingcarousel", title.toStdString(), "", rdrCarousel);
    auto grid = reply->register_category("dianpinggrid", "", "", rdrGrid);
    bool isgrid = false;

    docElem = docElem.firstChildElement("business");

    while (!docElem.isNull()) {
        QString business_id = docElem.firstChildElement("business_id").text();
        // qDebug() << "business_id: " << business_id;

        QString name = docElem.firstChildElement("name").text();
        // qDebug() << "name: "  << name;

        // Let's get rid of the test info in the string
        name = removeTestInfo(name);

        QString branch_name = docElem.firstChildElement("branch_name").text();
        // qDebug() << "branch_name: " << branch_name;

        QString address = docElem.firstChildElement("address").text();
        // qDebug() << "address: " << address;

        QString telephone = docElem.firstChildElement("telephone").text();
        // qDebug() << "telephone: " << telephone;

        QString city = docElem.firstChildElement("city").text();
        // qDebug() << "city: " << city;

        QString photo_url = docElem.firstChildElement("photo_url").text();
        // qDebug() << "photo_url: " << photo_url;

        QString s_photo_url = docElem.firstChildElement("s_photo_url").text();
        // qDebug() << "s_photo_url: " << s_photo_url;

        QString rating_s_img_uri = docElem.firstChildElement("rating_s_img_uri").text();
        // qDebug() << "rating_s_img_uri: " << rating_s_img_uri;

        QString business_url = docElem.firstChildElement("business_url").text();
        // qDebug() << "business_url: " << business_url;

        QDomElement deals = docElem.firstChildElement("deals");
        QDomElement deal = deals.firstChildElement("deal");
        QString summary = deal.firstChildElement("description").text();
        // qDebug() << "Summary: " << summary;

        if ( !query.isEmpty() ) {
            if ( !name.contains( query, Qt::CaseInsensitive ) &&
                 !summary.contains( query, Qt::CaseInsensitive ) &&
                 !address.contains( query, Qt::CaseInsensitive ) ) {
                qDebug() << "it is going to be skipped";
                docElem = docElem.nextSiblingElement("business");
                continue;
            } else {
                qDebug() << "it is going to be listed!";
            }
        }

        docElem = docElem.nextSiblingElement("business");

        // for each result
        const std::shared_ptr<const Category> * top;

        if ( isgrid ) {
          top = &grid;
          isgrid = false;
        } else {
          isgrid = true;
          top = &carousel;
        }

        CategorisedResult catres((*top));

        catres.set_uri(business_url.toStdString());
        catres.set_dnd_uri(business_url.toStdString());
        catres.set_title(name.toStdString());
        catres["subtitle"] = address.toStdString();
        catres["summary"] = summary.toStdString();
        catres["fulldesc"] = summary.toStdString();
        catres.set_art(photo_url.toStdString());
        catres["art2"] = s_photo_url.toStdString();
        catres["address"] = Variant(address.toStdString());
        catres["telephone"] = Variant(telephone.toStdString());

        //push the categorized result to the client
        if (!reply->push(catres)) {
            break; // false from push() means search was cancelled
        }
    }

    qDebug()<<"parsing ended";
}

Note that in the code above we use the following snippet to match each department's results once more against the string typed into the search box, which narrows down what is displayed even further:

        if ( !query.isEmpty() ) {
            if ( !name.contains( query, Qt::CaseInsensitive ) &&
                 !summary.contains( query, Qt::CaseInsensitive ) &&
                 !address.contains( query, Qt::CaseInsensitive ) ) {
                qDebug() << "it is going to be skipped";
                docElem = docElem.nextSiblingElement("business");
                continue;
            } else {
                qDebug() << "it is going to be listed!";
            }
        }


Creating and registering CategoryRenderers

In this example we create two JSON objects. They are plain raw strings, as shown below, each with two fields: template and components. The template field defines which layout is used to display the search results; here we choose "grid" with a small card-size. The components field lets us map predefined fields onto the result data we want to show; here we add "title" and "art".

std::string CR_GRID = R"(
    {
        "schema-version" : 1,
        "template" : {
            "category-layout" : "grid",
            "card-size": "small"
        },
        "components" : {
            "title" : "title",
            "art" : {
                "field": "art",
                "aspect-ratio": 1.6,
                "fill-mode": "fit"
            }
        }
    }
)";
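The second renderer string, CR_CAROUSEL, is not listed in the post. A plausible definition following the same pattern might look like this (the "carousel" category-layout is part of the same renderer vocabulary; the exact card-size chosen here is an assumption):

// Hypothetical sketch of CR_CAROUSEL; the real definition is in the branch.
std::string CR_CAROUSEL = R"(
    {
        "schema-version" : 1,
        "template" : {
            "category-layout" : "carousel",
            "card-size": "large"
        },
        "components" : {
            "title" : "title",
            "art" : {
                "field": "art",
                "aspect-ratio": 1.6,
                "fill-mode": "fit"
            }
        }
    }
)";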

More about the CategoryRenderer class can be found in the docs.

We create a CategoryRenderer for each JSON object and register them with the reply object:

CategoryRenderer rdrGrid(CR_GRID);
CategoryRenderer rdrCarousel(CR_CAROUSEL);

QString title = queryString + "美味";

auto carousel = reply->register_category("dianpingcarousel", title.toStdString(), "", rdrCarousel);
auto grid = reply->register_category("dianpinggrid", "", "", rdrGrid);


We can print the retrieved data with qDebug for debugging. Quite a lot of code has been added here; you can download my code from the following address:


bzr branch lp:~liu-xiao-guo/debiantrial/dianpingdept3

Run the scope and you will see the following screens:

  

You can tap the images, but the preview part is not finished yet; below we will add more information to display what we have retrieved. You can also type a string to shorten the list, for example entering "朝阳区" (Chaoyang District) narrows the displayed list further.

src/dianping-preview.cpp

This file defines a class derived from unity::scopes::PreviewQueryBase.

This class defines the widgets and a layout used to present the result we found; as its name suggests, it is a preview of the result.

  • Define the widgets needed for the preview
  • Map the widgets to the data fields of the search result
  • Define layouts with different numbers of columns (depending on screen size)
  • Assign the widgets to the different columns of the layout
  • Push the widgets and layouts to the reply instance

Most of the code lives in "run". More about this class can be found at http://developer.ubuntu.com/api/scopes/sdk-14.10/previewwidgets/.

Preview

The preview has to create the widgets and connect their fields to the data items defined on the CategorisedResult. It is also responsible for building different layouts for different display environments (such as screen sizes), generating a different number of columns per environment.

Preview Widgets

These are a set of predefined widgets, each with a type from which it is created. You can find the list of preview widgets and the fields they provide here.

This example uses the following widgets:

  • header: has title and subtitle fields
  • image: has a source field indicating where to fetch the art from
  • text: has a text field
  • actions: shows an "Open" button; when the user taps it, the contained URI is opened

Here is an example that defines a PreviewWidget named "headerId"; the second argument is its type, "header".

PreviewWidget w_header("headerId", "header");
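To show how these widgets, their field mappings and a column layout fit together, here is a minimal sketch of a PreviewQueryBase::run() implementation. It is not the code from the dianpingdept4 branch: the class name Preview, the widget ids and the single-column layout are assumptions, while the field names match the ones pushed from query.cpp above.

#include <unity/scopes/ColumnLayout.h>
#include <unity/scopes/PreviewWidget.h>
#include <unity/scopes/PreviewReply.h>
#include <unity/scopes/VariantBuilder.h>

namespace sc = unity::scopes;

// Hypothetical sketch of the preview query's run() method, assuming a
// Preview class derived from sc::PreviewQueryBase declared in its own header.
void Preview::run(sc::PreviewReplyProxy const& reply) {
    // A single-column layout; extra layouts with more columns could be
    // registered for wider screens.
    sc::ColumnLayout layout1col(1);
    layout1col.add_column({"headerId", "imageId", "summaryId", "actionsId"});
    reply->register_layouts({layout1col});

    // header: map title/subtitle to the fields set on the CategorisedResult.
    sc::PreviewWidget w_header("headerId", "header");
    w_header.add_attribute_mapping("title", "title");
    w_header.add_attribute_mapping("subtitle", "subtitle");

    // image: show the photo stored in the "art" field.
    sc::PreviewWidget w_image("imageId", "image");
    w_image.add_attribute_mapping("source", "art");

    // text: the deal summary.
    sc::PreviewWidget w_summary("summaryId", "text");
    w_summary.add_attribute_mapping("text", "summary");

    // actions: an "Open" button that opens the result's URI.
    sc::PreviewWidget w_actions("actionsId", "actions");
    sc::VariantBuilder builder;
    builder.add_tuple({
        {"id", sc::Variant("open")},
        {"label", sc::Variant("Open")},
        {"uri", sc::Variant(result().uri())}
    });
    w_actions.add_attribute_value("actions", builder.end());

    reply->push({w_header, w_image, w_summary, w_actions});
}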

My code can be found at the following address:

bzr branch lp:~liu-xiao-guo/debiantrial/dianpingdept4


Rerun the scope and you will see the screens below. In the preview we can see more information, such as the telephone number, and tapping "Open" takes us to the web page for even more details.

  

  


5) Adding settings

Here we want to add a setting to the dianping scope, for example to return more search results instead of at most 20 each time. Following the article "How to define and read settings variables in an Ubuntu Scope", we make the limit configurable. First, add this function to the Query class:

// The following function is used to retrieve the settings for the scope
void Query::initScope()
{
    qDebug() << "Going to retrieve the settings!";

    unity::scopes::VariantMap config = settings();  // The settings method is provided by the base class
    if (config.empty())
        qDebug() << "CONFIG EMPTY!";

    m_limit = config["limit"].get_double();
    cerr << "limit: " << m_limit << endl;
}

and call it at the beginning of "run":

void Query::run(sc::SearchReplyProxy const& reply) {
    qDebug() <<  "Run is started .............................!";

    // Initialize the scopes
    initScope();

 ....
}

At the same time, do not forget to create the corresponding .ini file under the "data" directory (/dianping/data/com.ubuntu.developer.liu-xiao-guo.dianping_dianping-settings.ini), with the following content:

[limit]
type = number
defaultValue = 20
displayName = 搜寻条数

We also need to modify the CMakeLists.txt file under the "data" directory, adding the following to it:

configure_file(
  "com.ubuntu.developer.liu-xiao-guo.dianping_dianping-settings.ini"
  "${CMAKE_BINARY_DIR}/src/com.ubuntu.developer.liu-xiao-guo.dianping_dianping-settings.ini"
)

INSTALL(
  FILES "${CMAKE_BINARY_DIR}/src/com.ubuntu.developer.liu-xiao-guo.dianping_dianping-settings.ini"
  DESTINATION "${SCOPE_INSTALL_DIR}"
)

Run "Run CMake" so that the newly added .ini file shows up in the project. Rerun the scope, open its settings via the cog-like icon at the top right of the scope, and try changing the limit value to see the effect.

  

We can also replace the logo and icon files in the "data" directory to make our scope look more like a branded scope. All of the final source code can be downloaded from:

bzr branch lp:~liu-xiao-guo/debiantrial/dianpingdept5

If you have any suggestions, please let me know.



Author: UbuntuTouch, posted 2014-10-14 16:01:44 (original link)

Read more
Greg Lutostanski

Agenda

  • Review ACTION points from previous meeting

ACTION: all to review blueprint work items before next weeks meeting

  • U Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair

Minutes

Final Freeze 9 days out
  • Check on FTBFS packages — seems like there has been good progress
  • Make sure work items are up to date; if resources are needed, now is the time to ask.
  • Release bugs, no high priority ones, juju mirs and openstack bits are being worked.
  • kickinz1 brought up two bcache bugs (LP #1377130 and LP #1377142) to the kernel team for help.
Meeting Actions

None

Agree on next meeting date and time

Next meeting will be on Tuesday, Oct 14th at 16:00 UTC in #ubuntu-meeting.

IRC Log

http://ubottu.com/meetingology/logs/ubuntu-meeting/2014/ubuntu-meeting.2014-10-07-16.03.html

Read more
UbuntuTouch

In this article we explain how to add settings to our Ubuntu Scope. For a scope, we sometimes want settings so we can change what is displayed or redefine how the search behaves. For more on scope development, see: http://developer.ubuntu.com/scopes/

1) First create a basic scope

We first open the SDK and choose the "Unity Scope" template, naming the project "settingscope":



Next we choose "Empty scope". This creates our most basic scope.



2) Adding the code for the settings feature


First, open the "data" folder in the project and create a file with the following name:

com.ubuntu.developer.liu-xiao-guo.settingscope_settingscope-settings.ini

Note that this file name differs from the scope's configuration file

com.ubuntu.developer.liu-xiao-guo.settingscope_settingscope.ini

only slightly: it just has "-settings" appended before the .ini extension. Be careful never to break this rule; also note that the file name varies with the name of your project.

So that this file is configured and installed, we also need to add the following to the "CMakeLists.txt" file under the "data" directory:


configure_file(
  "com.ubuntu.developer.liu-xiao-guo.settingscope_settingscope-settings.ini"
  "${CMAKE_BINARY_DIR}/src/com.ubuntu.developer.liu-xiao-guo.settingscope_settingscope-settings.ini"
)

INSTALL(
  FILES "${CMAKE_BINARY_DIR}/src/com.ubuntu.developer.liu-xiao-guo.settingscope_settingscope-settings.ini"
  DESTINATION "${SCOPE_INSTALL_DIR}"
)

With this, our settings file gets installed onto the target. Next we can configure the settings file. Open it:

[location]
type = string
defaultValue = London
displayName = Location


[distanceUnit]
type = list
defaultValue = 1
displayName = Distance Unit
displayName[de] = Entfernungseinheit
displayValues = Kilometers;Miles
displayValues[de] = Kilometer;Meilen


[age]
type = number
defaultValue = 23
displayName = Age


[enabled]
type = boolean
defaultValue = true
displayName = Enabled


# Setting without a default value
[color]
type = string
displayName = Color


[limit]
type = number
defaultValue = 20
displayName = 搜寻条数



Here we define a number of settings, for example "location". It is declared as a "string" with a default value of "London", and its display label is "Location" (which we could of course change to "位置" for Chinese).


To access these settings from the application, we modify our code as follows:

void Query::run(sc::SearchReplyProxy const& reply) {

    // Read the settings
    initScope();

    try {
        // Start by getting information about the query
        const sc::CannedQuery &query(sc::SearchQueryBase::query());

        // Trim the query string of whitespace
        string query_string = alg::trim_copy(query.query_string());

        Client::ResultList results;
        if (query_string.empty()) {
            // If the string is empty, pick a default
            results = client_.search("default");
        } else {
            // otherwise, use the search string
            results = client_.search(query_string);
        }

        // Register a category
        auto cat = reply->register_category("results", "Results", "",
                                            sc::CategoryRenderer(CATEGORY_TEMPLATE));

        for (const auto &result : results) {
            sc::CategorisedResult res(cat);

            cerr << "it comes here: " << m_limit << endl;

            // We must have a URI
            res.set_uri(result.uri);

            // res.set_title(result.title);
            res.set_title( m_location );
            res["subtitle"] = std::to_string(m_limit);

            // Set the rest of the attributes, art, description, etc
            res.set_art(result.art);
            res["description"] = result.description;

            // Push the result
            if (!reply->push(res)) {
                // If we fail to push, it means the query has been cancelled.
                // So don't continue;
                return;
            }
        }
    } catch (domain_error &e) {
        // Handle exceptions being thrown by the client API
        cerr << e.what() << endl;
        reply->error(current_exception());
    }
}

void Query::initScope()
{
    unity::scopes::VariantMap config = settings();  // The settings method is provided by the base class
    if (config.empty())
        cerr << "CONFIG EMPTY!" << endl;

    m_location = config["location"].get_string();     // Prints "London" unless the user changed the value
    cerr << "location: " << m_location << endl;

    m_limit = config["limit"].get_double();
    cerr << "limit: " << m_limit << endl;
}

Here "initScope" is called from "run". In initScope we read the setting values via "settings()". To make them easy to see, we also display the values we read in "run":

            // res.set_title(result.title);
            res.set_title( m_location );
            res["subtitle"] = std::to_string(m_limit);

Rerun the scope and you can see the following:

   

We can also watch the settings change in the Application Output window:



The full source code of the project can be downloaded from:

bzr branch lp:~liu-xiao-guo/debiantrial/settingscope




Author: UbuntuTouch, posted 2014-10-14 13:12:28 (original link)

Read more
David Callé

A scope is a tailored view for a set of data that can use custom layouts, display and branding options. From RSS news feeds to weather data and search engine results, the flexibility of scopes allows you to provide a simple, recognizable and consistent experience with the rest of the OS.

Scopes can also integrate with system-wide user accounts (email, social networks…), split your content into categories and aggregate into each other (for example, a “shopping” scope aggregating results from several store scopes).

unity-8-scopes

In this tutorial, you will learn how to write a scope in C++ for SoundCloud, using the Ubuntu SDK. Read…

Read more
Luca Paulina

A few weeks ago we launched ‘Machine view’ for Juju, a feature designed to allow users to easily visualise and manage the machines running in their cloud environments. In this post I want to share with you some of the challenges we faced and the solutions we designed in the process of creating it.

A little bit about Juju…
For those of you that are unfamiliar with Juju, a brief introduction. Juju is a software tool that allows you to design, build and manage application services running in the cloud. You can use Juju through the command-line or via a GUI and our team is responsible for the user experience of Juju in the GUI.

First came ‘Service View’
In the past we have primarily focused on Juju’s ‘Service view’ – a virtual canvas that enables users to design and connect the components of their given cloud environment.

Service_view

This view is fantastic for modelling the concepts and relationships that define an application environment. However, for each component or service block, a user can have anything from one unit to hundreds or thousands of units, depending on the scale of the environment, and before machine view, units meant machines.

The goal of machine view was to surface these units and enable users to visualise, manage and optimise their use of machines in the cloud.

‘Machine view’: design challenges
There were a number of challenges we needed to address in terms of layout and functionality:

  • The scalability of the solution
  • The glanceability of the data
  • The ability to customise and sort the information
  • The ability to easily place and move units
  • The ability to track changes
  • The ability to deploy easily to the cloud

I’ll briefly go into each one of these topics below.

Scalability: Environments can be made up of a couple of machines or thousands. This means that giving the user a clear, light and accessible layout was incredibly important – we had to make sure the design looked and worked great at both ends of the spectrum.

Machine view

Simple_machine_view

Glanceability: Users need simple comparative information to help choose the right machine at a glance. We designed and tested hundreds of different ways of displaying the same data and eventually ended up with an extremely cut back listing which was clean and balanced.

A tour of the many incarnations and iterations of Machine view…

The ability to sort and customise: As it was possible and probable that users would scale environments to thousands of machines, we needed to provide the ability to sort and customise the views. Users can use the menus at the top of each column to hide information from view and customise the data they want visible at a glance. As users become more familiar with their machines they could turn off extra information for a denser view of their machines. Users are also given basic sorting options to help them find and explore their machines in different ways.

More_menu

The ability to easily place and move units: Machine view is built around the concept of manual placement – the ability to co-locate items (put more than one) on a single machine or to define specific types of machines for specific tasks (as opposed to automatic placement, where each unit is given a machine of a pre-determined specification). We wanted to enable users to create the most optimised machine configurations for their applications.

Drag and drop was a key interaction that we wanted to exploit for this interface because it would simplify the process of manually placing units by a significant amount. The three column layout aided the use of drag and drop: users are able to pick up units that need placing on the left hand side and drag them to a machine in the middle column or a container in the third column. The headers also change to reveal drop zones, allowing users to create new machines and containers in one fluid action, keeping all of the primary interactions in view and accessible at all times.

Drag and drop in action on machine view

The ability to track changes: We also wanted to expose the changes being made throughout a user’s environment as they went along, and allow them to commit batches of changes all together. Deciding which changes to expose and designing the uncommitted notification was difficult: we had to make sure the notifications were not seen as repetitive, that they were identifiable, and that the design could be used throughout the interface.

Uncommitted_SV

Uncommitted_MV

The ability to deploy easily to the cloud: Before machine view it was impossible for someone to design their entire environment before sending it to the cloud. The deployment bar is a new, ever-present canvas element that rationalises all of the changes made into a neat listing; it is also where users can deploy or commit those changes. Look for more information about the deployment bar in another post.

The change log exposed

The deployment summary

We hope that machine view will really help Juju users by increasing the level of control and flexibility they have over their cloud infrastructure.

This project wouldn’t have been possible without the diligent help from the Juju GUI development team. Please take a look and let us know what you think. Find out more about Juju, Machine View or take it for a spin.

Read more
niemeyer

mgo r2014.10.12

A new release of the mgo MongoDB driver for Go is out, packed with contributions and features. But before jumping into the change list, there’s a note in the release of MongoDB 2.7.7 a few days ago that is worth celebrating:

New Tools!
– The MongoDB tools have been completely re-written in Go
– Moved to a new repository: https://github.com/mongodb/mongo-tools
– Have their own JIRA project: https://jira.mongodb.org/browse/TOOLS

So far this is part of an unstable release of the MongoDB server, but it implies that if the experiment works out every MongoDB server release will be carrying client tools developed in Go and leveraging the mgo driver. This extends the collaboration with MongoDB Inc. (mgo is already in use in the MMS product), and some of the features in release r2014.10.12 were made to support that work.

The specific changes available in this release are presented below. These changes do not introduce compatibility issues, and most of them are new features.

Fix in txn package

This release fixes a bug in the txn package. The bug would be visible as an invariant being broken, and the transaction application logic would panic until the txn metadata was cleaned up. The bug does not cause any data loss nor incorrect transactions to be silently applied. More stress tests were added to prevent that kind of issue in the future.

Debug information contributed by the juju team at Canonical.

MONGODB-X509 auth support

The MONGODB-X509 authentication mechanism, which allows authentication via SSL client certificates, is now supported.

Feature contributed by Gabriel Russel.
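
As a rough illustration (not taken from the release notes), a client-certificate login might be wired up as in the sketch below; the certificate file names, server address and subject string are placeholders, and the TLS connection is supplied through DialInfo's DialServer hook:

package main

import (
    "crypto/tls"
    "crypto/x509"
    "io/ioutil"
    "log"
    "net"

    "gopkg.in/mgo.v2"
)

func main() {
    // Hypothetical certificate files; substitute your own paths.
    caPEM, err := ioutil.ReadFile("ca.pem")
    if err != nil {
        log.Fatal(err)
    }
    roots := x509.NewCertPool()
    roots.AppendCertsFromPEM(caPEM)

    clientCert, err := tls.LoadX509KeyPair("client.pem", "client.key")
    if err != nil {
        log.Fatal(err)
    }
    tlsConfig := &tls.Config{RootCAs: roots, Certificates: []tls.Certificate{clientCert}}

    info := &mgo.DialInfo{
        Addrs:     []string{"db.example.com:27017"}, // assumed server address
        Mechanism: "MONGODB-X509",
        Source:    "$external",                   // X.509 users live in the $external database
        Username:  "CN=client,OU=unit,O=example", // the client certificate's subject
        // Dial over TLS so the server actually sees the client certificate.
        DialServer: func(addr *mgo.ServerAddr) (net.Conn, error) {
            return tls.Dial("tcp", addr.String(), tlsConfig)
        },
    }
    session, err := mgo.DialWithInfo(info)
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()
    log.Println("authenticated via MONGODB-X509")
}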

SCRAM-SHA-1 auth support

The MongoDB server is changing the default authentication protocol to SCRAM-SHA-1. This release of mgo defaults to authenticating over SCRAM-SHA-1 if the server supports it (2.7.7 and later).

Feature requested by Cailin Nelson.

GSSAPI auth on Windows too

The driver can now authenticate with the GSSAPI (Kerberos) mechanism on Windows using the standard operating system support (SSPI). The GSSAPI support on Linux remains via the cyrus-sasl library.

Feature contributed by Valeri Karpov.

Struct document ids on txn package

The txn package can now handle documents that use struct value keys.

Feature contributed by Jesse Meek.

Improved text index support

The EnsureIndex family of functions may now conveniently define text indexes via the usual shorthand syntax ("$text:field"), and Sort can use equivalent syntax ("$textScore:field") to inject the text indexing score.

Feature contributed by Las Zenow.
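
A minimal sketch of the shorthand in use, assuming a local server with text search available and a hypothetical "posts" collection with title and body fields:

package main

import (
    "fmt"
    "log"

    "gopkg.in/mgo.v2"
    "gopkg.in/mgo.v2/bson"
)

func main() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()
    c := session.DB("blog").C("posts")

    // Shorthand syntax for a text index spanning two fields.
    if err := c.EnsureIndex(mgo.Index{Key: []string{"$text:title", "$text:body"}}); err != nil {
        log.Fatal(err)
    }

    var results []struct {
        Title string  `bson:"title"`
        Score float64 `bson:"score"`
    }
    // Project the text score into "score" and sort on it with the new shorthand.
    query := c.Find(bson.M{"$text": bson.M{"$search": "mongodb driver"}}).
        Select(bson.M{"title": 1, "score": bson.M{"$meta": "textScore"}}).
        Sort("$textScore:score")
    if err := query.All(&results); err != nil {
        log.Fatal(err)
    }
    for _, r := range results {
        fmt.Printf("%s (%.2f)\n", r.Title, r.Score)
    }
}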

Support for BSON’s deprecated DBPointer

Although the BSON specification defines DBPointer as deprecated, some ancient applications still depend on it. To enable the migration of these applications to Go, the type is now supported.

Feature contributed by Mike O’Brien.

Generic Getter/Setter document types

The Getter/Setter interfaces are now respected when unmarshaling documents on any type. Previously they would only be respected on maps and structs.

Feature requested by Thomas Bouldin.

Improvements on aggregation pipelines

The Pipe.Iter method will now return aggregation results using cursors when possible (MongoDB 2.6+), and there are also new methods to tweak the aggregation behavior: Pipe.AllowDiskUse, Pipe.Batch, and Pipe.Explain.

Features requested by Roman Konz.
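
For illustration, a hedged sketch assuming a local server and a hypothetical "orders" collection; the new methods chain onto the Pipe value before Iter is called:

package main

import (
    "fmt"
    "log"

    "gopkg.in/mgo.v2"
    "gopkg.in/mgo.v2/bson"
)

func main() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()
    c := session.DB("shop").C("orders")

    pipeline := []bson.M{
        {"$group": bson.M{"_id": "$customer", "total": bson.M{"$sum": "$amount"}}},
        {"$sort": bson.M{"total": -1}},
    }

    // With MongoDB 2.6+ the results are streamed via a cursor; AllowDiskUse
    // and Batch tune the server-side behaviour of the aggregation.
    pipe := c.Pipe(pipeline).AllowDiskUse().Batch(100)
    iter := pipe.Iter()
    var row bson.M
    for iter.Next(&row) {
        fmt.Println(row["_id"], row["total"])
    }
    if err := iter.Close(); err != nil {
        log.Fatal(err)
    }
}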

Decoding into custom bson.D types

Unmarshaling will now work for types that are slices of bson.DocElem in an equivalent way to bson.D.

Feature requested by Daniel Gottlieb.
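
A small illustrative example (the OrderedDoc type name is made up for this sketch):

package main

import (
    "fmt"
    "log"

    "gopkg.in/mgo.v2/bson"
)

// OrderedDoc is a custom ordered-document type: a slice of bson.DocElem,
// just like bson.D, which unmarshaling now handles equivalently.
type OrderedDoc []bson.DocElem

func main() {
    data, err := bson.Marshal(bson.D{{"a", 1}, {"b", 2}})
    if err != nil {
        log.Fatal(err)
    }
    var doc OrderedDoc
    if err := bson.Unmarshal(data, &doc); err != nil {
        log.Fatal(err)
    }
    for _, elem := range doc {
        fmt.Println(elem.Name, elem.Value) // preserves key order: a, then b
    }
}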

Indexes and CollectionNames via commands

The Indexes and CollectionNames methods will both attempt to use the new command-based protocol, and fall back to the old method if that doesn’t work.

GridFS default chunk size

The default GridFS chunk size changed from 256k to 255k, to ensure that the total document size won’t go over 256k with the additional metadata. Going over 256k would force the reservation of a 512k block when using the power-of-two allocation schema.

Performance of bson.Raw decoding

Unmarshaling data into a bson.Raw will now bypass the decoding process and record the provided data directly into the bson.Raw value. This significantly improves the performance of dumping raw data during iteration.

Benchmarks contributed by Kyle Erf.

Performance of seeking to end of GridFile

Seeking to the end of a GridFile will now not read any data. This enables a client to find the size of the file using only the io.ReadSeeker interface with low overhead.

Improvement contributed by Roger Peppe.
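
A minimal sketch, assuming a local server and an existing GridFS file named "backup.tar":

package main

import (
    "fmt"
    "log"
    "os"

    "gopkg.in/mgo.v2"
)

func main() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    f, err := session.DB("files").GridFS("fs").Open("backup.tar")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // Seeking to the end no longer reads any chunk data, so this is a cheap
    // way to learn the file size through the io.ReadSeeker interface alone.
    size, err := f.Seek(0, os.SEEK_END)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("size in bytes:", size)
}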

Added Query.SetMaxScan method

The SetMaxScan method constrains the server to only scan the specified number of documents when fulfilling the query.

Improvement contributed by Abhishek Kona.
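
A short sketch, assuming a local server and a hypothetical "people" collection:

package main

import (
    "fmt"
    "log"

    "gopkg.in/mgo.v2"
    "gopkg.in/mgo.v2/bson"
)

func main() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()
    c := session.DB("test").C("people")

    // Ask the server to scan at most 1000 documents while answering this query.
    var results []bson.M
    if err := c.Find(bson.M{"city": "Lisbon"}).SetMaxScan(1000).All(&results); err != nil {
        log.Fatal(err)
    }
    fmt.Println("matches found:", len(results))
}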

Added GridFile.SetUploadDate method

The SetUploadDate method allows changing the upload date at file writing time.

Read more
bigjools

New MAAS features in 1.7.0

MAAS 1.7.0 is close to its release date, which is set to coincide with Ubuntu 14.10’s release.

The development team has been hard at work and knocked out some amazing new features and improvements. Let me take you through some of them!

UI-based boot image imports

Previously, MAAS required admins to configure (well, hand-hack) a yaml file on each cluster controller that specified precisely which OSes, releases and architectures to import. This has all been replaced with a very smooth new UI that lets you simply click and go.

New image import configuration page

Click for bigger version

The different images available are driven by a “simplestreams” data feed maintained by Canonical. What you see here is a representation of what’s available and supported.

Any previously-imported images also show on this page, and you can see how much space they are taking up, and how many nodes got deployed using each image. All the imported images are automatically synced across the cluster controllers.

image-import

Once a new selection is clicked, “Apply changes” kicks off the import. You can see that the progress is tracked right here.

(There’s a little more work left for us to do to track the percentage downloaded.)

Robustness and event logs

MAAS now monitors nodes as they are deploying and lets you know exactly what’s going on by showing you an event log that contains all the important events during the deployment cycle.

node-start-log

You can see here that this node has been allocated to a user and started up.

Previously, MAAS would have said “okay, over to you, I don’t care any more” at this point, which was pretty useless when things started going wrong (and it’s not just hardware that goes wrong; preseeds often fail).

So now, the node’s status shows “Deploying” and you can see the new event log at the bottom of the node page that shows these actions starting to take place.

After a while, more events arrive and are logged:

node-start-log2

And eventually it’s completely deployed and ready to use:

node-start-log3

You’ll notice how quick this process is nowadays.  Awesome!

More network support

MAAS has nascent support for tracking networks/subnets and attached devices. Changes in this release add a couple of neat things: cluster interfaces automatically have their networks registered in the Networks tab (“master-eth0” in the image), and any node network interfaces known to be attached to any of these networks are automatically linked (see the “attached nodes” column).  This means even less setup work for admins, and makes it easier for users to rely on networking constraints when allocating nodes over the API.

networks

Power monitoring

MAAS is now tracking whether the power is applied or not to your nodes, right in the node listing.  Black means off, green means on, and red means there was an error trying to find out.

powermon

Bugs squashed!

With well over 100 bugs squashed, this will be a well-received release.  I’ll post again when it’s out.


Read more
Michael Hall

screenshot_1.0

So it’s finally happened: one of my first Ubuntu SDK apps has reached an official 1.0 release. And I think we all know what that means. Yup, it’s time to scrap the code and start over.

It’s a well-established mantra in software development, codified by Fred Brooks, that you will end up throwing away your first attempt at a new project. The releases between 0.1 and 0.9 are a written history of your education about the problem, the tools, or the language you are learning. And learn I did: I wrote a whole series of posts about my adventures in writing uReadIt. Now it’s time to put all of that learning to good use.

Oftentimes projects spend an extremely long time in this 0.x stage, getting ever closer but never reaching that 1.0 release. This isn’t because they think 1.0 should wait until the codebase is perfect; I don’t think anybody expects 1.0 to be perfect. 1.0 isn’t the milestone of success, it’s the crossing of the Rubicon, the point where drastic change becomes inevitable. It’s the milestone where the old code, with all its faults, dies, and out of it is born a new codebase.

So now I’m going to start on uReadIt 2.0, starting fresh, with the latest Ubuntu UI Toolkit and platform APIs. It won’t be just a feature-for-feature rewrite either, I plan to make this a great Reddit client for both the phone and desktop user. To that end, I plan to add the following:

  • A full Javascript library for interacting with the Reddit API
  • User account support, which additionally will allow:
    • Posting articles & comments
    • Reading messages in your inbox
    • Upvoting and downvoting articles and comments
  • Convergence from the start, so it’s usable on the desktop as well
  • Re-introduce link sharing via Content-Hub
  • Take advantage of new features in the UITK such as UbuntuListView filtering & pull-to-refresh, and left/right swipe gestures on ListItems

Another change, which I talked about in a previous post, will be to the license of the application. Where uReadIt 1.0 is GPLv3, the next release will be under a BSD license.

Read more
pitti

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored all the knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this; you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: Essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific "daily" image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter it takes is the name of the pristine image to base on. This was tested on Canonical's Bootstack cloud, so it might need some adjustments on other clouds.

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist
  apport
  apt
  aptdaemon
  apache2
  autopilot-gtk
  autopkgtest
  binutils
  chromium-browser
  cups
  dbus
  gem2deb
  glib-networking
  glib2.0
  gvfs
  kcalc
  keystone
  libnih
  libreoffice
  lintian
  lxc
  mysql-5.5
  network-manager
  nut
  ofono-phonesim
  php5
  postgresql-9.4
  python3.4
  sbuild
  shotwell
  systemd-shim
  ubiquity
  ubuntu-drivers-common
  udisks2
  upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack.

Finally, let's run the packages from above using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left which are due to flaky tests, the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Read more
UbuntuTouch

Location information is very important to many apps that search based on an address. For an app like dianping, for example, we can use the address to obtain information about the current location. In this article we introduce how to obtain location information in the Scope framework. This location information can be very valuable for many of our searches.


1) Create a simple Scope application

We first open the SDK and select the "Unity Scope" template:

Next we select "Empty scope". With that we have created our most basic scope.

We can run our Scope. It is a most basic Scope.

2) Add code to obtain location information

To obtain location information, we need to configure our scope. First open the .ini file in the "data" folder and add LocationDataNeeded=true. The whole file then reads:

[ScopeConfig]
DisplayName = Scopetest Scope
Description = This is a Scopetest scope
Art = screenshot.png
Author = Firstname Lastname
Icon = icon.png

LocationDataNeeded=true

[Appearance]
PageHeader.Logo = logo.png

At the same time, we open the scope.cpp file and modify it as follows:

#include <unity/scopes/SearchMetadata.h> // added

....


void Query::run(sc::SearchReplyProxy const& reply) {
    try {
        cerr << "starting to get the location" << endl;

        auto metadata = search_metadata();
        if (metadata.has_location()) {

            cerr << "it has location data" << endl;

            auto location = metadata.location();

            if (location.has_country_code()) {
                cerr << "country code: " << location.country_code() << endl;
             }

            if ( location.has_area_code() ) {
                cerr << "area code: " << location.area_code() << endl;
            }

            if ( location.has_city() ) {
               cerr << "city: " << location.city() << endl;
            }

            if ( location.has_country_name() ) {
                cerr << "" << location.country_name() << endl;
            }

            if ( location.has_altitude()) {
                cerr << "altitude: " << location.altitude() << endl;
                cerr << "longitude: " << location.longitude() << endl;
                cerr << "latitude: " << location.latitude() << endl;
            }

            if ( location.has_horizontal_accuracy()) {
                cerr << "horizotal accuracy: " << location.horizontal_accuracy() << endl;
            }

            if ( location.has_region_code() ) {
                cerr << "region code: " << location.region_code() << endl;
            }

            if ( location.has_region_name() ) {
                cerr << "region name: " << location.region_name() << endl;
            }

            if ( location.has_zip_postal_code() ) {
                cerr << "zip postal code: " << location.zip_postal_code() << endl;
            }
        }

 ....

}

We inspect the location information we receive by printing it. Run the scope on the phone and, at the same time, execute the following command on the desktop:





We can see the location information we need. We can then make use of this information in our Scope.

All of the source code can be found at the following address:

bzr branch lp:~liu-xiao-guo/debiantrial/scope



Author: UbuntuTouch, posted 2014-10-10 13:07:50. Original link

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20141007 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to the v3.16.4 upstream stable
kernel. This is available for testing as of the 3.16.0-21.28 upload to
the archive. Please test and let us know your results.
Also, Utopic Kernel Freeze is this Thurs Oct 9. Any patches submitted
after kernel freeze are subject to our Ubuntu kernel SRU policy. I sent
a friendly reminder about this to the Ubuntu kernel-team mailing list
yesterday as well.
—–
Important upcoming dates:
Thurs Oct 9 – Utopic Kernel Freeze (~2 days away)
Thurs Oct 16 – Utopic Final Freeze (~1 week away)
Thurs Oct 23 – Utopic 14.10 Release (~2 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 30):

  • Lucid – Testing
  • Precise – Testing
  • Trusty – Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 19-Sep through 11-Oct
    ====================================================================
    19-Sep Last day for kernel commits for this cycle
    21-Sep – 27-Sep Kernel prep week.
    28-Sep – 04-Oct Bug verification & Regression testing.
    05-Oct – 08-Oct Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Michael Hall

Ubuntu Mauritius Community

But it isn’t perfect.  And that, in my opinion, is okay.  I’m not perfect, and neither are you, but you are still wonderful too.

I was asked, not too long ago, what I hated about the community. The truth, then and now, is that I don’t hate anything about it. There is a lot I don’t like about what happens, of course, but nothing that I hate. I make an effort to understand people, to “grok” them if I may borrow the word from Heinlein. When you understand somebody, or in this case a community of somebodies, you understand the whole of them, the good and the bad. Now understanding the bad parts doesn’t make them any less bad, but it does provide opportunities for correcting or removing them that you don’t get otherwise.

You reap what you sow

People will usually respond in kind with the way they are treated. I try to treat everybody I interact with respectfully, kindly, and rationally, and I’ve found that I am treated that way back. But, if somebody is prone to arrogance or cruelty or passion, they will find far more of that treatment given back to them than the positive ones. They are quite often shocked when this happens. But when you are a source of negativity you drive away people who are looking for something positive, and attract people who are looking for something negative. It’s not absolute: nice people will have some unhappy followers, and grumpy people will have some delightful ones, but on average you will be surrounded by people who behave like you.

Don’t get even, get better

An eye for an eye makes the whole world blind, as the old saying goes. When somebody is rude or disrespectful to us, it’s easy to give in to the desire to be rude and disrespectful back. When somebody calls us out on something, especially in public, we want to call them out on their own problems to show everybody that they are just as bad. This might feel good in the short term, but it causes long term harm to both the person who does it and the community they are a part of. This ties into what I wrote above, because even if you aren’t naturally a negative person, if you respond to negativity with more of the same, you’ll ultimately share the same fate. Instead use that negativity as fuel to drive you forward in a positive way, respond with coolness, thoughtfulness and introspection and not only will you disarm the person who started it, you’ll attract far more of the kind of people and interactions that you want.

Know your audience

Your audience isn’t the person or people you are talking to. Your audience is the people who hear you. A common defense of Linus’ berating of kernel contributors is that he only does it to people he knows can take it. This defense is almost always countered, quite properly, by somebody pointing out that his actions are seen by far more than just their intended recipient. Whenever you interact with any member of your community in a public space, such as a forum or mailing list, treat it as if you were interacting with every member, because you are. Again, if you perpetuate negativity in your community, you will foster negativity in your community, either directly in response to you or indirectly by driving away those who are more positive in nature. Linus’ actions might be seen as a joke, or necessary “tough love” to get the job done, but the LKML has a reputation of being inhospitable to potential contributors in no small part because of them. You can gather a large number of negative, or negativity-accepting, people into a community and get a lot of work done, but it’s easier and in my opinion better to have a large number of positive people doing it.

Monoculture is dangerous

I think all of us in the open source community know this, and most of us have said it at least once to somebody else. As noted security researcher Bruce Schneier says, “monoculture is bad; embrace diversity or die along with everyone else.” But it’s not just dangerous for software and agriculture, it’s dangerous to communities too. Communities need, desperately need, diversity, and not just for the immediate benefits that various opinions and perspectives bring. Including minorities in your community will point out flaws you didn’t know existed, because they didn’t affect anyone else, but a distro-specific bug in upstream is still a bug, and a minority-specific flaw in your community is still a flaw. Communities that are almost all male, or white, or western, aren’t necessarily bad because of their monoculture, but they should certainly consider themselves vulnerable and deficient because of it. Bringing in diversity will strengthen it, and adding a minority contributor will ultimately benefit a project more than adding another to the majority. When somebody from a minority tells you there is a problem in your community that you didn’t see, don’t try to defend it by pointing out that it doesn’t affect you, but instead treat it like you would a normal bug report from somebody on different hardware than you.

Good people are human too

The appendix is a funny organ. Most of the time it’s just there, innocuous or maybe even slightly helpful. But every so often one happens to, for whatever reason, explode and try to kill the rest of the body. People in a community do this too.  I’ve seen a number of people that were good or even great contributors who, for whatever reason, had to explode and they threatened to take down anything they were a part of when it happened. But these people were no more malevolent than your appendix is, they aren’t bad, even if they do need to be removed in order to avoid lasting harm to the rest of the body. Sometimes, once whatever caused their eruption has passed, these people can come back to being a constructive part of your community.

Love the whole, not the parts

When you look at it, all of it, the open source community is a marvel of collaboration, of friendship and family. Yes, family. I know I’m not alone in feeling this way about people I may not have ever met in person. And just like family you love them during the good and the bad. There are some annoying, obnoxious people in our family. There are good people who are sometimes annoying and obnoxious. But neither of those truths changes the fact that we are still a part of an amazing, inspiring, wonderful community of open source contributors and enthusiasts.

Read more
UbuntuTouch

[Original] How to install an Ubuntu app onto a device

Here we assume that you have already flashed your phone with the latest Ubuntu Touch software. Below we describe how to generate a Click package and install it on the phone. Before starting, we must make sure that developer mode has been enabled on the phone. For how to enable developer mode, see the article "怎么在Ubuntu手机中打开开发者模式" ("How to enable developer mode on an Ubuntu phone").


1) Generate a Click Package

  • Launch the Ubuntu SDK
  • Open an existing application

  • In the lower-left of the IDE, set the target architecture to "Ubuntu Device (GCC armhf-ubuntu-sdk-14.10-utopic)"
  • Select "Publish" on the left side of the IDE; in this pane we can set the things we need, such as the application Title

  • Click "Click Package". This generates a click file called "com.ubuntu.developer.liu-xiao-guo.test2_0.1_all.click" in the directory "build-test2-Ubuntu_Device_GCC_armhf_ubuntu_sdk_14_10_utopic-Default", which sits alongside the project directory "test2". This is the file that can be installed on the phone.

2) Install the Click package onto the phone

Start a Terminal. We can complete the installation with the following commands:

$ adb push com.ubuntu.developer.liu-xiao-guo.test2_0.1_all.click /tmp
$ adb shell "sudo -iu phablet pkcon --allow-untrusted install-local /tmp/com.ubuntu.developer.liu-xiao-guo.test2_0.1_all.click"




The app can now be found on the "Apps" page of the phone. If you cannot find it, you can search for it:

3) Generate a click package from the current project

We can also build and install the application through the IDE's integrated environment. The specific steps are as follows:
  • Select the current project (for pure QML projects with no C++ code)
  • Right-click

We can find the generated click package in a directory next to the current project directory. For example, for our project "balloon", the package "com.ubuntu.developer.liu-xiao-guo.balloon_0.1_all.click" can be found in the directory build-balloon-UbuntuSDK_for_armhf_GCC_ubuntu_sdk_14_10_utopic-default. Once this package has been generated, we can install the application as described above.


4) Inspect the contents of a Click package

Sometimes we want to see exactly what a Click package contains. We can type the following command:

$ click contents com.ubuntu.developer.liu-xiao-guo.test2_0.1_all.click


We can also extract all of the files in the click package with the following commands. Replace my click package file name below with the name of your own package:

dpkg -x myapp.click unpacked
file unpacked/path/to/your/binary

Use the "file" command to inspect a file's characteristics, for example:

/tmp/unpacked/lib/arm-linux-gnueabihf/bin/filemanager: ELF 32-bit LSB  executable, ARM, . . 

We can see that this file is indeed an ARM executable.



The click command has many other capabilities; we can run:

$ click --help

to see its detailed usage.

5) Log in to the phone

We can log in to the phone with the following command:

$ adb shell

We can also switch to the "phablet" account with the following command:

root@ubuntu-phablet:~# su - phablet

If a password is required when installing software, the password is "phablet".


6) Generate a click package via Terminal commands

For projects that have a "CMakeLists.txt" (usually projects with C++ code), we can also generate the click package file with the following command. First use a Terminal to change into the project directory (the one containing CMakeLists.txt) and type:

$ click-buddy --arch armhf --framework ubuntu-sdk-14.10

Once the click package file has been generated, we can install our application with the method described above.


Author: UbuntuTouch, posted 2014-8-6 9:56:09. Original link

Read more