Canonical Voices

UbuntuTouch

In a real Scope, if our search results span more than one screen, we can create buttons to help us page through them. Of course, we can also use buttons for other functions; that depends entirely on the design of our Scope. For example, in the Scope screen below, we created "Next", "Previous" and "Top" buttons. In this article, we will show how to create these buttons and capture their events.




Below, we show how to create these buttons.


1) Creating the buttons in query


query.cpp


We define the following functions:

void Query::showNext(sc::SearchReplyProxy const& reply, sc::CategorisedResult res, int next_breadcrumb)
{
    res.set_uri("http://ubuntu.com");
    //Translators: this means show the next page of results
    std::string show_more =  _("Next");
    res.set_title("<b>" + show_more + "</b>");
    res["get_next"] = "true";
    res.set_intercept_activation();
    res["api_page"] = sc::Variant(next_breadcrumb);
    if (!reply->push(res)) {
        return;
    }
}

void Query::showPrevious(sc::SearchReplyProxy const& reply, sc::CategorisedResult res)
{
    res.set_uri("http://ubuntu.com");
    //Translators: this means show the previous page of results
    std::string show_prev =  _("Previous");
    res.set_title("<b>" + show_prev + "</b>");
    res["get_previous"] = "true";
    res.set_intercept_activation();
    if (!reply->push(res)) {
        return;
    }
}

void Query::showTop(sc::SearchReplyProxy const& reply, sc::CategorisedResult res)
{
    res.set_uri("http://ubuntu.com");
    //Translators: this means return to the top page (the first page) of results
    std::string top = _("Top");
    res.set_title("<b>" + top + "</b>");
    res["go_to_top"] = "true";
    res.set_intercept_activation();
    if (!reply->push(res)) {
        return;
    }
}

void Query::createButtons(const unity::scopes::SearchReplyProxy &reply) {
    auto buttons_cat = reply->register_category("buttons", "", "",
                                                sc::CategoryRenderer(BUTTONS_TEMPLATE));
    // Show the next button
    sc::CategorisedResult res(buttons_cat);
    showNext(reply, res, 1);

    // Show the previous button
    sc::CategorisedResult prev_res(buttons_cat);
    showPrevious(reply, prev_res);

    // Show the top button
    sc::CategorisedResult top_res(buttons_cat);
    showTop(reply, top_res);
}

These functions create the buttons we need. Note the call to "res.set_intercept_activation()" above: it lets us capture the activation event in our Action below. Here, BUTTONS_TEMPLATE is defined as follows:

const static string BUTTONS_TEMPLATE =
        R"(
{
        "schema-version": 1,
        "template": {
        "category-layout": "vertical-journal",
        "card-size": "small",
        "card-background":"color:///#12a3d8"
        },
        "components": {
        "title": "title"
        }
        }
        )";

We call the createButtons function defined above in the query's run method:

void Query::run(sc::SearchReplyProxy const& reply) {
    try {

        // Create the buttons at the top
        createButtons(reply);
        ...

    } catch (domain_error &e) {
        // Handle exceptions being thrown by the client API
        cerr << e.what() << endl;
        reply->error(current_exception());
    }
}

This displays the buttons we need at the top. In fact, each button is just a tile in a color we like, produced by the push method.


2) Capturing the button events


Capture the button events as follows. We must create the following method in scope.cpp:

scope.cpp


sc::ActivationQueryBase::UPtr Scope::activate(sc::Result const& result,
                                              sc::ActionMetadata const& metadata)
{
    cerr << "activate" << endl;
    return sc::ActivationQueryBase::UPtr(new Action(result, metadata, *this));
}

action.cpp


Create the constructor we need:

Action::Action(us::Result const& result,
               us::ActionMetadata const& metadata,
               scope::Scope & scope):
    ActivationQueryBase(result, metadata),
    result_(result),
    scope_(scope)
{
}

We handle the button events we need in the activate() method:

sc::ActivationResponse Action::activate()
{
    qDebug() << "in activate()";

    try {
        std::string val = result_["get_previous"].get_string();
        qDebug() << "PREVIOUS button is clicked!";
        us::CannedQuery cq("scopetemplates.xiaoguo_scopetemplates");
        // restore the previous search. We can save the state in query
        // and save the states into the members of scope.
        //        cq.set_filter_state(scope_.filter_state);
        //        cq.set_query_string(scope_.previous_query);
        return us::ActivationResponse(cq);
    }
    catch ( unity::LogicException &e){
    }

    try {
        std::string val = result_["get_next"].get_string();
        qDebug() << "NEXT button is clicked!";
        us::CannedQuery cq("scopetemplates.xiaoguo_scopetemplates");
        // restore the previous search
        //        cq.set_filter_state(scope_.filter_state);
        //        if (scope_.previous_query.empty())
        //            cq.set_department_id(scope_.previous_dept_id);
        //        cq.set_query_string("");
        return us::ActivationResponse(cq);
    }
    catch ( unity::LogicException &e){
    }

    ...
}

We can capture these button events in the Activation. When returning, we can refer to the articles "How to launch a query from a Preview in a Scope" and "Using the link query feature to query a department in your own Scope or in another Scope" to jump to the department we need, or generate a new query via set_query_string.

Running on the Desktop, we can see the events when the buttons are pressed:



Author: UbuntuTouch, published 2016/5/18 11:28:51

UbuntuTouch

In QML design, in many cases we can write our logic in JavaScript and embed the JS code directly in our QML, as in this typical example:

            Button {
                text: "Calculate"
                onClicked: {
                    console.log("Change button is clicked")
                    console.log(Method.factorial(10));
                    console.log(Method.factorialCallCount())
                }
            }


In the code above, we define a Button. Whenever the button's click event occurs, we can execute our JavaScript code directly in the QML file as shown above. The code between the braces after onClicked is in fact JavaScript implementing our logic.

In fact, if our logic is complex, and we want to separate our logic from the UI (described in QML), we can put the logic into a separate JavaScript file and then import it directly from our QML file. The benefit is that the UI and the logic are separated, which also keeps our QML code simple and clear.

There are two ways to import JavaScript into QML. Interested developers can consult the official Qt documentation, which is rather hard to follow unless you experiment yourself. In short, the two ways are:

  • stateful: in this mode, the variables defined in the JS module are copied anew for each import. If the module is imported multiple times, there are multiple copies, and each may hold a different value. In this mode, the JS file can directly access objects defined in our QML files.
  • stateless: in this mode, the JS module behaves like a shared library. All of its methods can be used, and each variable defined in the module exists exactly once across all importing QML code, no matter how many times it is imported. It cannot directly access objects in QML files, although it can modify an object passed in as a parameter.

1)stateful


我们特意在我们的例子中加入我们一个特有的MyButton.qml:

MyButton.qml


// MyButton.qml
import QtQuick 2.0
import "my_button_impl.js" as Logic // a new instance of this JavaScript resource is loaded for each instance of Button.qml
import "factorial.js" as Method

Rectangle {
    id: rect
    width: 200
    height: 100
    color: "red"
    property int count: 0
    property alias text: mytext.text
    signal clicked()

    Text {
        id: mytext
        anchors.centerIn: parent
        font.pixelSize: units.gu(3)
    }

    MouseArea {
        id: mousearea
        anchors.fill: parent
        onClicked: {
            rect.clicked()
            count = Logic.onClicked(rect)
            console.log(Method.factorialCallCount())
        }
    }
}

my_button_impl.js


// this state is separate for each instance of MyButton
var clickCount = 0;

function onClicked(obj) {
    clickCount += 1;
    if ((clickCount % 5) == 0) {
        obj.color = Qt.rgba(1,0,0,1);
    } else {
        obj.color = Qt.rgba(0,1,0,1);
    }

    return clickCount;
}

function changeBackgroundColor() {
    main.backgroundColor = "green"
}

In our MyButton.qml, we import the file my_button_impl.js. Since this JS file does not begin with ".pragma library", it is stateful JavaScript. If we have multiple instances of MyButton, each button gets its own clickCount variable, completely independent of the others.

We add two of our MyButton buttons in Main.qml:

Main.qml


        Column {
            anchors.centerIn: parent
            spacing: units.gu(5)

            MyButton {
                anchors.horizontalCenter: parent.horizontalCenter
                text: "Button " + count
                onClicked: {
                    console.log("Button 1 is clicked!")
                }
            }

            MyButton {
                anchors.horizontalCenter: parent.horizontalCenter
                width: parent.width
                text: "Button " + count
                onClicked: {
                    console.log("Button 2 is clicked!")
                }
            }
         }

Run our code:



From the output above we can see that the two buttons' counts can be completely different; they are fully independent of each other.

In addition, we also defined the following function in our my_button_impl.js:

function changeBackgroundColor() {
    main.backgroundColor = "green"
}

Clearly it can directly access the object main in our Main.qml. Run the application and click the "Change color" button:



We can see that the background color of our MainView has changed.

2)stateless



For this case, the JavaScript file must begin with the following statement:

.pragma library

Our complete factorial.js file is as follows:

.pragma library

var factorialCount = 0;

function factorial(a) {
    // a = parseInt(a);

    // factorial recursion
    if (a > 0)
        return a * factorial(a - 1);

    // shared state
    factorialCount += 1;

    // recursion base-case.
    return 1;
}

function factorialCallCount() {
    return factorialCount;
}

function changeBackgroundColor() {
    main.backgroundColor = "green"
}

function changeBackground(obj) {
    obj.backgroundColor = "green"
}


In this module we define a variable factorialCount. Since the module is stateless, there is only one instance of this variable for all importing files, somewhat like a static variable in a C++ class. In our Main.qml, we define a button:

Main.qml


            Button {
                text: "Calculate"
                onClicked: {
                    console.log("Calculate button is clicked")
                    console.log(Method.factorial(10));
                    console.log(Method.factorialCallCount())
                }
            }

Pressing this button, factorial computes the result for us and also displays how many times factorial has been called. In our MyButton.qml, we also display this value on click:

MyButton.qml


    MouseArea {
        id: mousearea
        anchors.fill: parent
        onClicked: {
            rect.clicked()
            count = Logic.onClicked(rect)
            console.log(Method.factorialCallCount())
        }
    }

If we click the "Calculate" button and then click a MyButton button, we will see the same factorialCount value.



Similarly, directly accessing the object main in our factorial.js to modify its properties, as in:

function changeBackgroundColor() {
    main.backgroundColor = "green"
}

is not allowed. Instead, we must modify it as follows:

function changeBackground(obj) {
    obj.backgroundColor = "green"
}

We use the following code in our Main.qml:

            Button {
                text: "Change color via stateless "
                onClicked: {
                    Method.changeBackground(main)
                }
            }

Full source code of the project: https://github.com/liu-xiao-guo/JsInQml



Author: UbuntuTouch, published 2016/5/19 11:12:00

UbuntuTouch

In an earlier article, "How to make an app full screen on the Ubuntu Phone - Ubuntu.Components 1.3", we showed how to implement a full-screen app. But the method in that article cannot completely cover the phone's status bar area. For example:


  


From the two screenshots above, we can see some differences. In the left image, the phone's status bar is still displayed. The reason is that MainView does not let us cover the whole area. So how do we implement a truly full-screen app?


In the example below, we abandon MainView entirely and use Window to build our UI:


Main.qml


import QtQuick 2.4
import Ubuntu.Components 1.3
import QtQuick.Window 2.2

//Rectangle {
//    width: Screen.width
//    height: Screen.height

//    color:"red"
//}

Window {
    id: main
    width: Screen.width
    height: Screen.height
    // special flag only supported by Unity8/MIR so far that hides the shell's
    // top panel in Staged mode
    flags: Qt.Window | 0x00800000

    Image {
        anchors.fill: parent
        source: "images/pic.jpg"
    }

    Label {
        anchors.centerIn: parent
        text: "This a full screen app"
        fontSize: "x-large"
    }

    Component.onCompleted: {
        console.log("visibility: " + main.visibility )
        console.log("width: " + Screen.width + " height: " + Screen.height )
    }
}

In the code above, we also set the following flags:

    flags: Qt.Window | 0x00800000

With this, we have a completely full-screen app.

Author: UbuntuTouch, published 2016/5/24 13:37:49

niemeyer

One of my “official side projects” is the Go language driver for MongoDB, started a few years back while looking for a way to store data conveniently from the Go language while leaving aside some of the problems we have mapping code into table-based approaches.

Nowadays this is used in projects at Canonical, in the MongoDB tooling itself, and also in some of my own personal projects. For the personal server-side projects I’ve been using docker containers to conveniently deploy the database tooling, but this week when I went to update some of my older images pulled from the docker hub I found that the docker installed on that server was a bit too old, so it was time to update the servers.

But that got me wondering: what if I replaced all those containers by snaps? This would allow me to keep the convenience and safety of the isolation, while making a lot of things simpler. Unlike docker, snaps make the installed tooling more easily accessible to the host system (bins in the search path, processes as actual children from shell and systemd, etc), and can even use system resources directly assuming interfaces allow it (e.g. home files).

So I got into that, and perhaps ended up overdoing it a little bit. Based on my experience testing the driver, I really appreciate having all versions available for playing with, rather than just the latest one. This is what my local development system looks like right now:

$ snap list | grep 'Name\|mongo'
Name                  Version               Rev     Developer   Notes
mongo22               2.2.7                 1       niemeyer    -
mongo24               2.4.14                1       niemeyer    -
mongo26               2.6.12                1       niemeyer    -
mongo30               3.0.12                1       niemeyer    -
mongo32               3.2.7                 1       niemeyer    -
mongo33               3.3.9                 1       niemeyer    -

These are backed by upstream tarballs (snapcraft downloaded them pre-built from mongodb.com), and are all installed, running, and with tooling available for playing with. They are also published to the snap store which means you can easily make use of them locally as well. If you want to do that, here is a crash course on snaps and on how I packaged the database together specifically.

If you’re using Ubuntu, update to release 16.04 which has snaps working out of the box. For other distributions have a look at snapcraft.io to see what command must be run for ensuring it is available.

Then, pick your version and install it. For example, if you want to play with the features just announced at MongoDB World this week, you want the unstable version 3.3.9:

$ snap install mongo33
93.59 MB / 93.59 MB [============================] 100.00 % 1.33 MB/s 

Name     Version  Rev  Developer  Notes
mongo33  3.3.9    1    niemeyer   -

After that you already have the tooling available in your path (assuming you have /snap/bin there), and the daemon started. So go ahead and fire the client to talk to the database:

$ mongo33
MongoDB shell version: 3.3.9
connecting to: 127.0.0.1:33017/test
MongoDB server version: 3.3.9
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
Server has startup warnings: 
(...) 
(...) ** NOTE: This is a development version (3.3.9) of MongoDB.
(...) **       Not recommended for production.
> 

Note that the server started on the non-standard port 33017 to allow multiple versions to be running together. The pattern is that the snap mongoNN will run on port NN017.

The tools installed also follow a similar pattern. Where upstream uses mongo, mongod, mongodump, etc, the snaps will have them under /snap/bin as mongo33, mongo33.d, mongo33.dump, and so on.

The systemd unit, if you want to interact with it for restarting, improving, debugging, etc, is named snap.mongo33.mongod.service, as usual for any snap that contains a daemon (snap.<snap name>.<app name>.service).

The data for the process that runs under systemd lives in the standard writable area that the confinement system opens up for snaps in general – in this specific case /var/snap/mongo33/common/. The common directory is reused untouched on updates across snap revisions, which implies snapd won’t copy the data over when doing such updates. This compromises slightly the update safety, but is worth it for bulk data that we don’t want to copy on every snap refresh.

So this ended up quite nicely. Next up is the Go server code itself, which will be packed as a snap too, for deploying it into the servers in a similar way. The exact details for how these snaps are being built are publicly available too.

If you want a hand doing something similar, we have some helpful people at #snappy on FreeNode and also snapcraft@lists.snapcraft.io.

@gniemeyer

Grazina Borosko

Every year since 2001, creatives from different design disciplines meet and share their ideas and innovations about digital, interaction and print design in the design festival called OFFF.

This festival was previously held in different countries, but has now found its home in Barcelona at the Design Museum. For three days the festival was jam-packed full of inspirational ideas and speakers such as Paula Scher, Tony Brook, Joshua Davis and many more.

 


OFFF space in the Design Museum


OFFF 2016 book and program

What is the festival about?

The festival gives a great overview of design trends, work processes and implementation practices, as well as generating ideas and inspiration from around the world. A festival organizer claimed that: “it is more than just a Festival hosting innovative and international speakers, it is more than a meeting point for all talents around the world to collaborate, it is more than feeding the future. OFFF is a community inviting all those who are eager to learn to participate and get inspired by a three-day journey of conferences, workshops, activities, and performances.”

 

Ustwo

Hey studio

A word of advice…

Before coming to the festival make sure you have a list of speakers you would like to hear, because there are 50 different talks taking place covering a wide scope of topics. It was interesting to hear designers sharing their experiences in design, such as self-initiated projects, dealing with clients, balancing social and private life, time management, and the difficulties of working in a team versus solo.

 

Non-Format

Joshua Davis design

Why you should go to the festival

Being surrounded by creative people for three days helps you look at your work from different perspectives. It is always healthy to leave your comfort zone and talk to other creators to see what kind of issues other people have, and how they are solving them. There’s no wrong or right way in the creative process. There are different ways which might work for you, and some that don’t. Inspiring talks give you energy and make you believe that anything is possible to achieve; you just need to do it!

 

Mark Adamson

Danny Sangra

 

 

Paty Davila

Last week I was invited to Beijing to take part in the China Launch Sprint. The focus of the sprint was to identify action items in our product roadmap for the next devices that will ship Ubuntu Touch in the Chinese market later this year.


I am a lead UX designer in the product strategy team currently doing many exciting things, such as designing the convergence experience across the Ubuntu platform. I was invited to offer design support and participate in the planning of the work we will be doing with our industry partner, China Mobile, after reviewing the CTA test results.

What is CTA?

CTA stands for China type approval which is a certificate granted to a product that meets a set of regulatory, technical and safety requirements. Generally, type approval is required before a product is allowed to be sold in a particular country.

Topics covered:

  • Reviewed CTA Level 1-4 test cases and developed a new testing tool for pre-installed applications.
  • Reviewed the content and proposed design for all five Migu scopes with the design team’s input.
  • Discussed the new RCS (Rich Communication Suite) integration with our Messaging app and prepared demos [link] for MWC Shanghai, Asia’s biggest mobile event happening at the end of this month.
  • Explored ideas around the design of mCloud service integration with our storage framework.

Achievements

The sprint was very productive and a great experience to sync up with old and new faces. We were all excited to explore ideas and work together on the next steps for China Mobile and Ubuntu.

Downtown in Beijing

I had some downtime to explore the city and have a taste of Beijing’s most interesting local dishes and potions with people I met from the sprint…


Michi creatively named this one “snake juice”.

A large team dinner.


The famous Great Wall of China.

The city lights of Beijing :)

Daniel Holbach

Next Tuesday, 5th July, we will hold our next Snappy Playpen event. As always, we are going to work together on snapping software for our repository on GitHub. Whatever app, service or piece of software you bring is welcome.

The focus of last week was ironing out issues and documenting what we currently have. Some outcomes of this were:

We want to continue this work, but add a new side to it: upstreaming our work. It is great that we get snaps working, but it is much better if the upstream project in question can take ownership of the snaps themselves. Having a snapcraft.yaml in their source tree will make this a lot easier. To kick off this work, we started some documentation on how best to do that and to track this effort.

You are all welcome at the event and we look forward to working together with you. Coordination is happening in #snappy on Freenode and on Gitter. We will make sure all our experts are around to help you if you have questions.

Looking forward to seeing you there!


The Gist

How can one man give the world so much? Scott Meyers transformed my understanding of C++ with Effective C++, a book which not only teaches good C++ practices and principles, but also explains what’s going on behind the scenes to make those efforts so effectual. Effective Modern C++ is the same book aimed at a different audience. The audience of Effective C++ was the developer who could use C++ to build a humble home of straw or wood, but didn’t know that C++ was born to create homes of brick and mortar. Effective Modern C++ is for the seasoned developer who knows the brick home they built with C++98 can stand tall, but is bewildered by the plethora of modern amenities available in the top-floor penthouse that is C++11.

Takeaways

Even when first introduced to the language, it seemed the mentality around C++ was that it was simple, baremetal, and robust. It didn’t need garbage collection. It didn’t need decent threading (fork that). Lambdas? Type deduction? Go talk to a language specification who cares!

But here we are. C++11 can now act a little bit more like its cousins C# and Java, but run fast like its pappy C. For the last 3 months, I have been on a number of projects where C++11 is king. The last time I had been on a C++ project, we seemed to be stuck in the ice age. As I first started reading this book, I would read one of the items and apply it directly to the code I was working on literally the following morning. This new C++ is downright luxurious compared to the old one.

C++11 gives us the auto keyword for type deduction, similar to var in C#. Some examples:

auto x = 1;                 // x is int
auto y = new Thing();       // y is Thing*
const auto z = "the thing"; // z is a const char* const

C++11 also has a new range-based for loop for iterating containers:

std::vector<int> v{4, 6, 0, 3, 3};

for (const auto& value: v)
{
  std::cout << value;
}
std::cout << std::endl;
// prints 46033

You may notice in the above code the use of curly braces to initialize the std::vector. Braced initialization allows us to use a std::initializer_list to initialize STL objects, but it also allows basic construction.

There are now lambdas: functions defined inline which can capture other variables (forming what is called a closure).

auto x = 5;
auto my_func = [&x](int y) {return x+y;};

my_func(5); // 10
my_func(3); // 8
x = 11;
my_func(5); // 16

In the above example, x is captured by reference. You could also copy-capture x by excluding the &.

You like garbage collection? We got you covered. There’s std::unique_ptr to represent one-shot memory that should be deleted when the pointer goes out of scope, and there’s std::shared_ptr which is reference counted and will be deleted when all references to the shared_ptr go out of scope. These language enhancements are essential, and anyone well-versed in the usage of boost::scoped_ptr and boost::shared_ptr will have no trouble getting the hang of these.

The concurrency API is pretty neat, though I haven’t had much chance to play with it. std::atomic allows you to create objects which are guaranteed to be read and written thread-safely. std::future allows you to fire off a task in another thread and collect its result after its completion.

Looking to override a method? You can indicate an intentional override with the override keyword to tell the reader and the compiler you’re intending to override a parent method. Comes in handy.

Another nice construct is nullptr. In C++, NULL is actually just 0. Because of this you might even see your fellow developers comparing pointers to 0 while you try to determine their intent. We can now compare and set our null pointers to nullptr: a modest improvement in functionality, but a noticeable one in readability.

Although I loved reading about C++ in bite-sized chunks throughout this book, there were a few things that went over my head (not just the first time). A major point of friction between myself and the author was universal references (&&) and move semantics. These concepts were new to me and difficult to grasp the first few times they were brought up in this book. It may have been the order they were presented, or it may have been my lack of contact with them in the real world, but I would recommend having some level of understanding of universal references before reading those parts of this book.

The move operation moves the contents of memory from one object into another, as opposed to a copy operation which will duplicate that memory. A universal reference is, in some sense, a way to allow either a copy or move to be called based on whether an lvalue or an rvalue is being passed in. There are lots of rules involved, and sometimes move is faster than copy but other times it isn’t. For me (and I assume many others), this confusion will lead me to largely ignore this feature for now.

There is also quite a bit of discussion early in the book about using type deduction in templates with auto and decltype, but this discussion made my head hurt and made me glad I don’t do much template metaprogramming.

On top of all this goodness (and more I didn’t mention), C++14 includes a lot of bonus features that make C++ even a little sweeter. Here’s a shortlist. (Looking for a list of C++11 features? Here you go.)

Action Items

Victor Palau

First of all, I wanted to recommend the following recipe from Digital Ocean on how to roll out your own Docker Registry on Ubuntu 14.04. As with most of their stuff, it is super easy to follow.

I also wanted to share a small improvement on the recipe to include a UI front-end to the registry.

Once you have completed the recipe and have a registry secured and running, extend your docker-compose file to look like this:

nginx:
  image: "nginx:1.9"
  ports:
    - 443:443
    - 8080:8080
  links:
    - registry:registry
    - web:web
  volumes:
    - ./nginx/:/etc/nginx/conf.d:ro

web:
  image: hyper/docker-registry-web
  ports:
    - 8000:8080
  links:
    - registry
  environment:
    REGISTRY_HOST: registry

registry:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - ./data:/data

You will also need to include a configuration file for web in the nginx folder.

file: ~/docker-registry/nginx/web.conf

upstream docker-registry-web {
  server web:8080;
}

server {
  listen 8080;
  server_name [YOUR DOMAIN];

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/domain.crt;
  ssl_certificate_key /etc/nginx/conf.d/domain.key;

  location / {
    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "registry.localhost";
    auth_basic_user_file /etc/nginx/conf.d/registry.password;

    proxy_pass http://docker-registry-web;
    proxy_set_header Host $http_host; # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 900;
  }
}

Run docker-compose up and you should have an SSL-secured UI frontend on port 8080 (https://yourdomain:8080/).
If you have any improvement tips I am all ears!


Colin Ian King

What's new in stress-ng 0.06.07?

Since my last blog post about stress-ng, I've pushed out several more small releases that incorporate new features and (as ever) a bunch more bug fixes. I've been eyeballing gcov kernel coverage stats to find more regions in the kernel where stress-ng needs to exercise. Also, testing on a range of hardware (arm64, s390x, etc) and a range of kernels has eked out some bugs and helped me to improve stress-ng. So what's new?

New stressors:

  • ioprio  - exercises ioprio_get(2) and ioprio_set(2) (I/O scheduling classes and priorities)
  • opcode - generates random object code and executes this, generating and catching illegal instructions, bus errors,  segmentation  faults,  traps and floating  point errors.
  • stackmmap - allocates a 2MB stack that is memory mapped onto a temporary file. A recursive function works down the stack and flushes dirty stack pages back to the memory mapped file using msync(2) until the end of the stack is reached (stack overflow). This exercises dirty page and stack exception handling.
  • madvise - applies random madvise(2) advise settings on pages of a 4MB file backed shared memory mapping.
  • pty - exercise pseudo terminal operations.
  • chown - trivial chown(2) file ownership exerciser.
  • seal - fcntl(2) file SEALing exerciser.
  • locka - POSIX advisory locking exerciser.
  • lockofd - fcntl(2) F_OFD_SETLK/GETLK open file description lock exerciser.
Improved stressors:
  • msg: add in IPC_INFO, MSG_INFO, MSG_STAT msgctl calls
  • vecmath: add more ops to make vecmath more demanding
  • socket: add --sock-type socket type option, e.g. stream or seqpacket
  • shm and shm-sysv: add msync'ing on the shm regions
  • memfd: add hole punching
  • mremap: add MAP_FIXED remappings
  • shm: sync, expand, shrink shm regions
  • dup: use dup2(2)
  • seek: add SEEK_CUR, SEEK_END seek options
  • utime: exercise UTIME_NOW and UTIME_OMIT settings
  • userfaultfd: add zero page handling
  • cache:  use cacheflush() on systems that provide this syscall
  • key:  add request_key system call
  • nice: add some randomness to the delay to unsync nicenesses changes
If any new features land in Linux 4.8 I may add stressors for them, but for now I suspect that's about it for the big changes for stress-ng for the Ubuntu Yakkety 16.10 release.

David Callé

Snapcraft 2.12 is here and is making its way to your 16.04 machines today.

This release takes Snapcraft to a whole new level. For example, instead of defining your own project parts, you can now use and share them from a common, open, repository. This feature was already available in previous versions, but is now much more visible, making this repo searchable and locally cached.

Without further ado, here is a tour of what’s new in this release.

Commands

2.12 introduces ‘snapcraft update’, ‘search’ and ‘define’, which bring more visibility to the Snapcraft parts ecosystem. Parts are pieces of code for your app that can also help you bundle libraries, set up environment variables and handle other tedious tasks app developers are familiar with.

They are literally parts you aggregate and assemble to create a functional app. The benefit of using a common tool is that these parts can be shared amongst developers. Here is how you can access this repository.

  • snapcraft update : refresh the list of remote parts
  • snapcraft search : list and search remote parts
  • snapcraft define : display information and content about a remote part

[Screenshot: a terminal session showing snapcraft update, search and define]

To get a sense of how these commands are used, have a look at the above example, then you can dive into details and what we mean by “ecosystem of parts”.

Snap name registration

Another command you will find useful is the new ‘register’ one. Registering a snap name is reserving the name on the store.

  • snapcraft register

[Screenshot: a terminal session showing snapcraft register]

As a vendor or upstream, you can secure snap names when you are the publisher of what most users expect to see under this name.

Of course, this process can be reverted and disputed. Here is what the store workflow looks like when I try to register an already registered name:

snap-name-register.png

On the name registration page of the store, I’m going to try to register ‘my-cool-app’, which already exists.

snap-name-register-failed.png

I’m informed that the name has already been registered, but I can dispute this or use another name.

snap-name-register-dispute.png

I can now start a dispute process to retrieve ownership of the snap name.

Plugins and sources

Two new plugins have been added for parts building: qmake and gulp.

qmake

The qmake plugin has been requested since the advent of the project, and we have seen many custom versions to fill this gap. Here is what the default qmake plugin allows you to do:

  • Pass a list of options to qmake
  • Specify a Qt version
  • Declare list of .pro files to pass to the qmake invocation
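As a sketch of how these options might fit together in a snapcraft.yaml (the part name and values are made up, and the keyword names are assumptions based on the bullet list above; double-check them with `snapcraft help qmake`):

```yaml
parts:
  my-qt-app:              # hypothetical part name
    plugin: qmake
    source: .
    qt-version: qt5       # assumed keyword for the Qt version
    options:              # assumed keyword for extra qmake options
      - CONFIG+=release
    project-files:        # assumed keyword for the .pro files
      - my-qt-app.pro
```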

gulp

The hugely popular nodejs builder is now a first class citizen in Snapcraft. It inherits from the existing nodejs plugin and allows you to:

  • Declare a list of gulp tasks
  • Request a specific nodejs version
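A hedged sketch of what a gulp part could look like (the part name and keyword names are assumptions derived from the bullet list above; verify with `snapcraft help gulp`):

```yaml
parts:
  webapp:                 # hypothetical part name
    plugin: gulp
    source: .
    gulp-tasks:           # assumed keyword: gulp tasks to run
      - build
    node-engine: 4.4.4    # assumed keyword: nodejs version to use
```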

Subversion

SVN is still a major version control system, and thanks to Simon Quigley from the Lubuntu project, you can now use svn: URIs in the source field of your plugins.
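For example, a part could now pull its source straight from a Subversion repository (the URI below is made up for illustration):

```yaml
parts:
  client:
    plugin: autotools
    source: svn://svn.example.com/project/trunk   # hypothetical svn: URI
```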

Highlights

Many other fixes made their way into the release, with two highlights:

  • You can now use hidden .snapcraft.yaml files
  • ‘snapcraft cleanbuild’ now creates ephemeral LXC containers and won’t clutter your drive anymore

The full changelog for this milestone is available here and the list of bugs in sight for 2.13 can be found here. Note that this list will probably change until the next release, but if you have a Snapcraft itch to scratch, it’s a good list to pick your first contribution from.

Install Snapcraft

On Ubuntu

Simply open up a terminal with Ctrl+Alt+t and run these commands to install Snapcraft from the Ubuntu archives on Ubuntu 16.04 LTS

sudo apt update
sudo apt install snapcraft

On other platforms

Get the Snapcraft source code ›

Get snapping!

There is a thriving community of developers who can give you a hand getting started or unblock you when creating your snap. You can participate and get help in multiple ways:

Read more
David Barth

Cordova Ubuntu Update

A few weeks ago we participated in Phonegap Day EU 2016. It was a great opportunity to meet with the Cordova development team and app developers gathered for this occasion.

We demo'ed the latest Ubuntu 16.04 LTS release, running on a brand new BQ M10 tablet in convergence mode. It was really interesting to discuss with app developers. Creating responsive user interfaces is already a common topic for web developers, and Cordova developers by extension. 

On the second day, we hosted a workshop on developing Ubuntu applications with Cordova and popular frameworks like Ionic. Alexandre Abreu also showed his new cordova-plugin-ble-central for Ubuntu. This one lets you connect an IoT device, like one of those new RPI boards, directly to an Ubuntu app using the Bluetooth Low Energy stack. Snappy, Ubuntu and Cordova all working together!

Last but not least, we started the release process for cordova-ubuntu 4.3.4. This is the latest stable update to the Ubuntu platform support code for Cordova apps. It's coming along with a set of documentation updates available here and on the upstream cordova doc site

We've made a quick video to summarize this and walk you through the first steps of creating your own Ubuntu app using Cordova. You can now watch it at: https://www.youtube.com/watch?v=ydnG7wVrsW4

Let us know about your ideas: we're eager to see what you can do with the new release and plugins.

Read more
Luca Paulina

Juju GUI 2.0

Juju is a cloud orchestration tool which enables users to build models to run applications. You can just as easily use it to deploy a simple WordPress blog or a complex big data platform. Juju is a command line tool but also has a graphical user interface (GUI) where users can choose services from a store, assemble them visually in the GUI, build relations and configure them with the service inspector.

Juju GUI allows users to:

  • Add charms and bundles from the charm store
  • Configure services
  • Deploy applications to a cloud of their choice
  • Manage charm settings
  • Monitor model health

Over the last year we’ve been working on a redesign of the Juju GUI. This redesign project focused on improving four key areas, which also acted as our guiding design principles.

1. Improve the functionality of the core features of the GUI

  • Organised similar areas of the core navigation to create a better UI model.
  • Reduced the visual noise of the canvas and the inspector to help users navigate complex models.
  • Introduced a better flow between the store and the canvas to aid adding services without losing context.
[Before/after: Empty state of the canvas]

[Before/after: Integrated store]

[Before/after: Apache charm details]

2. Reduce cognitive load and pace the user

  • Reduced the amount of interaction patterns to minimise the amount of visual translation.
  • Added animation to core features to inform users of the navigation model in an effort to build a stronger concept of home.
  • Created a symbiotic relationship between the canvas and the inspector to help navigation of complex models.
[Before/after: Mediawiki deployment]

3. Provide an at-a-glance understanding of model health

  • Prioritised the hierarchy of status so users are always aware of the most pressing issues and can discern which part of the application is affected.
  • Eased navigation to units with a negative status to aid the user in triaging issues.
  • Used the same visual patterns throughout the web app so users can spot problematic issues.
[Before/after: Mediawiki deployment with errors]

4. Surface functions and facilitate task-driven navigation

  • Established a new hierarchy based on key tasks to create a more familiar navigation model.
  • Redesigned the inspector from the ground up to increase discoverability of inspector led functions.
  • Simplified the visual language and interaction patterns to help users navigate at-a-glance and with speed to triage errors, configure or scale out.
  • Surfaced relevant actions at the right time to avoid cluttering the UI.
[Before/after: Inspector home view]

[Before/after: Inspector errors view]

[Before/after: Inspector config view]

The project has been amazing, we’re really happy to see that it’s launched and are already planning the next updates.




Read more
Luca Paulina

Design in the open

As the Juju design team grew it was important to review our working process and to see if we could improve it to create a more agile working environment. The majority of employees at Canonical work distributed around the globe, for instance the Juju UI engineering team has employees from Tasmania to San Francisco. We also work on a product which is extremely technical and feedback is crucial to our velocity.

We identified the following aspects of our process which we wanted to improve:

  • We used different digital locations for storing our design outcomes and assets (Google Drive, Google Sites and Dropbox).
  • The entire company used Google Drive so it was ideal for access, but its lacklustre performance, complex sharing options and poor image viewer meant it wasn’t good for designs.
  • We used Dropbox to store iterations and final designs but it was hard to maintain developer access for sharing and reference.
  • Conversations and feedback on designs in the design team and with developers happened in email or over IRC, which often didn’t include all interested parties.
  • We would often get feedback from teams after sign-off, which would cause delays.
  • Decisions weren’t documented so it was difficult to remember why a change had been made.

Finding the right tool

I’ve always been interested in the concept of designing in the open. Benefits of the practice include being more transparent, faster and more efficient. They also give the design team more presence and visibility across the organisation. Kasia (Juju’s project manager) and I went back and forth on which products to use and eventually settled on GitHub (GH).

The Juju design team works in two week iterations and at the beginning of a new iteration we decided to set up a GH repo and trial the new process. We outlined the following rules to help us start:

  • Issues should be created for each project.
  • All designs/ideas/wireframes should be added inline to the issues.
  • All conversations should be held within GH, no more email or IRC conversations, and notes from any meetings should be added to relevant issues to create a paper trail.

Reaction

As the iteration went on, feedback started rolling in from the engineering team without us requesting it. A few developers mentioned how cool it was to see how the design process unfolded. We also saw a lot of improvement in the Juju design team: it allowed us to collaborate more easily and it was much easier to keep track of what was happening.

At the end of the trial iteration, during our clinic day, we closed completed issues and uploaded the final assets to the “code” section of the repo, creating a single place for our files.

After the first successful iteration we decided to carry this on as a permanent part of our process. The full range of benefits of moving to GH are:

  • Most employees of Canonical have a GH account and can see our work and provide feedback without needing to adopt a new tool.
  • Project management and key stakeholders are able to see what we’re doing, how we collaborate, why a decision has been made and the history of a project.
  • Provides us with a single source for all conversations which can happen around the latest iteration of a project.
  • One place where anyone can view and download the latest designs.
  • A single place for people to request work.

Conclusion

As a result of this change our designs are more accessible, which allows developers and stakeholders to comment and collaborate with the design team, aiding our agile process. Below is an example thread where you can see how GH is used in the process. It shows how we designed the new contextual service block actions.

[Screenshot: GitHub conversation thread about the new contextual service block actions]

Read more
Benjamin Zeller

New Ubuntu SDK Beta Version

A few days ago we released the first Beta of the Ubuntu SDK IDE using the LXD container solution to build and execute applications.

The first reports were positive; however, one big problem was discovered pretty quickly:

Applications would not start on machines using the proprietary Nvidia drivers. The reason is that indirect GLX is not allowed by default when using those drivers. The applications need to have access to:

  1. The glx libraries for the currently used driver
  2. The DRI and Nvidia device files

Luckily the snappy team had already tackled a similar problem, so thanks to Michael Vogt (a.k.a. mvo) we had a first idea of how to solve it: reuse the Nvidia binaries and device files from the host by mounting them into the container.

However, it is a bit more complicated in our case, because once the devices and directories are mounted into the containers they stay there permanently. This is a problem because the Nvidia binary directory carries a version number, e.g. /usr/lib/nvidia-315, which changes with the currently loaded module. After a driver change, the container would either fail to boot because the old directory on the host is gone, or use the wrong Nvidia directory if the old one was not removed from the host.

The situation gets worse with Optimus graphics cards, where the user can switch between an integrated and a dedicated graphics chip, which means device files in /dev can come and go between reboots.

Our solution to the problem is to check the integrity of the containers on every start of the Ubuntu SDK IDE and if problems are detected, the user is informed and asked for the root password to run automatic fixes. Those checks and fixes are implemented in the “usdk-target” tool and can be used from the CLI as well.

As a bonus, this work will enable direct rendering for other graphics chips as well; however, since we do not have access to all possible chips, there might still be special cases that we could not catch.

So please report all problems to us on one of those channels:

We have released the new tool into the Tools-development PPA where the first beta was released too. However, existing containers might not be fixed completely by the automatic checks; it is better to recreate them or fix them manually. To manually fix an existing container, use the maintain mode from the options menu and add the current user to the “video” group.

To get the new version of the IDE please update the installed Ubuntu SDK IDE package:

$ sudo apt-get update && sudo apt-get install ubuntu-sdk-ide ubuntu-sdk-tools

Read more
Sergio Schvezov

The Snapcraft Parts Ecosystem

Today I am going to be discussing parts. This is one of the pillars of snapcraft (together with plugins and the lifecycle).

For those not familiar, this is snapcraft’s general purpose landing page: http://snapcraft.io/. But if you are a developer and have already been introduced to this new world of snaps, you probably want to just go and hop on to http://snapcraft.io/create/

If you go over this snapcraft tour you will notice the many uses of parts and start to wonder how to get started or think that maybe you are duplicating work done by others, or even better, maybe an upstream. This is where we start to think about the idea of sharing parts and this is exactly what we are going to go over in this post.

To be able to reproduce what follows, you’d need to have snapcraft 2.12 installed.

An overview to using remote parts

So imagine I am someone wanting to use libcurl. Normally I would write the part definition from scratch and get on with my own business, but I might be missing out on optimal switches used to configure the package or even build it. I would also need to research how to use the specific plugin required. So instead, I’ll see if someone has already done the work for me, hence I will,

$ snapcraft update
Updating parts list... |
$ snapcraft search curl
PART NAME  DESCRIPTION
curl       A tool and a library (usable from many languages) for client side URL tra...

Great, there’s a match, but is this what I want?

$ snapcraft define curl
Maintainer: 'Sergio Schvezov <sergio.schvezov@ubuntu.com>'
Description: 'A tool and a library (usable from many languages) for client side URL transfers, supporting FTP, FTPS, HTTP, HTTPS, TELNET, DICT, FILE and LDAP.'

curl:
  configflags:
  - --enable-static
  - --enable-shared
  - --disable-manual
  plugin: autotools
  snap:
  - -bin
  - -lib/*.a
  - -lib/pkgconfig
  - -lib/*.la
  - -include
  - -share
  source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
  source-type: tar

Yup, it’s what I want.

An example

There are two ways to use these parts in your snapcraft.yaml, say this is your parts section

parts:
    client:
       plugin: autotools
       source: .

My client part, which uses sources that sit alongside this snapcraft.yaml, will hypothetically fail to build as it depends on the curl library I don’t yet have. There are some options here to get this going: one is using after in the part definition implicitly, another involves composing, and last but not least, just copy-pasting what snapcraft define curl returned for the part.

Implicitly

The implicit path is really straightforward. It only involves making the part look like:

parts:
    client:
       plugin: autotools
       source: .
       after: [curl]

This will use the cached definition of the part, which may be updated by running snapcraft update.

Composing

What if we like the part, but want to try out a new configure flag or source release? Well we can override pieces of the part; so for the case of wanting to change the source:

parts:
    client:
        plugin: autotools
        source: .
        after: [curl]
    curl:
        source: http://curl.haxx.se/download/curl-7.45.0.tar.bz2

And we will still build curl, but using a newer version. The trick is that the part definition here is missing the plugin entry, thereby instructing snapcraft to look up the full part definition from the cache.

Copy/Pasting

This is the path one would take to get full control over the part. It is as simple as copying the part definition we got from running snapcraft define curl into your own snapcraft.yaml. For the sake of completeness, here is how it would look:

parts:
    client:
        plugin: autotools
        source: .
        after: [curl]
    curl:
        configflags:
            - --enable-static
            - --enable-shared
            - --disable-manual
        plugin: autotools
        snap:
            - -bin
            - -lib/*.a
            - -lib/pkgconfig
            - -lib/*.la
            - -include
            - -share
        source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
        source-type: tar

Sharing your part

Now what if you have a part and want to share it with the rest of the world? It is rather simple really, just head over to https://wiki.ubuntu.com/snapcraft/parts and add it.

In the case of curl, I would write a yaml document that looks like:

origin: https://github.com/sergiusens/curl.git
maintainer: Sergio Schvezov <sergio.schvezov@ubuntu.com>
description:
  A tool and a library (usable from many languages) for
  client side URL transfers, supporting FTP, FTPS, HTTP,
  HTTPS, TELNET, DICT, FILE and LDAP.
project-part: curl

What does this mean? Well, the part itself is not defined on the wiki, just a pointer to it with some meta data, the part is really defined inside a snapcraft.yaml living in the origin we just told it to use.

The extent of the keywords is explained in the documentation (that is an upstream link).

The core idea is that a maintainer decides to share a part. Such a maintainer would add a description that provides an idea of what that part (or collection of parts) does. Then, last but not least, the maintainer declares which parts to expose to the world, as maybe not all of them should be. The main part is exposed as project-part and will carry a top-level name; the maintainer can expose more parts from snapcraft.yaml using the general parts keyword. These parts will be namespaced with the project-part.
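To make that concrete, here is a hypothetical wiki entry (origin, part names and e-mail are made up for illustration) that exposes one extra part next to the main one via the general parts keyword:

```yaml
origin: https://github.com/example/libfoo-part.git
maintainer: Jane Doe <jane.doe@example.com>
description:
  libfoo built with sensible defaults, plus a helper
  part carrying its test fixtures.
project-part: libfoo      # the top-level, globally visible part name
parts: [test-fixtures]    # extra part, namespaced with the project-part
```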

Read more
Steph Wilson

Meet the newest member of the Design Team, project manager Davide Casa. He will be working with the Platform Team to keep us all in check and working towards our goals. I sat down with him to discuss his background, what he thinks makes a good project manager and what his first week was like at Canonical (spoiler alert – he survived it).


You can read Davide’s blog here, and reach out to him on Github and Twitter with @davidedc.

Tell us a bit about your background?

My background is in Computer Science (I did a 5 year degree). I also studied for an MBA in London.

Computer science is a passion of mine. I like to keep up to date with latest trends and play with programming languages. However, I never got paid for it, so it’s more like a hobby now to scratch an artistic itch. I often get asked in interviews: “why aren’t you a coder then?” The simple answer is that it just didn’t happen. I got my first job as a business analyst, which then developed into project management.

What do you think makes a good project manager?

I think the soft skills are incredibly relevant and crucial to the role. For example: gathering what the team’s previous experience of project management was, and what they expect from you, and how deeply and quickly you can change things.

Is project management perceived as a service or is there a practise of ‘thought leadership’?

In tech companies it varies. I’ve worked in Vodafone as a PM and you felt there was a possibility to practice a “thought leadership”, because it is such a huge company and things have to be dealt with in large cycles. Components and designs have to be agreed on in batches, because you can’t hand-wave your way through 100s of changes across a dozen mission-critical modules, it would be too risky. In some other companies less so. We’ll see how it works here.

Apart from calendars, Kanban boards and post-it notes  – what else can be used to help teams collaborate smoothly?

Indeed one of the core values of Agile is “the team”. I think people underestimate the importance of cohesiveness in a team, e.g. how easy it is for people to step forward and make mistakes without fear. A cohesive team is something very precious, and I think that's regularly underestimated. You can easily buy tools and licenses, which are “easy solutions” in a way. The PM should also help to improve the cohesiveness of a team, for example by creating processes that people can rely on in order to avoid attrition and resolve things, and by avoiding treating everything like a special case, to help deal with things “proportionally”.

What brings you to the Open Source world?

I like coding, and to be a good coder, one must read good code. With open source the first thing you do is look around to see what others are doing and then you start to tinker with it. It has almost never been relevant for me to release software without source.

Have you got any side projects you’re currently working on?

I dabble in livecoding, which is an exotic niche of people that do live visuals and sounds with code (see our post on Qtday 2016). I am also part of the Toplap collective which works a lot on those lines too.

I also dabble in creating an exotic desktop system that runs on the web. It’s inspired by the Squeak environment, where everything is an object and is modifiable and inspectable directly within the live system. Everything is draggable, droppable and composable. For example, when a menu pops up you can change any button, both the labelling and the function it performs, or take apart any button and put it anywhere else on the desktop or in any open window. It all happens via “direct manipulation”. Imagine a paint application where at any time while working you can “open” any button from the toolbar and change what the actual painting operation does (John Maeda made such a paint app, actually).

The very first desktop systems all worked that way. There was no concept of a big app or “compile and run again”. Something like a text editor app would just be a text box providing functions. The functions are then embodied in buttons and stuck around the textbox, and voila, then you have your very own flavour of text editor brought to life. Also in these live systems most operations are orthogonal: you can assume you can rotate images, right? Hence by the same token you can rotate anything on the screen. A whole window for example, or text. Two rotating lines and a few labels become a clock. The user can combine simple widgets together to make their own apps on the fly!

What was the most interesting thing you’ve learned in your first week here?

I learned a lot and I suspect that will never stop. The bread and butter here is strategy and design, which in other companies is only just a small area of work. Here it is the core of everything! So it’ll be interesting to see how this ‘strategy’ works. And how the big thinking starts with the visuals or UX in mind, and from that how it steers the whole platform. An exciting example of this can be seen in the Ubuntu Convergence story.

That’s the essence of open source I guess…

Indeed. And the fact that anti-features such as DRM, banners, bloatware, compulsory registrations and basic compilers that need 4GB of installation never live long in it. It’s our desktop after all, is it not?

Read more
Steph Wilson

The Ubuntu App Design Clinic is back! This month members of the Design Team James Mulholland (UX Designer), Jouni Helminen (Visual Designer) and Andrea Bernabei (UX Engineer) sat down with Dan Wood, contributor to the OwnCloud app.

What is OwnCloud?

OwnCloud is an open source, self-hosted file sync and share platform. Access and sync your files, contacts, calendars and bookmarks across your devices.

You can contribute to it here.

We covered:

  • First use case – the first point of entry for the user, maybe a file manager or a possible tooltip introduction.
  • Convergent thinking – how the app looks across different surfaces.
  • Top-level navigation – using the header to display actions, such as settings.
  • Using Online Accounts to sync other accounts to the cloud.
  • Using sync frequency or instant syncing.

If you missed it, or want to watch it again, here it is:

The next App Design Clinic is yet to be confirmed. Stay tuned.

 

Read more