Canonical Voices

UbuntuTouch

In Qt, we can use the Qt global object to obtain information that is useful to our application. The application below retrieves the following information:

Here we can see the application's state, its launch arguments, the application name, the operating system and more.

Our application's design is very simple:


import QtQuick 2.0
import Ubuntu.Components 1.1
import Ubuntu.Components.ListItems 1.0 as ListItems

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "application.liu-xiao-guo"

    /*
     This property enables the application to change orientation
     when the device is rotated. The default is false.
    */
    //automaticOrientation: true

    // Removes the old toolbar and enables new features of the new header.
    useDeprecatedToolbar: false

    width: units.gu(60)
    height: units.gu(85)

    property string locale: ""

    Page {
        title: i18n.tr("application")

        Flickable {
            anchors.fill: parent
            contentHeight: content.childrenRect.height

            Column {
                id: content
                anchors.fill: parent
                spacing: units.gu(0.5)

                ListItems.SingleValue {
                    text: "Active"
                    value: Qt.application.state == Qt.ApplicationActive
                }

                ListItems.SingleValue {
                    text: "State"
                    value: {
                        switch(Qt.application.state) {
                        case Qt.ApplicationActive:
                            return "Active";
                        case Qt.ApplicationInactive:
                            return "Inactive";
                        case Qt.ApplicationSuspended:
                            return "Suspended";
                        case Qt.ApplicationHidden:
                            return "Hidden";
                        default:
                            return "Unknown";
                        }
                    }
                }

                ListItems.SingleValue {
                    text: "Layout direction"
                    value: {
                        switch(Qt.application.layoutDirection) {
                        case Qt.LeftToRight:
                            return "Left to right";
                        case Qt.RightToLeft:
                            return "Right to left";
                        default:
                            return "Unknown";
                        }
                    }
                }

                ListItems.Subtitled {
                    text: "arguments"
                    subText: {
                        console.log("arguments: " + Qt.application.arguments);
                        var args = Qt.application.arguments;
                        return args.join(" ");
                    }
                }

                ListItems.SingleValue {
                    text: "name"
                    value: Qt.application.name
                }

                ListItems.SingleValue {
                    text: "domain"
                    value: {
                        console.log("version: " + Qt.application.version);
                        return Qt.application.domain;
                    }
                }

                ListItems.SingleValue {
                    text: "support multiple windows"
                    value: Qt.application.supportsMultipleWindows
                }

                ListItems.SingleValue {
                    text: "OS"
                    value: Qt.platform.os
                }

                ListItems.SingleValue {
                    text: "Locale"
                    value: locale
                }
            }
        }

        Component.onCompleted: {
            var keys = Object.keys(Qt.application);
            for(var i = 0; i < keys.length; i++) {
                var key = keys[i];
                // prints all properties, signals, functions from object
                console.log(key + ' : ' + Qt.application[key]);
            }

            locale = Qt.inputMethod["locale"].nativeLanguageName;
            console.log("locale: " + locale);
        }
    }
}


The full source code of the application is at: git clone https://gitcafe.com/ubuntu/application.git

Posted by UbuntuTouch on 2015/5/28 10:25:59

Read more
UbuntuTouch

We know that JSON data is widely used in many web services. It has come up in several of my earlier articles:


- How to read a local JSON file, query it and display its contents

- How to parse JSON with JavaScript in a QML application


In today's article, I introduce a way of parsing JSON that works much like XmlListModel (which parses XML), but is even simpler and more direct. An introduction to JSONListModel can be found at https://github.com/kromain/qml-utils


Today we will use the sample code provided on the JSONListModel page for the demonstration.


Let's first take a look at how JSONListModel is written:


JSONListModel.qml


/* JSONListModel - a QML ListModel with JSON and JSONPath support
 *
 * Copyright (c) 2012 Romain Pokrzywka (KDAB) (romain@kdab.com)
 * Licensed under the MIT licence (http://opensource.org/licenses/mit-license.php)
 */

import QtQuick 2.0
import "jsonpath.js" as JSONPath

Item {
    property string source: ""
    property string json: ""
    property string query: ""

    property ListModel model : ListModel { id: jsonModel }
    property alias count: jsonModel.count

    onSourceChanged: {
        var xhr = new XMLHttpRequest;
        xhr.open("GET", source);
        xhr.onreadystatechange = function() {
            if (xhr.readyState == XMLHttpRequest.DONE)
                json = xhr.responseText;
        }
        xhr.send();
    }

    onJsonChanged: updateJSONModel()
    onQueryChanged: updateJSONModel()

    function updateJSONModel() {
        jsonModel.clear();

        if ( json === "" )
            return;

        var objectArray = parseJSONString(json, query);
        for ( var key in objectArray ) {
            var jo = objectArray[key];
            jsonModel.append( jo );
        }
    }

    function parseJSONString(jsonString, jsonPathQuery) {
        var objectArray = JSON.parse(jsonString);
        if ( jsonPathQuery !== "" )
            objectArray = JSONPath.jsonPath(objectArray, jsonPathQuery);

        return objectArray;
    }
}

Here we can learn how to wrap something up as a module and make its model available to the outside world. The way this module is written is well worth recommending: for larger pieces of software, this approach lets us implement a model on its own and have it consumed by other modules, cleanly separating the UI from the data.


First, the module defines a source property. As soon as source is set, onSourceChanged is invoked automatically and issues a request to fetch the data. When the data arrives, the value of json changes, which in turn triggers onJsonChanged. Finally, in updateJSONModel(), the jsonModel defined by the module is updated so that an external ListView or other control can use it. Likewise, whenever query changes, the model's data is rebuilt.


Just like my earlier examples, it uses the jsonpath.js module to provide XPath-like (JSONPath) queries.


In the example, we can use JSONListModel directly to fill our ListView with data:


import QtQuick 2.0
import Ubuntu.Components 1.1

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "test1.liu-xiao-guo"

    /*
     This property enables the application to change orientation
     when the device is rotated. The default is false.
    */
    //automaticOrientation: true

    // Removes the old toolbar and enables new features of the new header.
    useDeprecatedToolbar: false

    width: units.gu(100)
    height: units.gu(75)

    Page {
        title: i18n.tr("test1")

        Column {
            spacing: units.gu(1)
            anchors {
                margins: units.gu(2)
                fill: parent
            }

            Label {
                id: label
                objectName: "label"

                text: i18n.tr("Hello..")
            }

            Button {
                objectName: "button"
                width: parent.width

                text: i18n.tr("Tap me!")

                onClicked: {
                    label.text = i18n.tr("..world!")
                }
            }
        }
    }
}


jsonData.txt


{ "store": {
    "book": [
      { "category": "reference",
        "author": "Nigel Rees",
        "title": "Sayings of the Century",
        "price": 8.95
      },
      { "category": "fiction",
        "author": "Evelyn Waugh",
        "title": "Sword of Honour",
        "price": 12.99
      },
      { "category": "fiction",
        "author": "Herman Melville",
        "title": "Moby Dick",
        "isbn": "0-553-21311-3",
        "price": 8.99
      },
      { "category": "fiction",
        "author": "J. R. R. Tolkien",
        "title": "The Lord of the Rings",
        "isbn": "0-395-19395-8",
        "price": 22.99
      }
    ],
    "bicycle": {
      "color": "red",
      "price": 19.95
    }
  }
}



Using different queries, we can extract different subsets of the JSON data from the same file.
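
As a rough sketch of that usage (an illustration rather than the project's actual Main.qml, and assuming JSONListModel.qml and jsonpath.js sit next to the file), a ListView can be fed from the jsonData.txt shown above like this:

import QtQuick 2.0

Item {
    width: 400
    height: 600

    JSONListModel {
        id: booksModel
        source: "jsonData.txt"      // the local file listed above
        query: "$.store.book[*]"    // every book; e.g. "$..book[?(@.price<10)]" keeps only the cheaper ones
    }

    ListView {
        anchors.fill: parent
        model: booksModel.model     // the inner ListModel exposed by the component
        delegate: Text {
            // role names come straight from the keys of each appended JSON object
            text: title + " by " + author + " (" + price + ")"
        }
    }
}

Changing the query property at runtime clears and refills the model, which is what makes it easy to show different subsets of the same document.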




The project's source code is at: git clone https://gitcafe.com/ubuntu/jsonlistmodeltest.git



Posted by UbuntuTouch on 2015/5/28 14:48:14

Read more
UbuntuTouch

The QML API currently offers no corresponding API for recording audio, so we have to use the Qt C++ API QAudioRecorder to do the recording. In this article we show how to use this API to record audio.


First, we create an application from the "QML App with C++ plugin (qmake)" template. Note that a qmake project can only run on a 15.04 or later target.


To do the recording, I created a class called "AudioRecorder":


audiorecorder.h


#ifndef AUDIORECORDER_H
#define AUDIORECORDER_H

#include <QAudioRecorder>
#include <QUrl>

class AudioRecorder : public QObject
{
    Q_OBJECT
    Q_PROPERTY ( bool recording READ recording NOTIFY recordingChanged )
    Q_PROPERTY(QString name READ name WRITE setName NOTIFY nameChanged)

public:
    explicit AudioRecorder(QObject *parent = 0);
    bool recording() const;
    QString name() const;

    Q_INVOKABLE QStringList supportedAudioCodecs();
    Q_INVOKABLE QStringList supportedContainers();
    Q_INVOKABLE QUrl path() {
        return m_path;
    }

signals:
    void recordingChanged(bool);
    void nameChanged(QString);

public slots:
    void setName(QString name);
    void setRecording(bool recording );
    void record();
    void stop();

private:
    QString getFilePath(const QString filename) const;

private:
    QAudioRecorder * m_audioRecorder;
    bool m_recording;
    QString m_name;
    QUrl m_path;
};

#endif // AUDIORECORDER_H


audiorecorder.cpp


#include <QUrl>
#include <QStandardPaths>
#include <QDir>

#include "audiorecorder.h"

AudioRecorder::AudioRecorder(QObject *parent) : QObject(parent)
{
    m_audioRecorder = new QAudioRecorder( this );
    QAudioEncoderSettings audioSettings;
    audioSettings.setCodec("audio/PCM");
    audioSettings.setQuality(QMultimedia::HighQuality);
    m_audioRecorder->setEncodingSettings(audioSettings);
    // https://forum.qt.io/topic/42541/recording-audio-using-qtaudiorecorder/2
    m_audioRecorder->setContainerFormat("wav");
    m_recording = false;
}

bool AudioRecorder::recording() const
{
    return m_recording;
}

void AudioRecorder::setRecording(bool recording ) {
    if (m_recording == recording)
        return;

    m_recording = recording;
    emit recordingChanged(m_recording);
}


void AudioRecorder::record()
{
    qDebug() << "Entering record!";

    if ( m_audioRecorder->state() == QMediaRecorder::StoppedState ) {
        qDebug() << "recording....! ";

        m_audioRecorder->record ( );

        m_recording = true;
        qDebug() << "m_recording: " << m_recording;
        emit recordingChanged(m_recording);
    }
}

void AudioRecorder::stop()
{
    qDebug() << "Entering stop!";

    if ( m_audioRecorder->state() == QMediaRecorder::RecordingState ) {
        qDebug() << "Stopping....";
        m_audioRecorder->stop();
        m_recording = false;
        emit recordingChanged(m_recording);
    }
}

QString AudioRecorder::name() const
{
    return m_name;
}

void AudioRecorder::setName(QString name)
{
    if (m_name == name)
        return;

    m_name = name;
    emit nameChanged(name);

    // at the same time update the path
    m_path = QUrl(getFilePath(name));

    // set the path
    m_audioRecorder->setOutputLocation(m_path);
}

QStringList AudioRecorder::supportedAudioCodecs() {
    return m_audioRecorder->supportedAudioCodecs();
}

QStringList AudioRecorder::supportedContainers() {
    return m_audioRecorder->supportedContainers();
}


QString AudioRecorder::getFilePath(const QString filename) const
{
    QString writablePath = QStandardPaths::
            writableLocation(QStandardPaths::DataLocation);
    qDebug() << "writablePath: " << writablePath;

    QString absolutePath = QDir(writablePath).absolutePath();
    qDebug() << "absolutePath: " << absolutePath;

    // We need to make sure the storage path exists (mkpath also creates missing parents)
    QDir dir(absolutePath);
    if ( dir.mkpath(absolutePath) ) {
        qDebug() << "Successfully created the path!";
    }

    QString path = absolutePath + "/" + filename;

    qDebug() << "path: " << path;

    return path;
}

Here we use QStandardPaths to obtain a directory that the application is allowed to write to on the Ubuntu phone. Using the QAudioRecorder API itself is also very straightforward.
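
Main.qml below does "import AudioRecorder 1.0", so the class has to be registered with the QML engine. The plugin file is not listed in this article, but the "QML App with C++ plugin (qmake)" template generates one that looks roughly like the following sketch (the class name here is an assumption, not the project's actual code):

#include <QQmlExtensionPlugin>
#include <qqml.h>

#include "audiorecorder.h"

class AudioRecorderPlugin : public QQmlExtensionPlugin
{
    Q_OBJECT
    Q_PLUGIN_METADATA(IID "org.qt-project.Qt.QQmlExtensionInterface")

public:
    void registerTypes(const char *uri)
    {
        // uri would be "AudioRecorder" so that "import AudioRecorder 1.0"
        // resolves to the AudioRecorder type defined above
        qmlRegisterType<AudioRecorder>(uri, 1, 0, "AudioRecorder");
    }
};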

Our Main.qml interface is also very simple:


Main.qml


import QtQuick 2.0
import Ubuntu.Components 1.1
import QtMultimedia 5.0
import AudioRecorder 1.0

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "audiorecorder.liu-xiao-guo"

    /*
     This property enables the application to change orientation
     when the device is rotated. The default is false.
    */
    //automaticOrientation: true

    // Removes the old toolbar and enables new features of the new header.
    useDeprecatedToolbar: false

    width: units.gu(60)
    height: units.gu(85)

    Page {
        title: i18n.tr("AudioRecorder")

        AudioRecorder {
            id: audio
            name: "sample.wav"

            onRecordingChanged: {
                console.log("recording: " + recording);
            }
        }

        MediaPlayer {
            id: player
            autoPlay: true
            volume: 1.0
        }

        Column {
            anchors.fill: parent
            spacing: units.gu(1)

            Label {
                text: "Supported Audio codecs:"
            }
            ListView {
                id: audiocodecs
                width: parent.width
                height: audiocodecs.contentHeight
                model:audio.supportedAudioCodecs()
                delegate: Text {
                    text: modelData
                }
            }

            Rectangle {
                width: parent.width
                height: units.gu(0.1)
            }

            Label {
                text: "Supported Containers:"
            }
            ListView {
                id: audiocontainer
                width: parent.width
                height: audiocontainer.contentHeight
                model:audio.supportedContainers()
                delegate: Text {
                    text: modelData
                }
            }
        }

        Row {
            anchors.bottom: parent.bottom
            anchors.bottomMargin: units.gu(2)
            anchors.horizontalCenter: parent.horizontalCenter
            anchors.margins: units.gu(2)

            spacing: units.gu(2)

            Button {
                id: record
                text: "Record Audio"

                enabled: !audio.recording

                onClicked: {
                    audio.record();
                }
            }

            Button {
                id: stop
                text: "Stop"

                onClicked: {
                    audio.stop();
                    player.stop();
                }
            }

            Button {
                id: play
                text: "Play Audio"

                onClicked: {
                    console.log("path: " + audio.path() );
                    player.source = audio.path();
                    player.play();
                }
            }

        }
    }
}


In QML, we use it directly:

        AudioRecorder {
            id: audio
            name: "sample.wav"

            onRecordingChanged: {
                console.log("recording: " + recording);
            }
        }


We can use the "Record Audio" button to start recording.

From testing on the phone, the recording quality is very good and very clear. We can play it back with the "Play Audio" button.

The full project source is at: git clone https://gitcafe.com/ubuntu/audiorecorder.git

Posted by UbuntuTouch on 2015/5/29 13:07:52

Read more
UbuntuTouch

Dialogs come up in many QML applications. Developers who are new to QML sometimes don't know where to start; perhaps because QML is so flexible, there are a great many ways to implement one. Here are a few simple approaches.


1) Use the standard API provided by the Ubuntu SDK


We can use the standard Dialog component provided by the Ubuntu SDK. It is very simple to use:

import QtQuick 2.4
import Ubuntu.Components 1.2
import Ubuntu.Components.Popups 1.0
Item {
    width: units.gu(80)
    height: units.gu(80)
    Component {
         id: dialog
         Dialog {
             id: dialogue
             title: "Save file"
             text: "Are you sure that you want to save this file?"
             Button {
                 text: "cancel"
                 onClicked: PopupUtils.close(dialogue)
             }
             Button {
                 text: "overwrite previous version"
                 color: UbuntuColors.orange
                 onClicked: PopupUtils.close(dialogue)
             }
             Button {
                 text: "save a copy"
                 color: UbuntuColors.orange
                 onClicked: PopupUtils.close(dialogue)
             }
         }
    }
    Button {
        anchors.centerIn: parent
        id: saveButton
        text: "save"
        onClicked: PopupUtils.open(dialog)
    }
}


As the documentation explains, we need to import the Ubuntu.Components.Popups 1.0 module for this. It is currently the simplest approach, and it gives us the Ubuntu look and feel. When opening the dialog we can also pass in our caller:

        Component {
            id: dialog
            Dialog {
                id: dialogue
                title: "Save file"
                text: "Are you sure that you want to save this file?"
                Button {
                    text: "cancel"
                    onClicked: PopupUtils.close(dialogue)
                }
                Button {
                    text: "overwrite previous version"
                    color: UbuntuColors.orange
                    onClicked: PopupUtils.close(dialogue)
                }
                Button {
                    text: "save a copy"
                    color: UbuntuColors.orange
                    onClicked: {
                        console.log("caller: " + caller );
                        if ( caller !== null ) {
                            caller.callback("file is saved!");
                        }

                        PopupUtils.close(dialogue);
                    }
                }
            }
        }

        Column {
            anchors.centerIn: parent
            spacing: units.gu(2)

            Button {
                id: saveButton
                text: "save"
                onClicked: PopupUtils.open(dialog)

                function callback(message) {
                    console.log("returned: " + message);
                }
            }

            Button {
                id: anotherSave
                text: "Another Save"
                onClicked: PopupUtils.open(dialog, anotherSave)

                function callback(message) {
                    console.log("returned: " + message);
                }
            }
}


This way the dialog can use the caller to "call back" into our own method.


2) Create our own Dialog component and instantiate it dynamically


We can create our own Dialog.qml file to hold everything our dialog needs:

import QtQuick 2.0

Item {
    id: dialogComponent
    anchors.fill: parent

    // Add a simple animation to fade in the popup
    // let the opacity go from 0 to 1 in 400ms
    PropertyAnimation { target: dialogComponent; property: "opacity";
        duration: 400; from: 0; to: 1;
        easing.type: Easing.InOutQuad ; running: true }

    // This rectangle is an overlay that partially shows the parent through it;
    // clicking outside of the 'dialog' popup will do 'nothing'
    Rectangle {
        anchors.fill: parent
        id: overlay
        color: "#000000"
        opacity: 0.6
        // add a mouse area so that clicks outside
        // the dialog window will not do anything
        MouseArea {
            anchors.fill: parent
        }
    }

    // This rectangle is the actual popup
    Rectangle {
        id: dialogWindow
        width: 300
        height: 300
        radius: 10
        anchors.centerIn: parent

        Text {
            anchors.centerIn: parent
            text: "This is the popup"
        }

        // For demo I do not put any buttons, or other fancy stuff on the popup
        // clicking the whole dialogWindow will dismiss it
        MouseArea{
            anchors.fill: parent
            onClicked: {
                // destroy object is needed when you dynamically create it
                dialogComponent.destroy()
            }
        }
    }
}

We can create this Dialog dynamically like this:

            Button {
                id: mydialog
                text: "My customized dialog"
                onClicked: {
                    Qt.createComponent("Dialog.qml").createObject(mainpage, {});
                }
            }

This dynamically created Dialog dismisses itself when its own page is clicked:

        MouseArea{
            anchors.fill: parent
            onClicked: {
                // destroy object is needed when you dynamically create it
                dialogComponent.destroy()
            }
        }

The design here is up to you: we could add our own buttons to destroy the created Dialog and pass back the user's choice at the same time. The advantage of this approach is that the UI is not tied to a particular platform such as Ubuntu, so the same code can run on multiple platforms.


3) Create an invisible page and show it when needed


This approach is also very simple and is used in many QML applications. When we build a page, we can create the invisible parts at the same time; they may overlap, but only one of them is shown at any given moment. The page that acts as the "Dialog" is normally invisible at startup and is only made visible when needed.

To illustrate, we again create our own AnotherDialog.qml file:

import QtQuick 2.0

Item {
    id: dialogComponent
    anchors.fill: parent

    // Add a simple animation to fade in the popup
    // let the opacity go from 0 to 1 in 400ms
    PropertyAnimation { target: dialogComponent; property: "opacity";
        duration: 400; from: 0; to: 1;
        easing.type: Easing.InOutQuad ; running: true }

    // This rectangle is an overlay that partially shows the parent through it;
    // clicking outside of the 'dialog' popup will do 'nothing'
    Rectangle {
        anchors.fill: parent
        id: overlay
        color: "#000000"
        opacity: 0.6
        // add a mouse area so that clicks outside
        // the dialog window will not do anything
        MouseArea {
            anchors.fill: parent
        }
    }

    // This rectangle is the actual popup
    Rectangle {
        id: dialogWindow
        width: 300
        height: 300
        radius: 10
        anchors.centerIn: parent

        Text {
            anchors.centerIn: parent
            text: "This is the popup"
        }

        // For demo I do not put any buttons, or other fancy stuff on the popup
        // clicking the whole dialogWindow will dismiss it
        MouseArea{
            anchors.fill: parent
            onClicked: {
                // destroy object is needed when you dynamically create it
                console.log("it is clicked!");
                dialogComponent.visible = false;
            }
        }
    }
}


In our Main.qml:

            Button {
                id: myanotherdialog
                text: "My another dialog"
                onClicked: {
                    dialog1.visible = true;
                }
            }
 
  ...


        AnotherDialog {
            id: dialog1
            anchors.fill: parent
            visible: false
        }

With this approach we can design any dialog we like, and if we avoid Ubuntu-specific APIs it will run on whatever platform we want. Compared with the second approach, however, it uses extra memory, since the dialog is loaded into memory as soon as the application starts.

Running our test application:


The full project source code is at: git clone https://gitcafe.com/ubuntu/dialog.git



Posted by UbuntuTouch on 2015/5/29 15:38:27

Read more
facundo


A while ago I told you about a tree I built to extract word prefixes.

At work I'm looking into how to build an autocompleter. So, after reading around a bit, I decided to try the tree I already had.

I had never thrown this much data at it, but the truth is it worked like a charm.

On the other hand, there was one detail I still needed to solve: I wanted word lookup to tolerate spelling mistakes. That is, searching for "maise" should find "maizena".

I found a rather wild paper, Efficient Error-tolerant Query Autocompletion, which however showed how to tolerate errors when searching complete words, not prefixes. Still, I applied ideas from it, and after a couple of days of work I got what I wanted. But when loading the million and a half records I have to load, it blew up on memory!

After a few obvious optimizations, I hit on the idea of deduplicating the internal subtrees. What is deduplicating? Deduplication means that if I have an object A, and then another object B that turns out to be equal to A, I can use A directly in both cases, discard B (which frees memory), and be done.
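
The actual tree code is more involved, but as a generic illustration of the idea (a sketch that assumes the trie is a plain dict mapping characters to child dicts), deduplication can be done bottom-up with a cache of canonical subtrees:

_cache = {}

def dedup(node):
    """Return a canonical instance of `node`, sharing equal subtrees."""
    # Deduplicate the children first, so that equal subtrees end up being
    # the very same objects.
    children = {char: dedup(child) for char, child in node.items()}
    # Dicts are not hashable, so build a hashable key from the already
    # canonical children; id() is stable because cached nodes stay alive.
    key = tuple(sorted((char, id(child)) for char, child in children.items()))
    if key not in _cache:
        _cache[key] = children
    return _cache[key]

Every leaf collapses to a single shared empty dict, and any two branches with identical children collapse to one object, which is how a tree of tens of millions of nodes can end up with only a couple of million unique ones.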

Deduplicating dictionaries is not a trivial matter. I threw the question at the PyAr list, and within a few hours I had everything working correctly. Now not only does it not blow up, it uses surprisingly little memory!

    Memory usage after loading the tree: rss: +586 MB  vms: +586 MB
    Time to load the tree: 327190.99 msec
    <WordTree at 3068071276 [tau=1]: 1478347 words 30015540 (2201293) nodes (unique)>

A million and a half words, 30 million nodes (of which 2.2 million are unique), taking up 590 MB of memory. Not bad, right? That it takes 5.5 minutes to build the whole structure is a problem; I'll take a proper look at that next week.

All the code is here.

Read more
Daniel Holbach

Next week we are going to have another Ubuntu Online Summit (5-7 May 2015). This is (among many other things) a great time for you to get involved with, learn about and help shape Ubuntu Snappy.

As I said in my last blog post I’m very impressed to see the general level of interest in Ubuntu Snappy given how new it is. It’ll be great to see who is joining the sessions and who is going to get involved.

For those of you who are new to it: Ubuntu Online Summit is an open event where we’ll plan the next Ubuntu release in hangouts and on IRC. You can

  • tune in
  • ask questions
  • bring up ideas
  • get to know the team
  • help out :-)

This is the preliminary schedule. Sessions might still move around a bit, but be sure to register for the event and subscribe to the blueprint/session – that way you are going to be notified of ongoing work and discussion.

Tuesday, 5th May 2015

Wednesday, 6th May 2015

Thursday, 7th May 2015

Please note that we are likely going to add more sessions, so you should definitely keep your eyes open and check the schedule every now and then.

I’m looking forward to seeing you all and seeing us shape what Snappy is going to be! See you next week!

Read more
David Callé

Internationalizing your QML app

 

As a developer, you probably want to see your apps in many hands. One way to make it happen is to enable your application for translation.

With minimal effort, you can mark your application strings for translation, expose them to community translators and integrate these translations into your package. The translations building process is handled by the SDK itself and if you happen to use Launchpad, translators will quickly see your project and help you, but you still need to mark your user-visible strings as translatable.
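
As a minimal sketch of what that marking looks like in a QML app built with the Ubuntu SDK (an illustration, not code from the tutorial itself), every user-visible string is simply wrapped in i18n.tr():

import QtQuick 2.0
import Ubuntu.Components 1.1

Page {
    title: i18n.tr("Settings")

    Button {
        anchors.centerIn: parent
        // translators see exactly this string once the translation
        // template has been generated as part of the build
        text: i18n.tr("Save changes")
    }
}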

Let's get started ›

Read more
Loïc Molinari

A magnifying glass in QML

To create sharp visual components, we need to make sure our renderings look good at the pixel level. This is a common task and the terms precision and pixel-perfectness have become ubiquitous in discussions among programmers and designers at Canonical. In the last years, the industry started to increase the pixel density of screens, again (remember the CRT era), resulting in a higher number of pixels within a specified space (see Retina Display for instance). A consequence is that jaggies are less visible than before because we are reaching the point where the pixels are small enough that the eye is not able to detect them. In an idealized world of high density screens that would completely remove the need of anti-aliasing algorithms to smooth edges, but the fact of the matter is that we are not there yet and we will still have to thoroughly inspect the quality of anti-aliasing algorithms for a while.

Handheld magnifying glass

At a previous job, a colleague of mine used to keep a handheld magnifying glass on his desk. I was quite amused to see him glued to his screen validating the visual quality of commits with this thing. As the graphics engine programmer, I barely remember the reason for which I never proposed the inclusion of a software magnifier, it could be because of the overloaded backlog we had to deal with at the time but I guess it actually was just out of sheer mischief. Most desktop environments include a software magnifier, but depending on its quality (efficiency and ease of use), it often makes sense to integrate a custom magnifier directly in the application being developed (it makes less sense to ship it in release builds though...). This article explains how to implement an efficient one with QML using offscreen framebuffers and shaders.

Offscreen framebuffers (exposed as FBOs in OpenGL), vertex shaders and fragment shaders are now widely available in mobile and mid-range GPUs, allowing the creation of interesting real-time post-processing effects for most devices on the market. Magnification, or to be more precise zooming & panning (magnification solely being the process of rendering an image at a higher scale), is one of them. In low-level graphics programming terms, all it takes is a first pass that renders the scene into an FBO and a second pass that renders a texture-mapped quad to the default framebuffer, reading the FBO as a texture. Image zooming and panning is a basic 2D scale and translate transformation that can be implemented efficiently by tweaking the texture coordinates used to sample the FBO in the second pass. The vertex shader, executed for the 4 vertices making up our quad, easily takes care of it using a single multiply-add op (transformed_coords = scale * coords + translation), and the hardware accelerated rasterizer and texture units make the actual rendering very efficient. In order to clearly distinguish the magnified pixels, it is important to use a simple nearest neighbour filter. These low-level bits are nicely exposed to QML through the ShaderEffectSource and ShaderEffect items. The former allows rendering a given Item into an FBO and the latter provides support for quads rendered using custom vertex and fragment shaders.

Here is the QML code of the magnifier:

import QtQuick 2.4

Item {
    // Public properties.
    property Item scene: null
    property MouseArea area: null

    id: root
    visible: scene != null
    property real __scaling: 1.0
    property variant __translation: Qt.point(0.0, 0.0)

    // The FBO abstraction handling our first offscreen pass.
    ShaderEffectSource {
        id: effectSource
        anchors.fill: parent
        sourceItem: scene
        hideSource: scene != null
        visible: false
        smooth: false  // Nearest neighbour texture filtering.
    }

    // The shader abstraction handling our second pass with the
    // translation and scaling in the vertex shader and the simple
    // texturing from the FBO in the fragment shader.

    ShaderEffect {
            id: effect
            anchors.fill: parent
            property real scaling: __scaling
            property variant translation: __translation
            property variant texture: effectSource

            vertexShader: "
                uniform highp mat4 qt_Matrix;
                uniform mediump float scaling;
                uniform mediump vec2 translation;
                attribute highp vec4 qt_Vertex;
                attribute mediump vec2 qt_MultiTexCoord0;
                varying vec2 texCoord;
                void main() {
                    texCoord = qt_MultiTexCoord0 * vec2(scaling) + translation;
                    gl_Position = qt_Matrix * qt_Vertex;
                }"

            fragmentShader: "
                uniform sampler2D texture;
                uniform lowp float qt_Opacity;
                varying mediump vec2 texCoord;
                void main() {
                    gl_FragColor = texture2D(texture, texCoord) * qt_Opacity;
                }"

    }

    // Mouse handling.
    Connections {
        target: scene != null ? area : null
        [...]
     }
}

 

And here is how to use it:

import QtQuick 2.4

Item {
    id: root

    Item {
        id: scene
        anchors.fill: parent
    }

    ZoomPan {
        id: zoomPan
        anchors.fill: parent
        scene: scene
        area: mouseArea
    }

    MouseArea {
        id: mouseArea
        anchors.fill: parent
        enabled: true
        hoverEnabled: true
        acceptedButtons: Qt.AllButtons
    }
}

 

Mouse handling has been snipped off the code for conciseness but it can be studied directly from the code repository. One important point to notice is that for zooming to be a pleasant experience, it has to be implemented using a logarithmic scale as opposed to a linear scale. Each scale value at a zooming level is the previous one multiplied by the desired scale factor, so a scale factor of 2 and a zooming level n give a scale value of 2^n. Another point is that to scale an image up, the range of its texture coordinates must be scaled down; this explains why the actual scaling is inverted. So a scale value of 2^n gives an actual scaling of 2^-n. A bit counterintuitive at first…
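
As a rough idea of what the snipped mouse handling has to do (a sketch with assumed names, not the repository's actual code), the wheel handler only needs to keep an integer zoom level and derive the inverted, logarithmic texture scaling from it:

    // Inside the magnifier item: one wheel notch changes the zoom level by one,
    // and the texture scaling is zoomFactor^(-level), the inverse of the 2^n
    // scale value described above.
    property int __zoomLevel: 0
    readonly property real __zoomFactor: 2.0

    Connections {
        target: scene != null ? area : null
        onWheel: {
            __zoomLevel = Math.max(0, __zoomLevel + (wheel.angleDelta.y > 0 ? 1 : -1));
            __scaling = Math.pow(__zoomFactor, -__zoomLevel);
            // __translation would be updated here as well to keep the zoom
            // centred under the cursor.
        }
    }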

We’re done with the theory. Let’s have a look at the final result:

 

This technique helped me in the making of several visual elements, and I would be glad if other programmers find it useful too. Zooming and panning is a very common feature in image viewers, and the technique could be adapted for that use case too (with potentially some tweaks to support tiling of big pictures). Maybe that would be a good addition to the Ubuntu UI toolkit; don’t hesitate to ask if you would like official support for it.

The source code is available on launchpad:

Read more
Daniel Holbach

15.04 is out!

ubuntu.com

 

And another Ubuntu release went out the door. I can’t believe that it’s the 22nd Ubuntu release already.

There’s a lot to be excited about in 15.04. The first phone powered by Ubuntu went out to customers and new devices are in the pipeline. The underpinnings of the various variants of Ubuntu are slowly converging, new Ubuntu flavours saw the light of day (MATE and Desktop next), new features landed, new apps added, more automated tests were added, etc. The future of Ubuntu is looking very bright.

What’s Ubuntu Core?

One thing I’m super happy about is a very very new addition: Ubuntu Core and snappy. What does it offer? It gives you a minimal Ubuntu system, automatic and bulletproof updates with rollback, an app store and very straight-forward enablement and packaging practices.

It has been brilliant to watch the snappy-devel@ and snappy-app-devel@ mailing lists in recent weeks and see how many enthusiasts, hobbyists, hardware manufacturers, porters and others get interested and get started. If you have a look at Dustin’s blog post, you get a good idea of what’s happening. It also features a video of Mark, who explains how Ubuntu has adapted to the demands of a changing IT world.

One fantastic example of how Ubuntu Snappy is already powering devices you had never thought of is the Erle-Copter. (If you can’t see the video, check out this link.)

It’s simply beautiful how product builders and hobbyists can now focus on what they’re interested in: building a tool, an appliance, a robot, something crazy, something people will love or something which might change a small part of the world somewhere. What’s taken out of the equation by Ubuntu is: having to maintain a linux distro.

Maintaining a linux distro

Whenever I got a new device in my home that I could SSH into, I was happy and proud. I always felt: “wow, they get it – they’re using open source software, they’re using linux”. This feeling was replaced at some stage, when I realised how rarely my NAS or my router received system updates. When I checked the changelog entries of those updates, I found that only some of the important CVEs of the last year were mentioned, and sometimes only “feature updates” were listed.

To me it’s clear that not all product builders or hardware companies collaborate with the NSA and create backdoors on purpose, but it’s hard work to maintain a linux stack and to do it responsibly.

That’s why I feel Ubuntu Core is an offering that “has legs” (as Mark Shuttleworth would say): as somebody who wants to focus on building a great product or solving a specific use case, you can do just that. You can ship your business logic in a snap on top of Ubuntu Core and be done with it. Brilliant!

What’s next?

Next week is Ubuntu Online Summit (5-7 May). There we are going to discuss the plans for the next time and that’s where you can get involved, ask questions, bring up your ideas and get to know the folks who are working on it now.

I’ll write a separate blog post in the coming days explaining what’s happening next week, until then feel free to have a look at:

 

Read more
Prakash

For the first time Amazon has revealed its numbers for AWS.

In its latest financial earnings report, Amazon said AWS grew 49 percent in 2014, pulling in $4.6 billion in revenue. After reaching $1.57 billion in the first quarter of this year, AWS is on track for $6.23 billion in sales by year’s end, the company said. Though its cloud business still accounted for only 7 percent of the company’s overall quarterly revenue of $22.72 billion, AWS is growing at a much faster rate than the rest of Amazon (AWS grew 49 percent, while the company’s core North American business grew 22 percent). And contrary to what the company has indicated in the past, its margins are significantly higher with AWS.

Read More: http://www.wired.com/2015/04/amazons-cloud-is-the-best-part-of-its-business-aws/

Read more
bigjools

Why?

I recently had cause to try to get federated logins working on Openstack, using Kerberos as an identity provider. I couldn’t find anything on the Internet that described this in a simple way that is understandable by a relative newbie to Openstack, so this post is attempting to do that, because it has taken me a long time to find and digest all the info scattered around. Unfortunately the actual Openstack docs are a little incoherent at the moment.

Assumptions

  • I’ve tried to get this working on older versions of Openstack but the reality is that unless you’re using Kilo or above it is going to be an uphill task, as the various parts (changes in Keystone and Horizon) don’t really come together until that release.
  • I’m only covering the case of getting this working in devstack.
  • I’m assuming you know a little about Kerberos, but not too much :)
  • I’m assuming you already have a fairly vanilla installation of Kilo devstack in a separate VM or container.
  • I use Ubuntu server. Some things will almost certainly need tweaking for other OSes.

Overview

The federated logins in Openstack work by using Apache modules to provide a remote user ID, rather than credentials in Keystone. This allows for a lot of flexibility but also provides a lot of pain points as there is a huge amount of configuration. The changes described below show how to configure Apache, Horizon and Keystone to do all of this.

Important! Follow these instructions very carefully. Kerberos is extremely fussy, and the configuration in Openstack is rather convoluted.

Pre-requisites

If you don’t already have a Kerberos server, you can install one by following https://help.ubuntu.com/community/Kerberos

The Kerberos server needs a service principal for Apache so that Apache can connect. You need to generate a keytab for Apache, and to do that you need to know the hostname for the container/VM where you are running devstack and Apache. Assuming it’s simply called 'devstackhost':

$ kadmin -p <your admin principal>
kadmin: addprinc -randkey HTTP/devstackhost
kadmin: ktadd -k keytab.devstackhost HTTP/devstackhost

This will write a file called keytab.devstackhost, you need to copy it to your devstack host under /etc/apache2/auth/

You can test that this works with:

$ kinit -k -t /etc/apache2/auth/keytab.devstackhost HTTP/devstackhost

You may need to install the krb5-user package to get kinit. If there is no problem then the command prompt just reappears with no error. If it fails then check that you got the keytab filename right and that the principal name is correct. You can also try using kinit with a known user to see if the underlying Kerberos install is right (the realm and the key server must have been configured correctly, installing any kerberos package usually prompts to set these up).

Finally, the keytab file must be owned by www-data and read/write only by that user:

$ sudo chown www-data /etc/apache2/auth/keytab.devstackhost
$ sudo chmod 0600 /etc/apache2/auth/keytab.devstackhost

Apache Configuration

Install the Apache Kerberos module:

$ sudo apt-get install libapache2-mod-auth-kerb

Edit the /etc/apache2/sites-enabled/keystone.conf file. You need to make sure the mod_auth_kerb module is installed, and add extra Kerberos config.

LoadModule auth_kerb_module modules/mod_auth_kerb.so

<VirtualHost *:5000>

 ...

 # KERB_ID must match the IdP set in Openstack.
 SetEnv KERB_ID KERB_ID
 
 <Location ~ "kerberos" >
 AuthType Kerberos
 AuthName "Kerberos Login"
 KrbMethodNegotiate on
 KrbServiceName HTTP
 KrbSaveCredentials on
 KrbLocalUserMapping on
 KrbAuthRealms MY-REALM.COM
 Krb5Keytab /etc/apache2/auth/keytab.devstackhost
 KrbMethodK5Passwd on #optional-- if 'off' makes GSSAPI SPNEGO a requirement
 Require valid-user
 </Location>

Note:

  • Don’t forget to edit the KrbAuthRealms setting to your own realm.
  • Don’t forget to edit Krb5Keytab to match your keytab filename
  • Most browsers don’t support SPNEGO out of the box, so KrbMethodK5Passwd is enabled here, which makes the browser pop up one of its own dialogs prompting for credentials (more on that later). If this is off, the browser must support SPNEGO, which fetches the Kerberos credentials from your user environment, assuming the user is already authenticated.
  • If you are using Apache 2.2 (used on Ubuntu 12.04) then KrbServiceName must be configured as HTTP/devstackhost (change devstackhost to match your own host name). This config is so that Apache uses the service principal name that we set up in the Kerberos server above.

Keystone configuration

Federation must be explicitly enabled in the keystone config.
http://docs.openstack.org/developer/keystone/extensions/federation.html explains this, but to summarise:

Edit /etc/keystone/keystone.conf and add the driver:

[federation]
driver = keystone.contrib.federation.backends.sql.Federation
trusted_dashboard = http://devstackhost/auth/websso
sso_callback_template = /etc/keystone/sso_callback_template.html

(Change “devstackhost” again)

Copy the callback template to the right place:

$ cp /opt/stack/keystone/etc/sso_callback_template.html /etc/keystone/

Enable kerberos in the auth section of /etc/keystone/keystone.conf :

[auth]
methods = external,password,token,saml2,kerberos
kerberos = keystone.auth.plugins.mapped.Mapped

Set the remote_id_attribute, which tells Openstack which IdP was used:

[kerberos]
remote_id_attribute = KERB_ID

Add the middleware to keystone-paste.conf. ‘federation_extension’ should be the second last entry in the pipeline:api_v3 entry:

[pipeline:api_v3]
pipeline = sizelimit url_normalize build_auth_context token_auth admin_token_auth json_body ec2_extension_v3 s3_extension simple_cert_extension revoke_extension federation_extension service_v3

Now we have to create the database tables for federation:

$ keystone-manage db_sync --extension federation

Openstack Configuration

Federation must use the v3 API in Keystone. Get the Openstack RC file from the API access tab of Access & Security and then source it to get the shell API credentials set up. Then:

$ export OS_AUTH_URL=http://$HOSTNAME:5000/v3
$ export OS_IDENTITY_API_VERSION=3
$ export OS_USERNAME=admin

Test this by trying something like:

$ openstack project list

Now we have to set up the mapping between remote and local users. I’m going to add a new local group and map all remote users to that group. The mapping is defined with a blob of json and it’s currently very badly documented (although if you delve into the keystone unit tests you’ll see a bunch of examples). Start by making a file called add-mapping.json:

[
    {
        "local": [
            {
                "user": {
                    "name": "{0}",
                    "domain": {"name": "Default"}
                }
            },
            {
                "group": {
                    "id": "GROUP_ID"
                    }
            }
        ],
        "remote": [
            {
                "type": "REMOTE_USER"
            }
        ]
    }
]

Now we need to add this mapping using the openstack shell.

openstack group create krbusers
openstack role add --project demo --group krbusers member
group_id=`openstack group list|grep krbusers|awk '{print $2}'`
openstack identity provider create kerb group_id=$group_id
cat add-mapping.json|sed s^GROUP_ID^$group_id^ > /tmp/mapping.json
openstack mapping create --rules /tmp/mapping.json kerberos_mapping
openstack federation protocol create --identity-provider kerb --mapping kerberos_mapping kerberos
openstack identity provider set --remote-id KERB_ID kerb

(I’ve left out the command prompt so you can copy and paste this directly)

What did we just do there?

In my investigations, the part above took me the longest to figure out due to the current poor state of the docs. But basically:

  • Create a group krbusers to which all federated users will map
  • Make sure the group is in the demo project
  • Create a new identity provider which is linked to the group we just created (the API frustratingly needs the ID, not the name, hence the shell machinations)
  • Create the new mapping, then link it to a new “protocol” called kerberos which connects the mapping to the identity provider.
  • Finally, make sure the remote ID coming from Apache is linked to the identity provider. This makes sure that any requests from Apache are routed to the correct mapping. (Remember above in the Apache configuration that we set KERB_ID in the request environment? This is an arbitrary label but they need to match.)

After all this, we have a new group in Keystone called krbusers that will contain any user provided by Kerberos.

Ok, we’re nearly there! Onwards to …

Horizon Configuration

Web SSO must be enabled in Horizon. Edit the config at /opt/stack/horizon/openstack_dashboard/local/local_settings.py and make sure the following settings are set at the bottom:

WEBSSO_ENABLED = True

WEBSSO_CHOICES = (
("credentials", _("Keystone Credentials")),
("kerberos", _("Kerberos")),
)

WEBSSO_INITIAL_CHOICE="kerberos"

COMPRESS_OFFLINE=True

OPENSTACK_KEYSTONE_DEFAULT_ROLE="Member"

OPENSTACK_HOST="$HOSTNAME"

OPENSTACK_API_VERSIONS = {
"identity": 3
}

OPENSTACK_KEYSTONE_URL="http://$HOSTNAME:5000/v3"

Make sure $HOSTNAME is actually the host name for your devstack instance.

Now, restart apache

$ sudo service apache2 restart

and you should be able to test that the federation part of Keystone is working by visiting this URL

http://$HOSTNAME:5000/v3/OS-FEDERATION/identity_providers/kerb/protocols/kerberos/auth

You’ll get a load of json back if it worked OK.

You can now test the websso part of Horizon by going here:

http://$HOSTNAME:5000/v3/auth/OS-FEDERATION/websso/kerberos?origin=http://$HOSTNAME/auth/websso/

You should get a browser dialog which asks for Kerberos credentials, and if you get through this OK you’ll see the sso_callback_template returned to the browser.

Trying it out!

If you don’t have any users in your Kerberos realm, it’s easy to add one:

$ kadmin
kadmin: addprinc -randkey <NEW USER NAME>
kadmin: cpw -pw <NEW PASSWORD> <NEW USER NAME>

Now visit your Openstack dashboard and you should see something like this:

kerblogin

Click “Connect” and log in and you should be all set.


Read more
David Owen

Let's say that you've got a lot of numbers that represent bitmasks of some kind, and you want to count how many times each bit is on or off across the entire set.  Maybe you're analyzing game positions represented as bitboards for an AI, or trying to find certain types of weaknesses in random-number generators, like in Forge (a successor to crypto-js) or Cryptocat (read the great write-up at Sophos).

So, you write some very straight-forward code to count the bits.  It grabs one bitmask.  If the lowest-order bit is set, it increments the counter for that bit position.  Then, it right-shifts the bitmask and moves to the counter for the next bit.  Repeat that for each bit in the mask, then repeat that for each bitmask:

const int N = 1000000;
unsigned long x[N]; // Assuming sizeof(unsigned long) == 8, or 64 bits.
int counts[64] = {0};

void count_simple(void) {
    for(int i = 0; i < N; i++) {
        int j = 0;
        while(x[i] != 0) {
            counts[j] += x[i] & 1;
            x[i] >>= 1;
            ++j;
        }
    }
}

You run your program, and it works correctly, but it's too slow.  I'll show you how to speed this up.  The technique, which applies to languages like Python or Javascript as well as to C, is both crazy, and crazy-fast!

Continue reading "A crazy fast bit-counting technique"

Read more
David Planella

Ubuntu Online Summit
The 15.04 release frenzy is over, but the next big event in the Ubuntu calendar is just around the corner. In about a week, from the 5th to the 7th of May, the next edition of the Ubuntu Online Summit is taking off. Three days of sessions for developers, designers, advocates, users and all members of our diverse community.

Alongside the developer-oriented discussions you’ll find presentations, workshops, lightning talks and much more. It’s a great opportunity for existing and new members to get together and contribute to the talks, watch a workshop to learn something new, or ask your questions of many of the rockstars who make Ubuntu.

While the schedule is being finalized, here’s an overview (and preview) of the content that you should expect in each one of the tracks:

  • App & scope development: the SDK and developer platform roadmaps, phone core apps planning, developer workshops
  • Cloud: Ubuntu Core on clouds, Juju, Cloud DevOps discussions, charm tutorials, the Charm, OpenStack
  • Community: governance discussions, community event planning, Q+As, how to get involved in Ubuntu
  • Convergence: the road to convergence, the Ubuntu desktop roadmap, requirements and use cases to bring the desktop and phone together
  • Core: snappy Ubuntu Core, snappy post-vivid plans, snappy demos and Q+As
  • Show & Tell: presentations, demos, lightning talks (read: things that break and explode) on a varied range of topics

Joining the summit is easy, you’ll just need to follow the instructions and register for free to the Ubuntu Online Summit >

UOS highlights: back to the desktop, snappy and the road to convergence

This is going to be perhaps one of the most important summits in recent times. After a successful launch of the phone, followed by the exciting announcement and delivery of snappy Ubuntu Core, Ubuntu is entering a new era. An era of lean, secure, minimal and modular systems that can run on the cloud, on Internet-enabled devices, on the desktop and virtually anywhere.

While the focus on development in the last few cycles has been on shaping up and implementing the phone, this doesn’t mean other key parts of the project have been left out. The phone has helped create the platform and tools that will ultimately bring all these projects together, into a converged code base and user experience. From desktop to phone, to the cloud, to things, and back to the desktop.

The Ubuntu 15.10 cycle begins, and so does this exciting new era. The Ubuntu Online Summit will be a unique opportunity to pave the road to convergence and discuss how the next generation of the Ubuntu desktop is built. So the desktop is back on the spotlight, and snappy will be taking the lead role in bringing Ubuntu for devices and desktop together. Expect a week of interesting discussions and of thinking out of the box to get there!

Participating in the Ubuntu Online Summit

Does this whet your appetite? Come and join us at the Summit, learn more and contribute to shaping the future of Ubuntu! There are different ways of taking part in the online event via video hangouts:

  • Participate or watch sessions – everyone is welcome to participate and join a discussion to provide input or offer contribution. If you prefer to take a rear seat, that’s fine too. You can either subscribe to sessions, watch them on your browser or directly join a live hangout. Just remember to register first and learn how to join a session.
  • Propose a session – do you want to take a more active role in contributing to Ubuntu? Do you have a topic you’d like to discuss, or an idea you’d like to implement? Then you’ll probably want to propose a session to make it happen. There is still a week for accepting proposals, so why don’t you go ahead and propose a session?

Looking forward to seeing you all at the Summit!

The post Announcing the next Ubuntu Online Summit appeared first on David Planella.

Read more
Anthony Wong

Hello world!

Welcome to Canonical Voices. This is your first post. Edit or delete it, then start blogging!

Read more
Ben Howard

I am pleased to announce initial Vagrant images [1, 2]. These images are bit-for-bit the same as the KVM images, but have a Cloud-init configuration that allows Snappy to work within the Vagrant workflow.

Vagrant enables a cross platform developer experience on MacOS, Windows or Linux [3].

Note: due to the way that Snappy works, shared file systems within Vagrant is not possible at this time. We are working on getting the shared file system support enabled, but it will take us a little bit to get going.

If you want to use the Vagrant packaged in the Ubuntu archives, run the following in a terminal:

  • sudo apt-get -y install vagrant
  • cd <WORKSPACE>
  • vagrant init http://goo.gl/DO7a9W 
  • vagrant up
  • vagrant ssh
If you use Vagrant from [4] (i.e. on Windows or Mac, or to install the latest Vagrant), then you can run:
  • vagrant init ubuntu/ubuntu-15.04-snappy-core-edge-amd64
  • vagrant up
  • vagrant ssh

These images are a work in progress. If you encounter any issues, please report them to "snappy-devel@lists.ubuntu.com" or ping me (utlemming) on Launchpad.net

---

[1] http://cloud-images.ubuntu.com/snappy/15.04/core/edge/current/core-edge-amd64-vagrant.box
[2] https://atlas.hashicorp.com/ubuntu/boxes/ubuntu-15.04-snappy-core-edge-amd64
[3] https://docs.vagrantup.com/v2/why-vagrant/index.html
[4] https://www.vagrantup.com/downloads.html

Read more
Stéphane Graber

Introduction

For the past 6 months, Serge Hallyn, Tycho Andersen, Chuck Short, Ryan Harper and myself have been very busy working on a new container project called LXD.

Ubuntu 15.04, due to be released this Thursday, will contain LXD 0.7 in its repository. This is still the early days and while we’re confident LXD 0.7 is functional and ready for users to experiment, we still have some work to do before it’s ready for critical production use.

LXD logo

So what’s LXD?

LXD is what we call our container “lightervisor”. The core of LXD is a daemon which offers a REST API to drive full system containers just like you’d drive virtual machines.

The LXD daemon runs on every container host and client tools then connect to those to manage those containers or to move or copy them to another LXD.

We provide two such clients:

  • A command line tool called “lxc”
  • An OpenStack Nova plugin called nova-compute-lxd

The former is mostly aimed at small deployments ranging from a single machine (your laptop) to a few dozen hosts. The latter seamlessly integrates inside your OpenStack infrastructure and lets you manage containers exactly like you would virtual machines.

Why LXD?

LXC has been around for about 7 years now, it evolved from a set of very limited tools which would get you something only marginally better than a chroot, all the way to the stable set of tools, stable library and active user and development community that we have today.

Over those years, a lot of extra security features were added to the Linux kernel and LXC grew support for all of them. As we saw the need for people to build their own solution on top of LXC, we’ve developed a public API and a set of bindings. And last year, we’ve put out our first long term support release which has been a great success so far.

That being said, for a while now, we’ve been wanting to do a few big changes:

  • Make LXC secure by default (rather than it being optional).
  • Completely rework the tools to make them simpler and less confusing to newcomers.
  • Rely on container images rather than using “templates” to build them locally.
  • Proper checkpoint/restore support (live migration).

Unfortunately, solving any of those means doing very drastic changes to LXC which would likely break our existing users or at least force them to rethink the way they do things.

Instead, LXD is our opportunity to start fresh. We’re keeping LXC as the great low-level container manager that it is, and building LXD on top of it, using LXC’s API to do all the low-level work. That achieves the best of both worlds: we keep our low-level container manager with its API and bindings, but skip its tools and templates, replacing them with the new experience that LXD provides.

How does LXD relate to LXC, Docker, Rocket and other container projects?

LXD is currently based on top of LXC. It uses the stable LXC API to do all the container management behind the scene, adding the REST API on top and providing a much simpler, more consistent user experience.

The focus of LXD is on system containers. That is, a container which runs a clean copy of a Linux distribution or a full appliance. From a design perspective, LXD doesn’t care about what’s running in the container.

That’s very different from Docker or Rocket, which are application container managers (as opposed to system container managers): they focus on distributing apps as containers and therefore very much care about what runs inside the container.

There is absolutely nothing wrong with using LXD to run a bunch of full system containers which then run Docker or Rocket inside of them to run their different applications. In that setup, LXD manages the host resources for you and applies all the security restrictions needed to make the container safe, while you use whatever application distribution mechanism you want inside.

Getting started with LXD

The simplest way for somebody to try LXD is by using it with its command line tool. This can easily be done on your laptop or desktop machine.

On an Ubuntu 15.04 system (or by using ppa:ubuntu-lxc/lxd-stable on 14.04 or above), you can install LXD with:

sudo apt-get install lxd

Then either logout and login again to get your group membership refreshed, or use:

newgrp lxd

From that point on, you can interact with your newly installed LXD daemon.
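
A quick way to check that the client can reach the newly installed daemon is to ask it for its (initially empty) container list:

lxc list    # should print an empty table on a fresh install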

The “lxc” command line tool lets you interact with one or multiple LXD daemons. By default it will interact with the local daemon, but you can easily add more of them.

As an easy way to start experimenting with remote servers, you can add our public LXD server at https://images.linuxcontainers.org:8443
That server is an image-only read-only server, so all you can do with it is list images, copy images from it or start containers from it.

You’ll have to do the following to add the server, list all of its images and then start a container from one of them:

lxc remote add images images.linuxcontainers.org
lxc image list images:
lxc launch images:ubuntu/trusty/i386 ubuntu-32

The above defines a new “remote” called “images” which points to images.linuxcontainers.org, then lists all of its images and finally starts a local container called “ubuntu-32” from the ubuntu/trusty/i386 image. The image will automatically be cached locally so that future containers are started instantly.

The “<remote name>:” syntax is used throughout the lxc client. When not specified, the default “local” remote is assumed. Should you only care about managing a remote server, the default remote can be changed with “lxc remote set-default”.
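
For example, to inspect and change which daemon the client talks to by default, something along these lines should work (the “lxc remote list” subcommand is an assumption here; “set-default” is mentioned above):

# Show the remotes known to this client, including "local" and the
# "images" remote added earlier (assumed subcommand)
lxc remote list

# Make "images" the default remote, then switch back to the local daemon
lxc remote set-default images
lxc remote set-default local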

Now that you have a running container, you can check its status and IP information with:

lxc list

Or get even more details with:

lxc info ubuntu-32

To get a shell inside the container, or to run any other command that you want, you may do:

lxc exec ubuntu-32 /bin/bash

And you can also directly pull or push files from/to the container with:

lxc file pull ubuntu-32/path/to/file .
lxc file push /path/to/file ubuntu-32/

When done, you can stop or delete your container with one of those:

lxc stop ubuntu-32
lxc delete ubuntu-32

What’s next?

The above should be a reasonably comprehensive guide to using LXD on a single system. Of course, that’s not the most interesting thing to do with LXD. All the commands shown above can work against multiple hosts: containers can be remotely created, moved around, copied, …

LXD also supports live migration, snapshots, configuration profiles, device pass-through and more.
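
As a small taste of those features, the sketch below snapshots the test container and copies it to a second LXD host. The remote name “host2” is hypothetical (assumed to have been added beforehand with “lxc remote add”), and the exact behaviour of snapshot, copy and move may differ in this early release:

# Take a snapshot of the container created earlier
lxc snapshot ubuntu-32 snap0

# Copy the container to another LXD host previously added as "host2"
lxc copy ubuntu-32 host2:ubuntu-32-copy

# Or move it there entirely
lxc move ubuntu-32 host2:ubuntu-32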

I intend to write some more posts to cover those use cases and features as well as highlight some of the work we’re currently busy doing.

LXD is a pretty young but very active project. We’ve had great contributions from existing LXC developers as well as newcomers.

The project is entirely developed in the open at https://github.com/lxc/lxd. We keep track of upcoming features and improvements through the project’s issue tracker, so it’s easy to see what will be coming soon. We also have a set of issues marked “Easy” which are meant for new contributors as easy ways to get to know the LXD code and contribute to the project.

LXD is an Apache2 licensed project, written in Go, which doesn’t require a CLA to contribute to (we do however require the standard DCO Signed-off-by). It can be built with both golang and gccgo and so works on almost all architectures.

Extra resources

More information can be found on the official LXD website:
https://linuxcontainers.org/lxd

The code, issues and pull requests can all be found on Github:
https://github.com/lxc/lxd

And a good overview of the LXD design and its API may be found in our specs:
https://github.com/lxc/lxd/tree/master/specs

Conclusion

LXD is a new and exciting project. It’s an amazing opportunity to think fresh about system containers and provide the best user experience possible, alongside great features and rock solid security.

With 7 releases and close to a thousand commits by 20 contributors, it’s a very active, fast paced project. Lots of things still remain to be implemented before we get to our 1.0 milestone release in early 2016 but looking at what was achieved in just 5 months, I’m confident we’ll have an incredible LXD in another 12 months!

For now, we’d welcome your feedback, so install LXD, play around with it, file bugs and let us know what’s important for you next.

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150421 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

Our Vivid kernel remains based on the upstream v3.19.3 stable kernel.
We do not intend any additional uploads before release this Thurs. We
have started to queue the v3.19.4 and v3.19.5 stable patches for our
first Vivid kernel SRU.
—–
Important upcoming dates:
Thurs Apr 23 – 15.04 Release (2 days away!)


Status: CVE’s

The current CVE status can be reviewed at the following link:
http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, until today:

  • Lucid – None (no update)
  • Precise – Testing & Verification
  • Trusty – Testing & Verification
  • Utopic – Testing & Verification

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    Current cycle: 10-Apr through 02-May
    ====================================================================
    10-Apr Last day for kernel commits for this cycle
    12-Apr – 18-Apr Kernel prep week.
    19-Apr – 02-May Bug verification; Regression testing; Release
    ** NOTE: Support for Lucid ends on April 30.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Read more
Prakash

The best phone money can buy, the OnePlus One, is now available without an invitation.

If you are in the market for a new phone, here is your chance.


Read more
Nicholas Skaggs

It's Show and Tell Time!

Err, show and tell?
Who remembers their first years of schooling? At least for me growing up in the US, those first years involved an activity called 'Show and Tell'. We were instructed to bring something in from home and talk about it. This could be a picture or souvenir from a trip or unique life event, something we made, another person who does interesting things, or just something we found really interesting. It was a way for us to learn more about each other in the classroom, as well as share cool things with each other.


Online Summit
Ok, snapping you back to reality, it's nearing time for UOS 15.05. UOS is the Ubuntu Online Summit we hold each cycle to talk about what's happening in ubuntu. UOS 15.05 will be on May 5th - May 7th.

So what does the childhood version of me reminiscing about show and tell have to do with UOS? Well, I'm glad you asked! There is a 'Show and Tell' track available to everyone as a platform for sharing interesting and unique things with the rest of the community. These sessions can be very short (5 or 10 minutes) and are a great way to share your work within ubuntu.

With that in mind, it's a perfect opportunity for you to participate in 'Show and Tell' with the rest of the community. I encourage you to propose a session on the 'Show and Tell' track. This track exists for things like demos, quick talks, and 'show and tell' type things. It's perfect to spend 5 or 10 minutes talking about something you made or work on. Or perhaps something you find interesting. Or just a way to share a little about the team you work with or a project you've done. For those of you who may have been a part of the 'lightning talks' during the days of the physical UDS, anything that would have been considered a lightning talk is more than welcome in this track.

Cool, where do I sign up?
Proposing a session is simple to do, and there's even a webpage to help! If you really get stuck, feel free to contact me, Svetlana Belkin, Marco Ceppi, or Allan Lesage, your friendly track leads for this track. Once it's proposed, the session will be assigned a date and time. I or another track lead will follow up with you before UOS to ensure you are ready and that the date and time are suitable for you.

Is there another way to participate?
Yes! Remember to check out the show and tell sessions and participate by asking questions and enjoying the presentations. I guarantee you will learn something new. Maybe even useful!


Thanks for helping make UOS a success. I'll see you there!


Read more
Sergio Schvezov

Updates to snappy and ubuntu-device-flash

The past few weeks in the snappy world have been a revolt, or better said, a rapid evolution bringing it closer to what we wanted it to be.

Some things have changed. If you are tracking the bleeding edge you will notice a couple of them: the store, for example, now exposes packages depending on the OS release, and system images are now built against an OS release as well. For core images we have two choices:

  • 15.04
  • rolling

15.04 will be nicely locked down and guarantee stability, while rolling will just roll on and you will see it stumble now and then (although it shouldn’t break badly; APIs are what we will try and aspire to keep in the breaking zone). Try is a strong word, which is why channels are being used; the core images have the concept of a channel, which can be:

  • stable
  • rc
  • beta
  • alpha
  • edge

As of this writing, we are supporting edge and alpha for each OS release, and as soon as we release we will have a stable channel enabled. Store support for channels is coming to a future near you, which means that eventually packages will be able to track different channels.

Another addition is a new snap type called oem. This snappy package allows OEMs to enable devices and boards with a degree of customization, such as:

  • preinstalled unremovable or removable packages
  • default configurations for preinstalled packages and ubuntu-core
  • lock down configurations
  • custom DTBs
  • boot files (e.g.; u-boot, uEnv.txt)

This package, uploaded to the store, allows people to create custom enablements to support their product stories. This package’s capabilities can grow in the future to support some other niceties.

If you happen to use the development PPA for snappy, ppa:snappy-dev/tools, you should be seeing a new ubuntu-device-flash in the updates, which supports most of this syntax and retires the early enablement work.

So, in order to create a default image for the BeagleBone Black, you would do:

sudo ubuntu-device-flash core 15.04 --channel edge --oem beagleblack --output bbb.img

To create a generic amd64 image:

sudo ubuntu-device-flash core 15.04 --channel edge --output x86.img

15.04 could be replaced with rolling, and today the default channel is edge, but it will be stable as soon as we have something in there :-)
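
For example, an amd64 image built against the rolling release on the edge channel would be created like this (same flags as above, only the release name changes):

sudo ubuntu-device-flash core rolling --channel edge --output rolling.img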

Keep in mind that 15.04 and rolling will return different store search results, depending on what the developer has targeted.

Installing local oem snaps by passing in --oem requires you to also set --developer-mode if the package is not signed by the store.
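
As a sketch of that case, installing a locally built, unsigned oem snap into an image might look like the following; the snap filename is hypothetical:

# Local, unsigned oem snap: --developer-mode is needed because the
# package is not signed by the store (filename is illustrative)
sudo ubuntu-device-flash core 15.04 --channel edge --oem ./myboard-oem_0.1_all.snap --developer-mode --output myboard.img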

Last but not least, the flashassets entry from the device tarballs used to enable new devices is now ignored in favor of the information from the oem snappy package. This means that if you have a port, you will need to move it over to the oem packaging.

Read more