Canonical Voices

liam zheng

Ubuntu phone OTA-10 has been released. Compared with OTA-9, this update not only improves the UI and fixes known bugs, it also brings a number of new features and adds more pre-installed apps.

The Ubuntu phone works quite differently from the familiar iOS and Android systems: it is driven entirely by swipe gestures and needs no hardware buttons. First-time users therefore need a short learning period to get used to the interface, so OTA-10 redesigns the first-boot tutorial to make getting started easier. Although the interaction model differs greatly from that of traditional smartphones, in practice it is more logical, and new users pick it up very quickly.

This is the tenth system update since the Ubuntu phone went on sale last year. The OTA-10 update brings the following major new features.

 

Three new pre-installed apps

OTA-10 adds three pre-installed apps: email (Dekko), calendar, and navigation (uNav). These are apps that get heavy daily use at work and in everyday life, and they no longer need to be installed separately.

 

New VPN support

Smartphones have long since moved beyond just making calls and sending text messages; Internet use keeps getting more frequent and more important, especially for users with particular network requirements. VPNs were previously available on the Ubuntu desktop, and with OTA-10 the same VPN service can now be used on the phone. Usage is the same as on the desktop: go to Settings → Network and switch it on.

 

Browser: optimized for desktop convergence mode

Speaking of the network, another OTA-10 improvement is to the default browser app. In touch mode on a phone or tablet, a long press now brings up a selection handle so web content can be copied and pasted. In desktop convergence mode, a web link opened from the terminal opens as a new tab in the running browser instead of launching another browser instance. On top of that, when a mouse is connected to the phone or tablet, the hint bar at the bottom of the browser turns into a clickable sidebar. Memory usage has also been optimized.

 

Richer web app capabilities

Ubuntu phones support three kinds of applications: native apps, Scopes, and web apps. For web apps, now commonly written in HTML5, OTA-10 adds system-level access to the camera (enabled via the webrtc video parameter), microphone, vibration, accelerometer and other interfaces, so web apps become more capable and nicer to use. SoundCloud users, for example, can sign in to their account in the web app, view artist information, read and post track comments, get curated playlists, and enjoy their favourite music.

 

Chinese input method

In OTA-10 the Pinyin input method has been updated to libpinyin7, further improving the Chinese typing experience. If you like Japanese, this update also adds support for a Japanese keyboard layout.

 

Desktop convergence mode

OTA-10 fixes known bugs in convergence mode and offers more settings, for example new display settings, selectable input methods, and an on/off switch for desktop convergence mode.

Other changes

System updates can now be downloaded over the phone's 3G/4G connection as well as over Wi-Fi. Recognition of, and support for, external microphones and volume controls has also been improved.

That sums up the Ubuntu phone OTA-10 update. If you have not updated yet, open Settings → Software update on your phone. The detailed changelog is here: link

 

 

Read more
Stéphane Graber

LXD logo

Introduction

Today I’m very pleased to announce the release of LXC 2.0, our second Long Term Support Release! LXC 2.0 is the result of a year of work by the LXC community with over 700 commits done by over 90 contributors!

It joins LXCFS 2.0 which was released last week and will very soon be joined by LXD 2.0 to complete our collection of 2.0 container management tools!

What’s new?

The complete changelog is linked below but the main highlights for me are:

  • More consistent user experience between the various LXC tools.
  • Improved checkpoint/restore support.
  • Complete rework of our CGroup handling code, including support for the CGroup namespace.
  • Cleaned up storage backend subsystem, including the addition of a new Ceph RBD backend.
  • A massive amount of bugfixes.
  • And lastly, we managed to get all that done without breaking our API, so LXC 2.0 is fully API compatible with LXC 1.0.

The focus with this release was stability and maintaining support for all the environments in which LXC shines. We still support all kernels from 2.6.32 though the exact feature set does obviously vary based on kernel features. We also improved support for a bunch of architectures and fixed a lot of bugs and other rough edges.

This is the release you want to run in production for the next few years!

Support length

As mentioned, LXC 2.0 is a Long Term Support release.
This is the second time we have done such a release, the first being LXC 1.0.

Long Term Support releases come with a 5-year commitment from upstream to provide bugfixes and security updates, and to publish new point releases when enough fixes have accumulated.

The end of life dates for the various LXC versions are as follows:

  • LXC 1.0, released February 2014, will EOL on the 1st of June 2019
  • LXC 1.1, released February 2015, will EOL on the 1st of September 2016
  • LXC 2.0, released April 2016, will EOL on the 1st of June 2021

We therefore very strongly recommend that LXC 1.1 users update to LXC 2.0, as we will not be supporting that release for much longer.

We also recommend production deployments stick to our Long Term Support release.

Project information

Upstream website: https://linuxcontainers.org/lxc/
Release announcement: https://linuxcontainers.org/lxc/news/
Code: https://github.com/lxc/lxc
IRC channel: #lxcontainers on irc.freenode.net
Mailing-lists: https://lists.linuxcontainers.org

Try it online

Want to see what a container with LXC 2.0 installed feels like?
You can get one online to play with here.

Read more
Prakash

From a numbers standpoint, Google is actually a distant fourth in the $23 billion cloud infrastructure services market, according to Synergy Research Group. AWS ranks first with 31 percent, followed by Microsoft Azure at 9 percent, IBM at 7 percent and Google Cloud Platform at 4 percent, Synergy data show. That means of Google parent Alphabet’s $75 billion in revenue, less than $1 billion came from cloud infrastructure.
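For reference, the arithmetic behind that last figure: 4 percent of the $23 billion market works out to roughly 0.04 × $23B ≈ $0.9B, which is indeed less than $1 billion of Alphabet's $75 billion in revenue.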

Read More: http://www.cnbc.com/2016/03/23/google-aims-to-catch-amazon-microsoft-in-cloud.html

Read more
Dustin Kirkland

Still have questions about Ubuntu on Windows?
Watch this Channel 9 session, recorded live at Build this week, hosted by Scott Hanselman, with questions answered by Windows kernel developers Russ Alexander, Ben Hillis, and myself representing Canonical and Ubuntu!

For fun, watch the crowd develop in the background over the 30 minute session!

And here's another recorded session with a demo by Rich Turner and Russ Alexander.  The real light bulb goes off at about 8:01.


Cheers,
:-Dustin

Read more
UbuntuTouch

CrossFadeImage produces a nice transition effect whenever we change its source. Apart from that, it behaves just like the ordinary QML Image element we normally use.

We can reuse an earlier example to show how to create an animated effect with this API. First have a look at my earlier article "Using SwipeArea to recognize swipe gestures"; we simply replace the Image in it with the CrossFadeImage we need. The complete code is as follows:


import QtQuick 2.4
import Ubuntu.Components 1.3

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "swipearea.liu-xiao-guo"

    width: units.gu(60)
    height: units.gu(85)

    property int index: 1

    Page {
        title: "SwipeArea sample"

        CrossFadeImage {
            id: img
            anchors.fill: parent
            source: "images/image1.jpg"
            fadeDuration: 2000
            fadeStyle: "cross"
        }

        SwipeArea {
            id: swipeleft
            anchors {
                left: parent.left
                right: parent.right
                bottom: parent.bottom
                top: parent.top
            }

            // SwipeArea.Rightwards
            direction:  SwipeArea.Leftwards

            onDraggingChanged: {
                console.log("dragging: " + dragging)

                if ( dragging ) {
                    index ++;

                    if ( index >= 5) {
                        index = 5
                    }

                    var image = "images/image" + index + ".jpg"
                    console.log("image source: " + image)
                    img.source = image
                }
            }
        }

        SwipeArea {
            id: swiperight
            anchors {
                left: parent.left
                right: parent.right
                bottom: parent.bottom
                top: parent.top
            }

            // SwipeArea.Rightwards
            direction: SwipeArea.Rightwards

            onDraggingChanged: {
                console.log("dragging1: " + dragging)

                if ( dragging ) {
                    index--

                    if ( index <= 1 ) {
                        index = 1
                    }

                    var image = "images/image" + index + ".jpg"
                    console.log("image source1: " + image)
                    img.source = image
                }
            }
        }
    }
}

Running our application:


  


When we change the source of the CrossFadeImage we can see the animation effect we wanted. We can also modify the fadeStyle property to get a different effect; the picture below shows fadeStyle set to "overlay".




The full source code of the project is at: https://github.com/liu-xiao-guo/crossfadeimage



Author: UbuntuTouch, published 2016/3/15 8:05:48

Read more
UbuntuTouch

[Original] Using SwipeArea to recognize swipe gestures

Ubuntu.Components 1.3 adds a new API called SwipeArea, which we can use to recognize swipe gestures. For applications that need gesture-based interaction this is clearly an essential interface.


Let's start by looking at a simple piece of code:

Main.qml


import QtQuick 2.4
import Ubuntu.Components 1.3

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "swipearea.liu-xiao-guo"

    width: units.gu(60)
    height: units.gu(85)

    property int index: 1

    Page {
        title: "SwipeArea sample"

        Image {
            id: img
            anchors.fill: parent
            source: "images/image1.jpg"
        }

        SwipeArea {
            id: swipeleft
            anchors {
                left: parent.left
                right: parent.right
                bottom: parent.bottom
                top: parent.top
            }

            // SwipeArea.Rightwards
            direction:  SwipeArea.Leftwards

            onDraggingChanged: {
                console.log("dragging: " + dragging)

                if ( dragging ) {
                    index ++;

                    if ( index >= 5) {
                        index = 5
                    }

                    var image = "images/image" + index + ".jpg"
                    console.log("image source: " + image)
                    img.source = image
                }
            }
        }

        SwipeArea {
            id: swiperight
            anchors {
                left: parent.left
                right: parent.right
                bottom: parent.bottom
                top: parent.top
            }

            // SwipeArea.Rightwards
            direction: SwipeArea.Rightwards

            onDraggingChanged: {
                console.log("dragging1: " + dragging)

                if ( dragging ) {
                    index--

                    if ( index <= 1 ) {
                        index = 1
                    }

                    var image = "images/image" + index + ".jpg"
                    console.log("image source1: " + image)
                    img.source = image
                }
            }
        }
    }
}



In the application above we switch between photos using two gesture recognizers, one for leftward swipes and one for rightward swipes. The whole design is very simple and clear.

 

The full source code of the project is at: https://github.com/liu-xiao-guo/swipeare


Author: UbuntuTouch, published 2016/3/10 15:01:41

Read more
UbuntuTouch

On many other platforms we can use a ComboButton-style control to offer a drop-down list of options. Ubuntu.Components 1.3 provides something similar, although in an earlier example I also implemented a ComboBox of my own.


Let's first look at a simple example:


Main.qml


import QtQuick 2.4
import Ubuntu.Components 1.3
import Ubuntu.Components.ListItems 1.3

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "combobutton.liu-xiao-guo"

    /*
     This property enables the application to change orientation
     when the device is rotated. The default is false.
    */
    //automaticOrientation: true


    width: units.gu(60)
    height: units.gu(85)

    Page {
        title: i18n.tr("combobutton")

        Column {
            anchors.fill: parent
            spacing: units.gu(2)

            ComboButton {
                text: "smaller content"
                Rectangle {
                    height: units.gu(5) // smaller than the default expandedHeight
                    color: "blue"
                }
            }

            ComboButton {
                id: combo
                text: "long scrolled content"

                ListView {
                    model: 10
                    delegate: Standard {
                        text: "Item #" + modelData

                        onClicked: {
                            console.log("item: " + index + " clicked")
                            combo.expanded = false;
                        }

                    }
                }
            }
        }
    }
}

Running our example:

  
As shown above, we can pick the option we need from a drop-down list. Of course, we can also update the text of the ComboButton accordingly.


Author: UbuntuTouch, published 2016/3/14 15:29:21

Read more
UbuntuTouch

When our window cannot show the whole of a large content area, we want a scrollbar so that we can conveniently move the viewport and look at different parts of the content. On touch devices we can add a Flickable to our design and move the viewport by touch, but on devices without a touch screen we can only move the viewport with a scrollbar or with arrow keys. In today's example we use the ScrollView provided by the Ubuntu SDK to achieve exactly that.


As you know, Ubuntu is currently moving towards convergence. In other words, in the future the same application should run unmodified on devices with different screen sizes, whether phone, tablet or desktop PC, and we need to keep this firmly in mind when designing. For existing applications that use ScrollBar directly, the advice is to migrate to the newer ScrollView so that the final application can be deployed to all devices.


Let's first have a look at our example:


Main.qml

import QtQuick 2.4
import Ubuntu.Components 1.3

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "scrollview.liu-xiao-guo"

    width: units.gu(60)
    height: units.gu(85)

    Page {
        title: i18n.tr("scrollview")

        ScrollView {
            id: scrollview
            anchors.fill: parent

            Image {
                source: "images/bigpic.jpg"
                width: sourceSize.width
                height: sourceSize.height
            }

           Component.onCompleted: {
               var keys = Object.keys(scrollview);
               for( var i = 0; i < keys.length; i++ ) {
                   var key = keys[ i ];
                   var data = key + ' : ' + scrollview[ key ];
                   console.log(data )
               }
           }
        }
    }
}


As the code above shows, we use a ScrollView that contains an image much larger than the ScrollView's viewport. The size of the image is taken from the Image's sourceSize. The ScrollView is what lets us see the whole picture. Running our application:

    

As the screenshots show, when the image is wider than our viewport a scrollbar is displayed that we can drag.
From the debug output we can see:

qml: contentItem : QQuickImage(0x121a828)
qml: flickableItem : QQuickFlickable(0x121a330)

ScrollView created a QQuickFlickable element for us to use; QQuickFlickable corresponds to the Flickable element in QML.


Next, let's create another project:

Main.qml


import QtQuick 2.4
import Ubuntu.Components 1.3
import Ubuntu.Components.ListItems 1.3

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "scrollview1.liu-xiao-guo"

    width: units.gu(60)
    height: units.gu(85)

    Page {
        title: i18n.tr("scrollview1")

        ScrollView {
            id: scrollview
            anchors.fill: parent

            ListView {
                model: 100
                delegate: Standard {
                    Label {
                        text: "Item + " + index
                    }
                }
            }

            Component.onCompleted: {
                var keys = Object.keys(scrollview);
                for( var i = 0; i < keys.length; i++ ) {
                    var key = keys[ i ];
                    var data = key + ' : ' + scrollview[ key ];
                    console.log(data )
                }
            }
        }
    }
}

Here we use a ListView. Running our application:

 


As shown above, when the content of the ListView is larger than the actual window, a vertical scrollbar appears at the edge of the viewport so that we can scroll it.
From the debug output we can see:

qml: contentItem : QQuickListView(0xba2c30)
qml: flickableItem : QQuickListView(0xba2c30)

This time ScrollView did not create a new Flickable for us; it reused the Flickable provided by the existing ListView (which itself derives from Flickable).

Author: UbuntuTouch, published 2016/3/15 11:27:34

Read more
UbuntuTouch

I'm sure many of you are interested in how convergence is evolving on the Ubuntu platform. I was honoured to attend MWC 2016 in Barcelona and to see our company's latest technology demonstrations.


Ubuntu Unity 8 and Convergence demo

http://v.youku.com/v_show/id_XMTUwMDU2OTg2OA==.html



The evolution of convergence on the Ubuntu platform


http://v.youku.com/v_show/id_XMTUwMDU2NzA2MA==.html


Author: UbuntuTouch, published 2016/3/15 12:08:47

Read more
UbuntuTouch

[Original] Which colours are in UbuntuColors?

When designing an Ubuntu application, if we want it to use the colours that best match Ubuntu, we should take UbuntuColors as our design reference. In today's exercise we display exactly which colours the Ubuntu system defines.


I designed a simple application to display all of the Ubuntu colours.

Main.qml

import QtQuick 2.4
import Ubuntu.Components 1.3

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "ubuntucolors.liu-xiao-guo"

    width: units.gu(60)
    height: units.gu(85)

    Page {
        title: i18n.tr("ubuntucolors")

        ListModel {
            id: mymodel;
        }

        ListView {
            anchors.fill: parent
            model: mymodel
            delegate: ListItem {
                Rectangle {
                    anchors.fill: parent
                    color: value
                }
                Label {
                    anchors.centerIn: parent
                    text: name
                    fontSize: "large"
                    color: "white"
                }
            }
        }

        Component.onCompleted: {
            var keys = Object.keys(UbuntuColors);
            for(var i = 0; i < keys.length; i++) {
                var key = keys[i];
                // prints all properties, signals, functions from object
                var type = typeof UbuntuColors[key];
                if ( type !== 'function' &&
                     key.indexOf("Gradient") === -1 &&
                     key !== "objectName") {
                    //                    console.log("type: " + type)
                    console.log(key + ' : ' + UbuntuColors[key]);
                    var color = "" + UbuntuColors[key];
                    console.log("color: " + color)
                    mymodel.append({"name": key, "value": color});
                }
            }
        }
    }
}



Running our application:



In the application above we can see every colour defined in UbuntuColors, together with how it looks.
Author: UbuntuTouch, published 2016/3/15 15:48:50

Read more
UbuntuTouch

When working with the Ubuntu toolkit we always want to know which version of the toolkit our phone has, so that we know which APIs are available to us and which are not. On the API site, many APIs carry a description like the following:



It indicates the Ubuntu.Components version from which that API is supported. So how can we find out which Ubuntu.Components version our phone has?


Method 1:


On the phone we can look for it from the command line, for example:

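A minimal sketch of the kind of command used, assuming a terminal or adb shell on the device:

ls /usr/lib/arm-linux-gnueabihf/qt5/qml/Ubuntu/Components/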


Through this command we can see that the following directory exists on the phone:

/usr/lib/arm-linux-gnueabihf/qt5/qml/Ubuntu/Components/1.3

This shows that version "1.3" is the one supported on our phone.

Method 2


Obtain it through the Ubuntu API:

Main.qml


import QtQuick 2.4
import Ubuntu.Components 1.3

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "ubuntutoolkitversion.liu-xiao-guo"

    width: units.gu(60)
    height: units.gu(85)

    Page {
        title: i18n.tr("ubuntutoolkitversion")

        Column {
            anchors.centerIn: parent
            Label { text: "Toolkit ver: " + Ubuntu.toolkitVersion  }
            Label { text: "Major: " + Ubuntu.toolkitVersionMajor  }
            Label { text: "Minor: " + Ubuntu.toolkitVersionMinor  }
            Label { text: "Toolkit ver: " + Ubuntu.version(Ubuntu.toolkitVersionMajor,
                                         Ubuntu.toolkitVersionMinor)  }
        }
    }
}

As the API documentation states, this API has been supported since Ubuntu.Components 1.2. Running the application:



From the output above we can see that the Ubuntu toolkit version on the phone is 1.3.


Author: UbuntuTouch, published 2016/3/16 8:30:21

Read more
UbuntuTouch

We can use the Expandable item from Ubuntu.Components.ListItems to create an expandable list. This is very useful in some list-based applications, because it lets us show more content. In the earlier example "How to design an expandable ListView in QML" I had a similar design; developers can also refer to that example for their own work.


Let's look at a simple example:


Main.qml


import QtQuick 2.4
import Ubuntu.Components 1.3
import Ubuntu.Components.ListItems 1.3 as ListItem

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "expandablescolumn.liu-xiao-guo"

    width: units.gu(60)
    height: units.gu(85)

    Page {
        title: i18n.tr("expandablescolumn")

        ListModel {
            id: mymodel
            ListElement { name: "image1.jpg" }
            ListElement { name: "image2.jpg" }
            ListElement { name: "image3.jpg" }
            ListElement { name: "image4.jpg" }
            ListElement { name: "image5.jpg" }
            ListElement { name: "image6.jpg" }
            ListElement { name: "image7.jpg" }
            ListElement { name: "image8.jpg" }
            ListElement { name: "image9.jpg" }
            ListElement { name: "image10.jpg" }
            ListElement { name: "image11.jpg" }
        }

        ListItem.ExpandablesColumn {
            anchors.fill: parent
            Repeater {
                model: mymodel
                ListItem.Expandable {
                    id: exp
                    expandedHeight: units.gu(30)
                    collapsedHeight: units.gu(12)

                    Image {
                        height: exp.height
                        width: height
                        source: "images/" + name
                    }

                    Label {
                        anchors.horizontalCenter: parent.horizontalCenter
                        text: index
                    }

                    onClicked: {
                        expanded = true;
                    }
                }
            }
        }
    }
}

Here we use an ExpandablesColumn to build a ListView-like list. In our design we could also use UbuntuListView instead; see the UbuntuListView documentation for the corresponding code.

As the ListItem.Expandable above shows, we can treat it as a container inside which we can draw whatever UI we like. When it is clicked, we expand it like this:

                    onClicked: {
                        expanded = true;
                    }

to open the item. Alternatively, we can follow the approach used with UbuntuListView:

        delegate: ListItem.Expandable {
            id: expandingItem
            expandedHeight: units.gu(30)
            onClicked: {
                ubuntuListView.expandedIndex = index;
            }
        }

In that design, we set the UbuntuListView's expandedIndex to the current index in order to expand the current item.

Running our example:

   

When we open an item in the list, it is expanded automatically. We can of course add whatever animation effects we like. Clicking anywhere else returns the item to its previous state.

Author: UbuntuTouch, published 2016/3/16 15:52:20

Read more
Stéphane Graber

LXD logo

What’s LXCFS?

LXCFS is a side project of LXC and LXD. It's basically a tiny FUSE filesystem which gets mounted in your containers and masks a number of proc files.

At present, it supports the following files:

  • /proc/cpuinfo
    Only returns the CPUs listed in your cpuset
  • /proc/diskstats
    Returns I/O usage from the container
  • /proc/meminfo
    Only shows the amount of memory and SWAP the container can use
  • /proc/stat
    Related to cpuinfo, only lists the right CPUs
  • /proc/swaps
    Related to meminfo, only shows your container’s swap consumption
  • /proc/uptime
    Shows the container uptime instead of the host’s

It’s basically a userspace workaround to changes which were deemed unreasonable to do in the kernel. It makes containers feel much more like separate systems than they would without it.
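For example, a quick way to see the effect is to compare the host's uptime with what a container reports (a small sketch, assuming an LXD-managed container named c1 with lxcfs mounted):

# On the host: time since the host booted
cat /proc/uptime
# Inside the container: time since the container started, thanks to lxcfs
lxc exec c1 -- cat /proc/uptime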

On top of the proc virtualization feature, lxcfs also supports rendering a partial cgroupfs view which can then be mounted into a container on top of /sys/fs/cgroup, allowing processes in the container to interact with the cgroups in a safe way.

This part is only enabled on kernels that do not support the cgroup namespace, as newer kernels (4.6 upstream, 4.4 Ubuntu) no longer need this.

Why do I need it?

lxcfs isn’t absolutely needed to run LXC or LXD containers.

That being said, you will want it if:

  • You want proper resource consumption reporting inside your container
  • You need to start a systemd based container on a system running a kernel older than 4.6 upstream (or 4.4 Ubuntu)

LXD in Ubuntu actually depends on LXCFS as we think it’s a critical part of offering a good container experience on Ubuntu.

How to get it?

LXCFS is available in quite a few distributions, so chances are you can just grab it with your package manager. It may take a few days/weeks for 2.0 to be available though.

Ubuntu users have had lxcfs available for a few years now and the 2.0 release is now in the Ubuntu development release. Up to date packages for all Ubuntu releases can also be found in our PPAs.

What kind of support will this get?

LXCFS 2.0 is a long term support release. That means that upstream LXCFS will be pushing out bugfix and security releases for the next 5 years.

A separate stable branch will be set up upstream and bugfixes will be cherry-picked into it; when enough fixes have accumulated, a bugfix release (like 2.0.1) will be published.

Project information

Upstream website: https://linuxcontainers.org/lxcfs/
Release announcement: https://linuxcontainers.org/lxcfs/news/
Code: https://github.com/lxc/lxcfs
IRC channel: #lxcontainers on irc.freenode.net
Mailing-lists: https://lists.linuxcontainers.org

Try it online

Want to see what a container with LXCFS installed feels like?
You can get one online to play with here.

Read more
facundo

PyCamp 2016


Over the long Easter weekend we held the 2016 edition of PyCamp, which for me is the best event of the year.

It took place again in La Serranita, at a very pretty and comfortable venue, the Complejo Soles Blancos. Unlike last year, when it was held in August, this time the nights were only fairly cool :). The afternoons were pleasantly warm, and the nights and early mornings were chilly, ideal for a stroll or for sleeping!

As last time, I did Buenos Aires - Córdoba (Capital) by bus, and from there to La Serranita by car (the same on the way back). In fact I drove the outbound leg (because Pancho was worn out), and he drove the return, so I got to enjoy the scenery more.

View from up high

Like every PyCamp, this one was split between Python proper and other activities. Let's start with the programming itself.

The longest project I took part in was a Tower Defense: the typical little game where you place towers that attack a stream of incoming enemies, and depending on how skilfully you choose which towers to place and where, you defend yourself better or worse. The idea was not only to design and build the game, but also to create an artificial intelligence that would learn where to place the towers.

Almost everyone signed up for this, so it was the first thing we started on. The most interesting part was the organization. We quickly separated the "core" from the "AI"; one group stayed upstairs and the rest of us went down to the room below. I don't really know what the AI people did upstairs, but downstairs we built the basic structure of the core together, split into small groups, and attacked all the code in parallel, discussing the interfaces/APIs as we added or fixed things.

It was great. By the first day we already had about 80% of what we finally achieved, and then we kept on working. The result was a deluxe core, with graphics and everything (we used pyglet), plus an artificial intelligence that efficiently learned where to place the towers. Impeccable.

Screenshot of the TD

Of the projects I brought along, the one people got most involved in was fades. Since Nico and I keep the issues very clearly described and classified, people quickly found something to work on. We landed several fixes and closed many issues, so it advanced quite a bit. Several people also signed up to work on the PyAr website; we made some progress on formatting problems and broken links (links that don't exist, but also links that point to the wrong place inside the wiki). We didn't get that much done; it remains pending for another time. Another group (mainly Matu Varela, Mati Barrientos and Toni) worked on integrating the PyAr site with some Telegram bots, which were originally planned to broadcast information but on top of which they then built moderation workflows for news, events and posted jobs.

I also spent quite a bit of time on other projects, but with fewer people. For Linkode we talked a lot, with Mati Barrientos and Pablo Celayes, about the next plans for the interface. We decided to go for something like a "single page application", though it will barely be one, because linkode's interface is very simple. Even so, the idea is for the "web client" to use the linkode API like any other client. Beyond all the discussion and the decision on how to move forward, Matías will be leading the whole "javascript" side of linkode, writing code himself and reviewing/pushing other people's.

People working

To wrap up everything we did, and PyCamp itself, we made a video! Jose Luis Zanotti still has to edit and put it together, so we can show everything we did in a couple of days...

On another note, there were several activities not directly related to programming in Python.

The most centrally coordinated one was a Tron tournament, which Jose Luis Zanotti won. We had already done something similar at the PyCamp in La Falda, several years ago, and it is remarkable how hooked you get watching the people competing and how they play. There were also sabre lessons one afternoon, and board-game nights. I played Resistance twice, a game in which (even though it comes with tokens and little cards) what matters is the interaction between the players and how everyone tries to convince everyone else that they are not the bad guys.

The star of the "non-programming" activities was the PyAr meeting (thanks Ariel for putting the minutes together). It was really good, both for how many people took part and for how they did. We talked about the next PyCon, about how the creation of the Asociación Civil was coming along, and also about the current PyCamp and what we should keep or improve. After the meeting, a big barbecue, prepared (very well, as always) by the venue's host, Leandro.

At the meeting

All the photos I took are here.

Read more
Stéphane Graber

This is the fifth blog post in this series about LXD 2.0.

LXD logo

Container images

If you’ve used LXC before, you probably remember those LXC “templates”, basically shell scripts that spit out a container filesystem and a bit of configuration.

Most templates generate the filesystem by doing a full distribution bootstrapping on your local machine. This may take quite a while, won’t work for all distributions and may require significant network bandwidth.

Back in LXC 1.0, I wrote a “download” template which would allow users to download pre-packaged container images, generated on a central server from the usual template scripts and then heavily compressed, signed and distributed over https. A lot of our users switched from the old style container generation to using this new, much faster and much more reliable method of creating a container.

With LXD, we’re taking this one step further by being all-in on the image based workflow. All containers are created from an image and we have advanced image caching and pre-loading support in LXD to keep the image store up to date.

Interacting with LXD images

Before digging deeper into the image format, let’s quickly go through what LXD lets you do with those images.

Transparently importing images

All containers are created from an image. The image may have come from a remote image server and have been pulled using its full hash, short hash or an alias, but in the end, every LXD container is created from a local image.

Here are a few examples:

lxc launch ubuntu:14.04 c1
lxc launch ubuntu:75182b1241be475a64e68a518ce853e800e9b50397d2f152816c24f038c94d6e c2
lxc launch ubuntu:75182b1241be c3

All of those refer to the same remote image (at the time of this writing). The first time one of them is run, the remote image is imported into the local LXD image store as a cached image, and then the container is created from it.

The next time one of those commands is run, LXD will only check that the image is still up to date (when not referring to it by its fingerprint); if it is, it will create the container without downloading anything.

Now that the image is cached in the local image store, you can also just launch a container from it without even checking whether it’s up to date:

lxc launch 75182b1241be c4

And lastly, if you have your own local image under the name “my-image”, you can just do:

lxc launch my-image c5

If you want to change some of that automatic caching and expiration behavior, there are instructions in an earlier post in this series.

Manually importing images

Copying from an image server

If you want to copy some remote image into your local image store but not immediately create a container from it, you can use the “lxc image copy” command. It also lets you tweak some of the image flags, for example:

lxc image copy ubuntu:14.04 local:

This simply copies the remote image into the local image store.

If you want to be able to refer to your copy of the image by something easier to remember than its fingerprint, you can add an alias at the time of the copy:

lxc image copy ubuntu:12.04 local: --alias old-ubuntu
lxc launch old-ubuntu c6

And if you would rather just use the aliases that were set on the source server, you can ask LXD to copy them for you:

lxc image copy ubuntu:15.10 local: --copy-aliases
lxc launch 15.10 c7

All of the copies above are one-shot copies, copying the current version of the remote image into the local image store. If you want LXD to keep the image up to date, as it does for the ones stored in its cache, you need to request it with the --auto-update flag:

lxc image copy images:gentoo/current/amd64 local: --alias gentoo --auto-update

Importing a tarball

If someone provides you with an LXD image as a single tarball, you can import it with:

lxc image import <tarball>

If you want to set an alias at import time, you can do it with:

lxc image import <tarball> --alias random-image

Now if you were provided with two tarballs, identify which one contains the LXD metadata. Usually the tarball name gives it away; if not, pick the smaller of the two, as metadata tarballs are tiny. Then import them both together with:

lxc image import <metadata tarball> <rootfs tarball>

Importing from a URL

“lxc image import” also works with some special URLs. If you have an https web server which serves a path with the LXD-Image-URL and LXD-Image-Hash headers set, then LXD will pull that image into its image store.

For example you can do:

lxc image import https://dl.stgraber.org/lxd --alias busybox-amd64

When pulling the image, LXD also sets some headers which the remote server could check to return an appropriate image. Those are LXD-Server-Architectures and LXD-Server-Version.

This is meant as a poor man’s image server. It can be made to work with any static web server and provides a user-friendly way to import your image.

Managing the local image store

Now that we have a bunch of images in our local image store, let’s see what we can do with them. We’ve already covered the most obvious one, creating containers from them, but there are a few more things you can do with the local image store.

Listing images

To get a list of all images in the store, just run “lxc image list”:

stgraber@dakara:~$ lxc image list
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
|     ALIAS     | FINGERPRINT  | PUBLIC |                     DESCRIPTION                      |  ARCH  |   SIZE   |         UPLOAD DATE          |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| alpine-32     | 6d9c131efab3 | yes    | Alpine edge (i386) (20160329_23:52)                  | i686   | 2.50MB   | Mar 30, 2016 at 4:36am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| busybox-amd64 | 74186c79ca2f | no     | Busybox x86_64                                       | x86_64 | 0.79MB   | Mar 30, 2016 at 4:33am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| gentoo        | 1a134c5951e0 | no     | Gentoo current (amd64) (20160329_14:12)              | x86_64 | 232.50MB | Mar 30, 2016 at 4:34am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| my-image      | c9b6e738fae7 | no     | Scientific Linux 6 x86_64 (default) (20160215_02:36) | x86_64 | 625.34MB | Mar 2, 2016 at 4:56am (UTC)  |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| old-ubuntu    | 4d558b08f22f | no     | ubuntu 12.04 LTS amd64 (release) (20160315)          | x86_64 | 155.09MB | Mar 30, 2016 at 4:30am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| w (11 more)   | d3703a994910 | no     | ubuntu 15.10 amd64 (release) (20160315)              | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
|               | 75182b1241be | no     | ubuntu 14.04 LTS amd64 (release) (20160314)          | x86_64 | 118.17MB | Mar 30, 2016 at 4:27am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+

You can filter based on the alias or fingerprint simply by doing:

stgraber@dakara:~$ lxc image list amd64
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
|     ALIAS     | FINGERPRINT  | PUBLIC |               DESCRIPTION               |  ARCH  |   SIZE   |          UPLOAD DATE         |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| busybox-amd64 | 74186c79ca2f | no     | Busybox x86_64                          | x86_64 | 0.79MB   | Mar 30, 2016 at 4:33am (UTC) |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| w (11 more)   | d3703a994910 | no     | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+

Or by specifying a key=value filter of image properties:

stgraber@dakara:~$ lxc image list os=ubuntu
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
|    ALIAS    | FINGERPRINT  | PUBLIC |                  DESCRIPTION                |  ARCH  |   SIZE   |          UPLOAD DATE         |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| old-ubuntu  | 4d558b08f22f | no     | ubuntu 12.04 LTS amd64 (release) (20160315) | x86_64 | 155.09MB | Mar 30, 2016 at 4:30am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no     | ubuntu 15.10 amd64 (release) (20160315)     | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
|             | 75182b1241be | no     | ubuntu 14.04 LTS amd64 (release) (20160314) | x86_64 | 118.17MB | Mar 30, 2016 at 4:27am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+

To see everything LXD knows about a given image, you can use “lxc image info”:

stgraber@castiana:~$ lxc image info ubuntu
Fingerprint: e8a33ec326ae7dd02331bd72f5d22181ba25401480b8e733c247da5950a7d084
Size: 139.43MB
Architecture: i686
Public: no
Timestamps:
 Created: 2016/03/15 00:00 UTC
 Uploaded: 2016/03/16 05:50 UTC
 Expires: 2017/04/26 00:00 UTC
Properties:
 version: 12.04
 aliases: 12.04,p,precise
 architecture: i386
 description: ubuntu 12.04 LTS i386 (release) (20160315)
 label: release
 os: ubuntu
 release: precise
 serial: 20160315
Aliases:
 - ubuntu
Auto update: enabled
Source:
 Server: https://cloud-images.ubuntu.com/releases
 Protocol: simplestreams
 Alias: precise/i386

Editing images

A convenient way to edit image properties and some of the flags is to use:

lxc image edit <alias or fingerprint>

This opens up your default text editor with something like this:

autoupdate: true
properties:
 aliases: 14.04,default,lts,t,trusty
 architecture: amd64
 description: ubuntu 14.04 LTS amd64 (release) (20160314)
 label: release
 os: ubuntu
 release: trusty
 serial: "20160314"
 version: "14.04"
public: false

You can change any property you want, turn auto-update on and off or mark an image as publicly available (more on that later).

Deleting images

Removing an image is a simple matter of running:

lxc image delete <alias or fingerprint>

Note that you don’t have to remove cached entries; those will automatically be removed by LXD after they expire (by default, 10 days after they were last used).

Exporting images

If you want to get image tarballs from images currently in your image store, you can use “lxc image export”, like:

stgraber@dakara:~$ lxc image export old-ubuntu .
Output is in .
stgraber@dakara:~$ ls -lh *.tar.xz
-rw------- 1 stgraber domain admins 656 Mar 30 00:55 meta-ubuntu-12.04-server-cloudimg-amd64-lxd.tar.xz
-rw------- 1 stgraber domain admins 156M Mar 30 00:55 ubuntu-12.04-server-cloudimg-amd64-lxd.tar.xz

Image formats

LXD right now supports two image layouts, unified or split. Both of those are effectively LXD-specific though the latter makes it easier to re-use the filesystem with other container or virtual machine runtimes.

LXD, being solely focused on system containers, doesn’t support any of the application container “standard” image formats out there, nor do we plan to.

Our images are pretty simple: they’re made of a container filesystem, a metadata file describing things like when the image was made, when it expires, what architecture it’s for, … and optionally a bunch of file templates.

See this document for up to date details on the image format.

Unified image (single tarball)

The unified image format is what LXD uses when generating images itself. It is a single big tarball containing the container filesystem inside a “rootfs” directory, with the metadata.yaml file at the root of the tarball and any templates in a “templates” directory.

Any compression (or none at all) can be used for that tarball. The image hash is the sha256 of the resulting compressed tarball.

Split image (two tarballs)

This format is most commonly used by anyone rolling their own images and who already have a compressed filesystem tarball.

They are made of two distinct tarballs. The first contains just the metadata bits that LXD uses: the metadata.yaml file at the root and any templates in the “templates” directory.

The second tarball contains only the container filesystem directly at its root. Most distributions already produce such tarballs as they are common for bootstrapping new machines. This image format allows re-using them unmodified.

Any compression (or none at all) can be used for either tarball, they can absolutely use different compression algorithms. The image hash is the sha256 of the concatenation of the metadata and rootfs tarballs.
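To make the hashing rules above concrete, the fingerprint of each layout could be computed like this (the file names are purely illustrative):

# Unified image: hash of the single compressed tarball
sha256sum image.tar.xz

# Split image: hash of the metadata tarball followed by the rootfs tarball
cat meta.tar.xz rootfs.tar.xz | sha256sum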

Image metadata

A typical metadata.yaml file looks something like:

architecture: "i686"
creation_date: 1458040200
properties:
 architecture: "i686"
 description: "Ubuntu 12.04 LTS server (20160315)"
 os: "ubuntu"
 release: "precise"
templates:
 /var/lib/cloud/seed/nocloud-net/meta-data:
  when:
   - start
  template: cloud-init-meta.tpl
 /var/lib/cloud/seed/nocloud-net/user-data:
  when:
   - start
  template: cloud-init-user.tpl
  properties:
   default: |
    #cloud-config
    {}
 /var/lib/cloud/seed/nocloud-net/vendor-data:
  when:
   - start
  template: cloud-init-vendor.tpl
  properties:
   default: |
    #cloud-config
    {}
 /etc/init/console.override:
  when:
   - create
  template: upstart-override.tpl
 /etc/init/tty1.override:
  when:
   - create
  template: upstart-override.tpl
 /etc/init/tty2.override:
  when:
   - create
  template: upstart-override.tpl
 /etc/init/tty3.override:
  when:
   - create
  template: upstart-override.tpl
 /etc/init/tty4.override:
  when:
   - create
  template: upstart-override.tpl

Properties

The only two mandatory fields are the creation date (UNIX EPOCH) and the architecture. Everything else can be left unset and the image will import fine.

The extra properties are mainly there to help the user figure out what the image is about. The “description” property for example is what’s visible in “lxc image list”. The other properties can be used by the user to search for specific images using key/value search.

Those properties can then be edited by the user through “lxc image edit”; in contrast, the creation date and architecture fields are immutable.

Templates

The template mechanism allows for some files in the container to be generated or re-generated at some point in the container lifecycle.

We use the pongo2 templating engine for those and we export just about everything we know about the container to the template. That way you can have custom images which use user-defined container properties or normal LXD properties to change the content of some specific files.

As you can see in the example above, we’re using those in Ubuntu to seed cloud-init and to turn off some init scripts.

Creating your own images

LXD being focused on running full Linux systems means that we expect most users to just use clean distribution images and not spin their own image.

However there are a few cases where having your own images is useful. Such as having pre-configured images of your production servers or building your own images for a distribution or architecture that we don’t build images for.

Turning a container into an image

The easiest way by far to build an image with LXD is to just turn a container into an image.

This can be done with:

lxc launch ubuntu:14.04 my-container
lxc exec my-container bash
<do whatever change you want>
lxc publish my-container --alias my-new-image

You can even turn a past container snapshot into a new image:

lxc publish my-container/some-snapshot --alias some-image

Manually building an image

Building your own image is also pretty simple.

  1. Generate a container filesystem. This entirely depends on the distribution you’re using. For Ubuntu and Debian, it would be by using debootstrap.
  2. Configure anything that’s needed for the distribution to work properly in a container (if anything is needed).
  3. Make a tarball of that container filesystem, optionally compress it.
  4. Write a new metadata.yaml file based on the one described above.
  5. Create another tarball containing that metadata.yaml file.
  6. Import those two tarballs as a LXD image with:
    lxc image import <metadata tarball> <rootfs tarball> --alias some-name

You will probably need to go through this a few times before everything works, tweaking things here and there, possibly adding some templates and properties.
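Put together, steps 3 to 6 might look roughly like this (paths, file names and the alias are only examples):

# 3. Tarball of the container filesystem
tar -C /path/to/rootfs -czf rootfs.tar.gz .

# 4/5. Metadata tarball containing metadata.yaml (and an optional templates/ directory)
tar -czf metadata.tar.gz metadata.yaml templates/

# 6. Import both tarballs as a split image
lxc image import metadata.tar.gz rootfs.tar.gz --alias some-name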

Publishing your images

All LXD daemons act as image servers. Unless told otherwise, all images loaded in the image store are marked as private and so only trusted clients can retrieve those images, but should you want to make a public image server, all you have to do is tag a few images as public and make sure your LXD daemon is listening on the network.

Just running a public LXD server

The easiest way to share LXD images is to run a publicly visible LXD daemon.

You typically do that by running:

lxc config set core.https_address "[::]:8443"

Remote users can then add your server as a public image server with:

lxc remote add <some name> <IP or DNS> --public

They can then use it just as they would any of the default image servers. As the remote server was added with “--public”, no authentication is required and the client is restricted to images which have themselves been marked as public.

To change what images are public, just “lxc image edit” them and set the public flag to true.

Use a static web server

As mentioned above, “lxc image import” supports downloading from a static http server. The requirements are basically:

  • The server must support HTTPS with a valid certificate, TLS 1.2 and EC ciphers
  • When hitting the URL provided to “lxc image import”, the server must return an answer including the LXD-Image-Hash and LXD-Image-URL HTTP headers

If you want to make this dynamic, you can have your server look for the LXD-Server-Architectures and LXD-Server-Version HTTP headers which LXD will provide when fetching the image. This allows you to return the right image for the server’s architecture.
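A quick way to check whether a URL is usable this way is to look at the response headers, for example with curl (a sketch; it assumes the server also returns the headers on a HEAD request, and reuses the example URL from earlier):

curl -sI https://dl.stgraber.org/lxd | grep -i '^LXD-Image-'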

Build a simplestreams server

The “ubuntu:” and “ubuntu-daily:” remotes aren’t using the LXD protocol (“images:” is); they instead use a different protocol called simplestreams.

simplestreams is basically an image server description format, using JSON to describe a list of products and files related to those products.

It is used by a variety of tools like OpenStack, Juju, MAAS, … to find, download or mirror system images and LXD supports it as a native protocol for image retrieval.

While certainly not the easiest way to start providing LXD images, it may be worth considering if your images can also be used by some of those other tools.

More information can be found here.

Conclusion

I hope this gave you a good idea of how LXD manages its images and how to build and distribute your own. The ability to have the exact same image easily available, bit for bit, on a bunch of globally distributed systems is a big step up from the old LXC days and leads the way to more reproducible infrastructure.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net

And if you don’t want or can’t install LXD on your own machine, you can always try it online instead!

Read more
Stéphane Graber

This is the fourth blog post in this series about LXD 2.0.

LXD logo

Available resource limits

LXD offers a variety of resource limits. Some of those are tied to the container itself, like memory quotas, CPU limits and I/O priorities. Some are tied to a particular device instead, like I/O bandwidth or disk usage limits.

As with all LXD configuration, resource limits can be dynamically changed while the container is running. Some may fail to apply, for example if setting a memory value smaller than the current memory usage, but LXD will try anyway and report back on failure.

All limits can also be inherited through profiles in which case each affected container will be constrained by that limit. That is, if you set limits.memory=256MB in the default profile, every container using the default profile (typically all of them) will have a memory limit of 256MB.

We don’t support resource limits pooling where a limit would be shared by a group of containers, there is simply no good way to implement something like that with the existing kernel APIs.

Disk

This is perhaps the most requested and obvious one. Simply setting a size limit on the container’s filesystem and have it enforced against the container.

And that’s exactly what LXD lets you do!
Unfortunately this is far more complicated than it sounds. Linux doesn’t have path-based quotas, instead most filesystems only have user and group quotas which are of little use to containers.

This means that right now LXD only supports disk limits if you’re using the ZFS or btrfs storage backend. It may be possible to implement this feature for LVM too but this depends on the filesystem being used with it and gets tricky when combined with live updates as not all filesystems allow online growth and pretty much none of them allow online shrink.

CPU

When it comes to CPU limits, we support 4 different things:

  • Just give me X CPUs
    In this mode, you let LXD pick a bunch of cores for you and then load-balance things as more containers and CPUs go online/offline.
    The container only sees that number of CPUs.
  • Give me a specific set of CPUs (say, core 1, 3 and 5)
    Similar to the first mode except that no load-balancing is happening, you’re stuck with those cores no matter how busy they may be.
  • Give me 20% of whatever you have
    In this mode, you get to see all the CPUs but the scheduler will restrict you to 20% of the CPU time but only when under load! So if the system isn’t busy, your container can have as much fun as it wants. When containers next to it start using the CPU, then it gets capped.
  • Out of every measured 200ms, give me 50ms (and no more than that)
    This mode is similar to the previous one in that you get to see all the CPUs but this time, you can only use as much CPU time as you set in the limit, no matter how idle the system may be. On a system without over-commit this lets you slice your CPU very neatly and guarantees constant performance to those containers.

It’s also possible to combine one of the first two with one of the last two, that is, request a set of CPUs and then further restrict how much CPU time you get on those.

On top of that, we also have a generic priority knob which is used to tell the scheduler who wins when you’re under load and two containers are fighting for the same resource.

Memory

Memory sounds pretty simple, just give me X MB of RAM!

And it absolutely can be that simple. We support that kind of limit as well as percentage-based requests: just give me 10% of whatever the host has!

Then we support some extra stuff on top. For example, you can choose to turn swap on and off on a per-container basis and, if it’s on, set a priority so you can choose which containers will have their memory swapped out to disk first!

Oh and memory limits are “hard” by default. That is, when you run out of memory, the kernel out of memory killer will start having some fun with your processes.

Alternatively you can set the enforcement policy to “soft”, in which case you’ll be allowed to use as much memory as you want so long as nothing else is. As soon as something else wants that memory, you won’t be able to allocate anything until you’re back under your limit or until the host has memory to spare again.

Network I/O

Network I/O is probably our simplest looking limit, trust me, the implementation really isn’t simple though!

We support two things. The first is a basic bit/s limit on network interfaces. You can set a limit on ingress and egress traffic, or just set the “max” limit which then applies to both. This is only supported for “bridged” and “p2p” type interfaces.

The second thing is a global network I/O priority which only applies when the network interface you’re trying to talk through is saturated.
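As a sketch of how such limits are typically applied (the device is assumed to be a bridged NIC named eth0, and the key names limits.ingress, limits.egress and limits.network.priority are the ones listed in the LXD configuration reference linked later in this post):

lxc config device set my-container eth0 limits.ingress 100Mbit
lxc config device set my-container eth0 limits.egress 30Mbit
lxc config set my-container limits.network.priority 5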

Block I/O

I kept the weirdest for last. It may look straightforward and feel like that to the user but there are a bunch of cases where it won’t exactly do what you think it should.

What we support here is basically identical to what I described in Network I/O.

You can set IOps or byte/s read and write limits directly on a disk device entry and there is a global block I/O priority which tells the I/O scheduler who to prefer.
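A similar sketch for block I/O (assuming the container’s root disk device and the limits.read, limits.write and limits.disk.priority keys from the same configuration reference):

lxc config device set my-container root limits.read 30MB
lxc config device set my-container root limits.write 10MB
lxc config set my-container limits.disk.priority 5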

The weirdness comes from how and where those limits are applied. Unfortunately the underlying feature we use to implement those uses full block devices. That means we can’t set per-partition I/O limits let alone per-path.

It also means that when using ZFS or btrfs which can use multiple block devices to back a given path (with or without RAID), we effectively don’t know what block device is providing a given path.

This means that it’s entirely possible, in fact likely, that a container may have multiple disk entries (bind-mounts or straight mounts) which are coming from the same underlying disk.

And that’s where things get weird. To make things work, LXD has logic to guess what block devices back a given path, this does include interrogating the ZFS and btrfs tools and even figures things out recursively when it finds a loop mounted file backing a filesystem.

That logic, while not perfect, usually yields a set of block devices that should have a limit applied. LXD then records that and moves on to the next path. When it’s done looking at all the paths, it gets to the very weird part: it averages the limits you’ve set for every affected block device and then applies those.

That means that “on average” you’ll be getting the right speed in the container, but it also means that you can’t have a “/fast” and a “/slow” directory both coming from the same physical disk and with differing speed limits. LXD will let you set it up but in the end, they’ll both give you the average of the two values.

How does it all work?

Most of the limits described above are applied through the Linux kernel Cgroups API. That’s with the exception of the network limits which are applied through good old “tc”.

LXD at startup time detects what cgroups are enabled in your kernel and will only apply the limits which your kernel supports. Should you be missing some cgroups, a warning will also be printed by the daemon which will then get logged by your init system.

On Ubuntu 16.04, everything is enabled by default with the exception of swap memory accounting, which requires you to pass the “swapaccount=1” kernel boot parameter.
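If you want to check what your kernel offers, the list of enabled cgroup controllers is in /proc/cgroups, and on a standard GRUB-based Ubuntu install one common way to turn on swap accounting looks roughly like this (a sketch, adjust to your bootloader setup):

cat /proc/cgroups
# edit /etc/default/grub and add swapaccount=1 to GRUB_CMDLINE_LINUX, for example:
#   GRUB_CMDLINE_LINUX="swapaccount=1"
sudo update-grub
sudo reboot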

Applying some limits

All the limits described above are applied directly to the container or to one of its profiles. Container-wide limits are applied with:

lxc config set CONTAINER KEY VALUE

or for a profile:

lxc profile set PROFILE KEY VALUE

while device-specific ones are applied with:

lxc config device set CONTAINER DEVICE KEY VALUE

or for a profile:

lxc profile device set PROFILE DEVICE KEY VALUE

The complete list of valid configuration keys, device types and device keys can be found here.

CPU

To just limit a container to any 2 CPUs, do:

lxc config set my-container limits.cpu 2

To pin to specific CPU cores, say the second and fourth:

lxc config set my-container limits.cpu 1,3

More complex pinning ranges like this work too:

lxc config set my-container limits.cpu 0-3,7-11

The limits are applied live, as can be seen in this example:

stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc config set zerotier limits.cpu 2
stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1

Note that to avoid utterly confusing userspace, lxcfs arranges the /proc/cpuinfo entries so that there are no gaps.

As with just about everything in LXD, those settings can also be applied in profiles:

stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc profile set default limits.cpu 3
stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2

To limit the CPU time of a container to 10% of the total, set the CPU allowance:

lxc config set my-container limits.cpu.allowance 10%

Or to give it a fixed slice of CPU time:

lxc config set my-container limits.cpu.allowance 25ms/200ms

And lastly, to reduce the priority of a container to a minimum:

lxc config set my-container limits.cpu.priority 0

Memory

To apply a straightforward memory limit run:

lxc config set my-container limits.memory 256MB

(The supported suffixes are kB, MB, GB, TB, PB and EB)
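The percentage-based requests mentioned earlier use the same key, so something along these lines should work to ask for a tenth of the host’s RAM:

lxc config set my-container limits.memory 10%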

To turn swap off for the container (defaults to enabled):

lxc config set my-container limits.memory.swap false

To tell the kernel to swap this container’s memory first:

lxc config set my-container limits.memory.swap.priority 0

And finally if you don’t want hard memory limit enforcement:

lxc config set my-container limits.memory.enforce soft

Disk and block I/O

Unlike CPU and memory, disk and I/O limits are applied to the actual device entry, so you either need to edit the original device or mask it with a more specific one.

To set a disk limit (requires btrfs or ZFS):

lxc config device set my-container root size 20GB

For example:

stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem                        Size Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier 179G 542M  178G   1% /
stgraber@dakara:~$ lxc config device set zerotier root size 20GB
stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem                       Size  Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier 20G  542M   20G   3% /

To restrict speed you can do the following:

lxc config device set my-container root limits.read 30MB
lxc config device set my-container root limits.write 10MB

Or to restrict IOps instead:

lxc config device set my-container root limits.read 20iops
lxc config device set my-container root limits.write 10iops

And lastly, if you’re on a busy system with over-commit, you may want to also do:

lxc config set my-container limits.disk.priority 10

To increase the I/O priority for that container to the maximum.

Network I/O

Network I/O is basically identical to block I/O as far as the available knobs go.

For example:

stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:34-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'

/dev/null 100%[===================>] 100.00M 58.7MB/s in 1.7s 

2016-03-26 22:17:36 (58.7 MB/s) - '/dev/null' saved [104857600/104857600]

stgraber@dakara:~$ lxc profile device set default eth0 limits.ingress 100Mbit
stgraber@dakara:~$ lxc profile device set default eth0 limits.egress 100Mbit
stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:47-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'

/dev/null 100%[===================>] 100.00M 11.4MB/s in 8.8s 

2016-03-26 22:17:56 (11.4 MB/s) - '/dev/null' saved [104857600/104857600]

And that’s how you throttle an otherwise nice gigabit connection to a mere 100Mbit/s one!

And as with block I/O, you can set an overall network priority with:

lxc config set my-container limits.network.priority 5
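And if you don’t need asymmetric throttling, the “max” limit mentioned earlier should let you cap both directions with a single key, for example:

lxc profile device set default eth0 limits.max 100Mbit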

Getting the current resource usage

The LXD API exports quite a bit of information on current container resource usage; you can get:

  • Memory: current, peak, current swap and peak swap
  • Disk: current disk usage
  • Network: bytes and packets received and transferred for every interface

And now if you’re running a very recent LXD (only in git at the time of this writing), you can also get all of those in “lxc info”:

stgraber@dakara:~$ lxc info zerotier
Name: zerotier
Architecture: x86_64
Created: 2016/02/20 20:01 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 29258
Ips:
 eth0: inet 172.17.0.101
 eth0: inet6 2607:f2c0:f00f:2700:216:3eff:feec:65a8
 eth0: inet6 fe80::216:3eff:feec:65a8
 lo: inet 127.0.0.1
 lo: inet6 ::1
 lxcbr0: inet 10.0.3.1
 lxcbr0: inet6 fe80::f0bd:55ff:feee:97a2
 zt0: inet 29.17.181.59
 zt0: inet6 fd80:56c2:e21c:0:199:9379:e711:b3e1
 zt0: inet6 fe80::79:e7ff:fe0d:5123
Resources:
 Processes: 33
 Disk usage:
  root: 808.07MB
 Memory usage:
  Memory (current): 106.79MB
  Memory (peak): 195.51MB
  Swap (current): 124.00kB
  Swap (peak): 124.00kB
 Network usage:
  lxcbr0:
   Bytes received: 0 bytes
   Bytes sent: 570 bytes
   Packets received: 0
   Packets sent: 0
  zt0:
   Bytes received: 1.10MB
   Bytes sent: 806 bytes
   Packets received: 10957
   Packets sent: 10957
  eth0:
   Bytes received: 99.35MB
   Bytes sent: 5.88MB
   Packets received: 64481
   Packets sent: 64481
  lo:
   Bytes received: 9.57kB
   Bytes sent: 9.57kB
   Packets received: 81
   Packets sent: 81
Snapshots:
 zerotier/blah (taken at 2016/03/08 23:55 UTC) (stateless)
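
If you’d rather consume those numbers programmatically, the same data is exposed by the REST API’s container state endpoint. A minimal sketch, assuming the default unix socket path used by the Ubuntu packages and that your user is in the “lxd” group:

curl --unix-socket /var/lib/lxd/unix.socket http://unix/1.0/containers/zerotier/state

This returns a JSON document containing the memory, disk and network counters shown above.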

Conclusion

The LXD team spent quite a few months iterating over the language we’re using for those limits. It’s meant to be as simple as it can get while remaining very powerful and specific when you need it to be.

Live application of those limits and inheritance through profiles makes it a very powerful tool to live manage the load on your servers without impacting the running services.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net

And if you can’t or don’t want to install LXD on your own machine, you can always try it online instead!

Read more
Zsombor Egri

In 2012 we started the Ubuntu UI Toolkit development with QML-only components, all logic being provided in Javascript. This allowed us to deploy components quickly, to do fast prototyping, and to tweak the behavior and look-and-feel on the fly without the need to re-package or rebuild the entire toolkit. With all its benefits, this approach revealed its downside: the impact on performance. Complex applications, which use many components, are the most affected, as their startup as well as rendering time suffers heavily from component performance.

Then came the theming, the grid units, and the i18n localization, which introduced the plugin. The theming engine was the only component implemented in C++, as we knew from the beginning that we needed to be fast at loading and especially at applying the styles to components. The style loading was done in QML using loaders, which kept the flexibility for tweaking. After several attempts at optimizing the engine, we decided to refactor it, and we managed to come up with a theming engine that was a little more than twice as fast as the previous one. Although we started to gain speed on component initialization, components were still too slow to be usable in kinetic scrolling. List views were still laggish; delegate creation of the simplest ListItem module component was still 60 times slower than that of an Item. Therefore we decided to move components to C++ one by one, especially the ones on the critical path. StyledItem was one of the first, followed by a new ListItem component, which by now you are all familiar with. So it became crystal clear that, if we want you to be able to write full QML apps and still have decent performance, we must provide at least the core logic of the components in C++ and do the styling in QML. This thought was also confirmed by the Qt developers when they announced the start of the next generation of Qt Quick Controls.

But let’s look at the biggest issues that brought us to press the reset button.

API

When a person picks up a toolkit, the first thing he or she will encounter is the API. Are the component names self-explanatory, is the API easy to use, are there ambiguities when using it, and so on. Many developers do not read the API docs; they just jump in and start running the example code, copying the examples from the documentation without reading a line of it. Next they start experimenting with the components, changing properties, adding functionality, and so they begin shaping their ideas for their apps.

I think API-wise we are in pretty good shape: we tried to stay as close to the declarative QML world as possible and to follow the practices set by the Qt Company. I know, not everything is configurable in the components, and that is mostly due to the policy we started with, which was to keep as much configuration as possible in the styling, so we could keep consistency between components across applications. But there are different ways to achieve consistency and still keep configurability at a level where developers are happy to use the API. Both sides have their benefits: a smaller API is less complex than one with plenty of configuration options, even if those are just color values; on the other hand, it makes it impossible to change the visuals. Some of you may think the API is crap because we don’t provide enough customization, or access to certain elements of a component. We do feel and understand your pain, and we will try to overcome it and compensate for it in the future.

Behavior

When a developer starts using a component, he or she expects it to do what it is meant for. A Button is expected to be clickable, a text input to accept text editing gestures, a list item to provide content layout functionality when used in views, and a header to display a title and other vital functionality for the application. If a component can cooperate with another one when placed side by side, without the developer doing anything, that is the cherry on the cake. But that’s where the problem starts: a component which must take its surroundings into account and adapt its behavior accordingly creates confusion. A much cleaner approach is to let the developer do this rather than the components themselves, with the components providing connectors and enablers so these interactions can be achieved. Yes, application developers will have to do more, but now they will be in control.

Context Properties as Singletons

Context properties are nice when an application wants to expose a model or other logic to its QML UI layer. They are pretty simple to implement, but they also produce unreadable code for those who read the two worlds (QML and C++ or other non-QML code) separately. The problem gets even worse when these context properties represent singletons. QML has the notion of singletons, but those were not suitable for the functionality we needed for localization (i18n), theming and grid units. The quickest decision was to provide them as context properties, so that whenever the locale, system theme or the screen’s grid unit changes during the application’s lifetime they are automatically updated, and any bindings using them are automatically re-evaluated. However, these context properties cannot be used in shared Javascript libraries. And our measurements proved that importing a module which contains and uses code-behind Javascript libraries takes almost 3 times longer than one which uses shared libraries. In addition, now that convergence brings the multi-monitor feature to Ubuntu, each monitor can have a different grid unit size, which means the global units context property singleton is not usable in an application which uses multiple windows. So we must get rid of these kinds of interpretations of singletons and provide proper ones which are naturally supported by QML.

Complex Theming

Now this is one of the biggest problems. The theming went through a complete evolution: from CSS-like styling to a complete QML-based declarative styling, and then to sub-theming, so each application can use multiple themes at the same time. The performance increased dramatically when we dropped the first version in favor of the declarative one, but it is still slower when compared to a component which implements its visuals on top of a template that provides the logic (see QtQuick Controls second generation).

Performance

Oh, yes. All of the above contribute to the slow performance of the components, which results in bad performance in applications. Styling is still a bottleneck. We’ve ported some components from QML to C++ to gain speed in both loading and UI response time, however we still have components written entirely in QML and Javascript, and those are clearly performance eaters. And these monsters catch your eye because they are used the most: AdaptivePageLayout turned out to be the most loved component due to its support for converged application development, but there are also the text inputs (TextField and TextArea), which again take too long to instantiate. We have to make them performant, and the only solution is to implement them in C++. Of course, C++ is not the Holy Grail; one can make nasty things there too. But so far we’ve managed to get the components we’ve ported to C++ to behave really well, and they have even provided a performance gain to the components derived from them. There was a reason why BlackBerry made its toolkit in C++ and exposed it to QML...

The Plan

So we came up with a plan. And the plan includes you. The plan needs you to succeed, it won’t work without you.

First we thought that we could introduce the new features and slowly turn all the components into performant ones. But then came DPR support: despite the fact that from Qt 5.7 onwards it will support floating point values, QWidget-based apps will still be broken, as those only support integer sizes. This can be handled behind the scenes, however apps with multiple windows must support different grid unit/DPR sizes when those windows are laid out on different screens. This means that we must do something about the way we handle grid units, and that, unfortunately, cannot be done without an API break.

But then, if we break it, let’s do it properly! This leads us to really go for the second generation of the UI Toolkit, which we had already been dreaming of for about a year. This means breaking backwards compatibility in some APIs. However, whenever possible, we will keep the interface compatible, though that may not apply to component inheritance.

API design

We will start sharing all API designs with you, so you can contribute! We don’t have a clear plan yet, but we could introduce a “labs” module where the API of each component can be tried out before it lands in the stable module. We must find a way to share the API documents with you so you can comment on them and request interface changes/additions. (So later you can blame yourself for the mistakes :) ) The policy will be the same: an API, once released, cannot be revoked, only deprecated. By introducing the labs module, we could give you a few weeks or months to try it out and provide fixes/comments. Of course, components which were already designed will keep their API but will be open for additional requests. And we will also try to minimize the API to the use cases we have.

Styling

When it comes to component implementation we will follow the template+UI layer design, so components will be implemented on top of templates. If your application requires a different layout, you will be free to implement it yourself using the template. Therefore we can say that we will have two API layers: the template layer APIs and the UI layer APIs, the latter bringing additional properties that customize the look and feel of the component itself without modifying its logic (i.e. colors, borders, transitions). Both layers will be covered by the same stability promise.

In addition, theming will still be available, but will not contain anything other than the palette and the font of the theme. We don’t know yet how this will be exposed to you, either through a component property or through attached properties; we have to benchmark both solutions and see which one is more reliable. Both solutions have their pros and cons, so let’s see which one will be the winner.

When Do We Start?

As soon as possible! First we need to open a repository and provide the skeleton for it, and then move the former singletons so we have a clear API for them. Then we need to bring the components one by one from 1.x into the new base, and revisit each component’s API with you all. We will let you know when the trunk is available so you can start playing with it.

When Will It Be Available?

The journey will be a bit longer as we must keep UI Toolkit 1.3 up to date and stable, and in parallel provide features for 2.0. The expectation is that by the end of October we should have a few components in the labs module so they can be tested. We expect components appearing in labs to be written in C++, so no more QML-first-then-move-to-C++ approach; the idea is that once a component’s API is seen to be stable enough, we move it to the released package without any extra effort. Also, as with all major version changes, this version will not be backwards compatible nor usable alongside 1.x versions, meaning that your QML application will not be able to import 1.x and 2.0 at the same time.

Shouldn’t We Take The Next Generation of QtQuick Controls as base?

That is a good point, and we have been considering that option too. However, some of our components’ behavior is so different that it may make more sense to follow a different path rather than take those as a base. But we promise we will keep it in mind as an option. We had a discussion back in December about blending QtQuick Controls in with the UI Toolkit, see it here.

Final words

It will be a long journey, a tough one, but finally it will be properly open. Lots of IRC discussions, hangouts, videos, labs work… It’ll be fun! I cannot promise pizza or beer, but I promise it’ll be a hell of a good ride!

 

Read more
David Henningsson

So, assume that you have some new hardware that works for the most part, but you have some problems with your built-in sound card. The problem has been fixed upstream, but if you switch to that particular upstream kernel, you will lose Ubuntu kernel security updates. In some cases, bug fixes will come to Ubuntu kernels too – after some time – but in other cases these fixes won’t, for a variety of reasons.

You want to run a standard Ubuntu kernel, except for your sound driver (or some other driver), which you want to be something different. This actually happens quite often when our team enables hardware that isn’t yet on the market and therefore lacks full support in already released kernels.

DKMS

To the rescue comes DKMS (short for Dynamic kernel module support), which installs the source of the actual driver on the machine, and, whenever the Ubuntu kernel is upgraded, automatically recompiles the driver to fit the new kernel. The compiled modules are installed into the right directory for them to be used at next boot. We’ve used this tool for several years, and found it to be incredibly useful.

Launchpad automation

Launchpad has a feature called recipes, which combines one or more bzr branches and automatically makes a source package whenever one of the source branches changes. The source package is then uploaded to a PPA, which builds a binary package from it.

What, then, is the result of all this well-oiled machinery? That every day you have the latest sound driver, ready for you to install and use to see if it fixes your sound issues – and because it’s packaged as a normal Debian package, uninstallation is easy in case it does not work. We have had this up and running for the Intel HDA driver for several years now, and it’s been useful for both Canonical and the Ubuntu community.

Details

That’s the bird’s-eye overview. In practice, things are a bit more complicated. Get ready for the mandatory boxes-and-arrows picture:

[Figure: hda-build-flow2 – the build flow from the upstream git tree to the end user’s machine]

Preparing for import

Our main source is the master branch of sound.git, maintained by Takashi Iwai. However, Launchpad does not yet support git in recipe builds; therefore, a machine somewhere in the cloud runs a preparation script. This script checks the git branch for updates every hour and, if there is one, starts by filtering out everything but the “sound” directory (a simple optimization, because kernel trees are huge). The result is added to a bzr branch.

Actually this cloud machine does one more thing, but it’s more of a bonus: it runs some hda-emu based testing. Hda-emu is a tool for emulating an HD-audio codec, and it takes alsa-info as input. So, we contributed a lot of alsa-infos from machines Canonical enables to the upstream hda-emu repository, along with some scripts to run emulation tests on all of them. That way, in case something breaks, we get an early warning before the code reaches more people. The most common reason for the test to break, however, is not an actual bug but that the hda-emu tool needs updating to handle changes in the kernel driver. Therefore, the script does not stop when this happens; it just puts a warning message in the bzr commit log.

The cloud machine runs a bzr server, which Launchpad then checks a few times per day for updates, and imports changes into a Launchpad hosted mirror branch.

Making a DKMS package

When our Launchpad recipe detects that the bzr branch has changed, it is re-run. The recipe is quite simple – it only copies files from different branches into one directory, creates a source package out of the result, and uploads that package to a PPA. That’s where we combine the upstream source with our DKMS configuration. There is some scripting involved to, e.g., figure out the names of the built kernel modules – if you’re making your own DKMS package, it will probably be easier to write that file by hand.
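For reference, a hand-written dkms.conf for a simple single-module driver might look roughly like this (the package and module names are made up for illustration; this is not the configuration used for the HDA packages):

PACKAGE_NAME="my-sound-driver"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="snd-my-driver"
DEST_MODULE_LOCATION[0]="/updates/dkms"
AUTOINSTALL="yes"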

Unfortunately, compiling a new driver on an older kernel can be challenging, when the driver starts relying on features only present in the new kernel. Therefore, we regularly need to manually add patches to the new driver to make it compile on the older kernel.

Launchpad build

This part is just like any other build on a Launchpad PPA – it takes a source package and builds a binary package. This is where the backport patches actually get applied to the source. Please note that even though this is a binary package, what’s inside it is not compiled binaries but the source code for the driver. This is because the final compilation occurs on the end user’s machine.

(A funny quirk: when DKMS is invoked, it creates a .deb file by itself, but for some reason Launchpad wouldn’t accept this .deb file. I never really figured out why, but instead worked around it by manually unpacking DKMS’s .deb, then repacking it again using low-level dpkg-gencontrol and dpkg-deb tools.)

The binary package is then published in the PPA, downloaded/copied by the end user to his/her machine, and installed just like any other Debian package.

On the end user machine

The final step, where the driver source is combined with a standard Ubuntu kernel, is done on the end user’s machine. DKMS itself installs triggers on the end user machine that are called every time a new kernel is installed, upgraded or removed.

On installation of a new kernel, DKMS will verify that the relevant kernel header package is also installed, then use these headers to recompile all installed DKMS binary packages against the new kernel. The resulting files are copied into /lib/modules/<kernel>/updates/dkms. On installation of a new DKMS binary package, the default is to recompile the new package against the latest kernel and the currently running kernel.

DKMS also runs depmod to ensure the kernel will pick up the newly compiled modules.
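If you want to inspect or redo any of this by hand, the dkms command line can be used directly; a sketch using the illustrative module name and version from the dkms.conf example above:

dkms status
sudo dkms build -m my-sound-driver -v 1.0 -k $(uname -r)
sudo dkms install -m my-sound-driver -v 1.0 -k $(uname -r)

The first command lists which modules are built for which kernels; the other two rebuild and install a module against the currently running kernel.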

Final remarks

There are some caveats which might be worth mentioning.

First, if you combine the regular Ubuntu kernel (with security updates) with a DKMS driver, you will get security updates for the entire kernel except that specific driver, so in theory you could be left with a security issue if the vulnerability is in the specific driver you use DKMS for. However, in practice the vast majority of security bugs are in userspace-facing code rather than deep down in hardware-specific drivers.

Second, with every Ubuntu kernel released there is a potential risk of breakage, e.g., if the DKMS driver calls a function in the kernel and that function changes its signature, then the DKMS driver will fail to compile and install on the new kernel. Or even worse, the function changes behavior without changing its signature, so that the DKMS driver compiles just fine but breaks in some way when it runs. All I can say about that is that, to my knowledge, if this can happen at all it happens very rarely – I’ve never seen it cause any problems in practice.

Read more
Sergio Schvezov

Snapcrafting a kernel

Introduction

With snapcraft 2.5, which can be installed on the upcoming 16.04 Xenial Xerus with apt or consumed from the 2.5 tag on GitHub, we have included two interesting plugins: kbuild and kernel.

The kbuild plugin is interesting in itself, but here we will be discussing the kernel plugin, which is based on the kbuild one.

A note of caution though: this kernel plugin is still not considered production-ready. This doesn’t mean you will build kernels that don’t work on today’s version of Ubuntu Core, but caution is required as the nature of rolling, which is what this kernel plugin targets, can still change. Additionally, we may still modify the plugin’s options for the part setup itself.

Last but not least we are introducing, given the nature of kernel building, some experimental cross building support. The reason for this is that cross compiling a kernel is well understood and straightforward.

Walkthrough

Objective

The final objective is to obtain a kernel snap; we want to create a kernel that will work on the DragonBoard 410c from Arrow, which features Qualcomm’s Snapdragon 410. To do so we will take a look at the 96boards wiki and the 96boards published kernel.

Setup

You must be running a Xenial Xerus system and have at least snapcraft 2.5 installed; make sure by running:

$ snapcraft -v
2.5

If not, then:

$ apt update
$ apt install snapcraft

Cloning the kernel

Since the kernel is the main project and we want to iterate quickly, it makes sense to clone it and start snapcrafting from there, so let’s clone it:

git clone --depth 1 https://github.com/96boards/linux

Depending on when you do this, you might also need to cherry-pick 6113222fa5386433645c7707b4239a9eba444523.
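If that turns out to be necessary, it would look something like this (with a shallow clone you may first need to deepen the history or fetch the commit before it can be applied):

git cherry-pick 6113222fa5386433645c7707b4239a9eba444523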

Creating the base snapcraft.yaml

Go into the recently cloned kernel directory and let’s get started with a yaml that has the standard entries for someone familiar with snapcraft.yaml:

name: 96boards-kernel
version: 4.4.0
summary: 96boards reference kernel with qualcomm firmware
description: this is an example on how to build a kernel snap.

Now this is a kernel snap, so let’s add that information in; this is rather important since, if not done, the resulting snap might as well be some sort of asset holder. By declaring the snap type, snappy Ubuntu Core will know what to do with it:

name: 96boards-kernel
version: 4.4.0
summary: 96boards reference kernel with qualcomm firmware
description: this is an example on how to build a kernel snap.
type: kernel

That’s all we need with regards to headers.

Adding parts

kernel

So let’s add some parts. The first part will use the new kernel plugin; this plugin’s help can be seen by running:

snapcraft help kernel

The kernel plugin is based on the kbuild one, so there are some extra parameters we can use from that plugin, which can be seen by running:

snapcraft help kbuild

And finally these plugins make use of snapcraft’s source helpers, which can be discovered by running:

snapcraft help sources

So when we look at the wiki again we will notice there are 2 defconfigs, defconfig and distro.config. Even though distro.config already defines squashfs support to be built as a module, let’s make use of kconfigs and set it explicitly (we also set a couple of other kernel configurations). We will build 2 device trees making use of kernel-device-trees. In kernel-initrd-modules we will list squashfs as we need support for it to boot.

Given that particular piece of information let’s work on adding this part:

name: 96boards-kernel
version: 4.4.0
summary: 96boards reference kernel with qualcomm firmware
description: this is an example on how to build a kernel snap.
type: kernel

parts:
    kernel:
        plugin: kernel
        source: .
        kdefconfig: [defconfig, distro.config]
        kconfigs:
            - CONFIG_LOCALVERSION="-96boards"
            - CONFIG_DEBUG_INFO=n
            - CONFIG_SQUASHFS=m
        kernel-initrd-modules:
            - squashfs
        kernel-image-target: Image
        kernel-device-trees:
            - qcom/apq8016-sbc
            - qcom/msm8916-mtp

firmware

To run this kernel on the DragonBoard we will need to get some firmware from Qualcomm, so head over to https://developer.qualcomm.com/download/db410c/linux-board-support-package-v1.2.zip and get the zip file. Extract the firmware tarball from inside that zip and create a firmware part:

name: 96boards-kernel
version: 4.4.0
summary: 96boards reference kernel with qualcomm firmware
description: this is an example on how to build a kernel snap.
type: kernel

parts:
    kernel:
        plugin: kernel
        source: .
        kdefconfig: [defconfig, distro.config]
        kconfigs:
            - CONFIG_LOCALVERSION="-96boards"
            - CONFIG_DEBUG_INFO=n
            - CONFIG_SQUASHFS=m
        kernel-initrd-modules:
            - squashfs
        kernel-image-target: Image
        kernel-device-trees:
            - qcom/apq8016-sbc
            - qcom/msm8916-mtp
    firmware:
        plugin: tar-content
        source: firmware.tar
        destination: lib/firmware
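
Getting firmware.tar into place could look something like this, with the exact location of the tarball inside the zip left for you to check after extraction:

$ unzip linux-board-support-package-v1.2.zip
$ cp <path-to-extracted-firmware-tarball> firmware.tar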

Building

Now that we have a complete snapcraft.yaml we can proceed to build. If you did this on a 64-bit system, you will be able to cross-compile this snap; just run:

$ snapcraft --target-arch arm64

This build will take a while, an average of 30 minutes give or take. You will eventually see a message that says Snapped 96boards-kernel_4.4.0_arm64.snap. That means you are done and have successfully created a kernel snap.

Read more