Canonical Voices

Stéphane Graber

Introduction

I maintain a number of development systems that are used as throwaway machines to reproduce LXC and LXD bugs by the upstream developers. I use MAAS to track who’s using what and to have the machines deployed with whatever version of Ubuntu or CentOS is needed to reproduce a given bug.

A number of those systems are proper servers with hardware BMCs on a management network that MAAS can drive using IPMI. Another set of systems are virtual machines that MAAS drives through libvirt.

But I’ve long had another system I wanted to get in there. That machine is a desktop computer but with a server grade SAS controller and internal and external arrays. That machine also has a Fiber Channel HBA and Infiniband card for even less common setups.

The trouble is that this being a desktop computer, it’s lacking any kind of remote management that MAAS supports. That machine does however have a good PCIe network card which provides reliable wake-on-lan.

Back in the day (MAAS 1.x), there was a wake-on-lan power type that would have covered my use case. This feature was, however, removed from MAAS 2.x (see LP: #1589140), and the development team suggests that users who want the old wake-on-lan feature instead install Ubuntu 14.04 and the old MAAS 1.x branch.

Implementing Wake on LAN in MAAS 2.x

I am, however, not particularly willing to install an old Ubuntu release and an old version of MAAS just for that one trivial feature, so I instead spent a bit of time implementing just the bits I needed and keeping a patch around to be re-applied whenever MAAS changes.

MAAS doesn’t provide a plugin system for power types, so I unfortunately couldn’t just write a plugin and distribute that as an unofficial power type for those who need WOL. I instead had to resort to modifying MAAS directly to add the extra power type.

The code change needed to re-implement a wake-on-lan power type is pretty simple and only took me a few minutes to sort out. The patch can be found here: https://dl.stgraber.org/maas-wakeonlan.diff
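For context, the `wakeonlan` tool installed in the steps below works by broadcasting a UDP “magic packet”: 6 bytes of 0xFF followed by the target MAC address repeated 16 times. Here is a minimal Python sketch of that packet (the MAC address below is a placeholder, not one of my machines):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF followed by the
    target MAC address repeated 16 times (102 bytes in total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet; a NIC with WOL enabled wakes the machine."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

This is only an illustration of what the tool does on the wire; the patch itself simply shells out to `wakeonlan`.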

To apply it to your MAAS, do:

sudo apt install wakeonlan
wget https://dl.stgraber.org/maas-wakeonlan.diff
sudo patch -p1 -d /usr/lib/python3/dist-packages/provisioningserver/ < maas-wakeonlan.diff
sudo systemctl restart maas-rackd.service maas-regiond.service

Once done, you’ll now see this in the web UI:

After selecting the new “Wake on LAN” power type, enter the MAC address of the network interface that you have WOL enabled on and save the change.

MAAS will then be able to turn the system on, allowing for the normal commissioning and deployment stages. For everything else, this power type behaves like the “Manual” type, asking the user to manually go shut down or reboot the system, as you can’t do that through Wake on LAN.

Note that you’ll have to re-apply part of the patch whenever MAAS is updated. The patch modifies two files and adds a new one. The new file won’t be removed during an upgrade, but the two modified files will get reverted and need patching again.

Conclusion

This is certainly a hack, and if your system supports anything better than Wake on LAN, or you’re willing to buy a supported PDU just for that one system, then you should do that instead.

But if the inability to turn a system on is all that stands between you and adding it to your MAAS, as was the case for me, then that patch may help you.

I hope that in time MAAS will either get that feature back in some way or get a plugin system that I can use to ship that extra power type in its own separate package without needing to alter any of MAAS’ own files.

Read more
UbuntuTouch

For a Python project, we know that all we normally have to do is set the plugin to python in our snapcraft.yaml, and snapcraft downloads the Python version it specifies for the project. But some developers may need a particular version of Python; how do we achieve that? In today's tutorial, we introduce a new feature added in snapcraft 2.27.


Let's first take a look at a project I made:

https://github.com/liu-xiao-guo/python-plugin

snapcraft.yaml

name: python36
version: '0.1' 
summary: This is a simple example not using python plugin
description: |
  This is a python3 example

grade: stable 
confinement: strict

apps:
  python36:
    command: helloworld_in_python
  python-version:
    command: python3 --version

parts:
  my-python-app:
    source: https://github.com/liu-xiao-guo/python-helloworld.git
    plugin: python
    after: [python]
  python:
    source: https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tar.xz
    plugin: autotools
    configflags: [--prefix=/usr]
    build-packages: [libssl-dev]
    prime:
      - -usr/include

Here, for our Python project, the yaml points the app at the Python defined in our own python part (via after: [python]). That Python is downloaded directly from the internet and built from source.

We can now build the snap and run our application:

$ python36
Hello, world

Clearly our Python works. We can check the Python version with the python36.python-version command:

$ python36.python-version 
Python 3.6.0

It shows that the Python we are currently running is version 3.6.0, exactly the version downloaded during the snapcraft build.
Author: UbuntuTouch, posted 2017/2/20 9:23:30

Read more
UbuntuTouch

Socket.io enables bidirectional, real-time communication between our server and clients. Compared with plain HTTP, it has the advantage of transferring far less data; websockets share the same advantage. You can easily push your data to the server and receive event-driven responses, without polling. In today's tutorial, we show how to use socket.io and websockets for two-way communication.


1) Creating a socket.io server


First, let's take a look at the project I built:


Let's start with our snapcraft.yaml file:

snapcraft.yaml

name: socketio
version: "0.1"
summary: A simple shows how to make use of socket io
description: socket.io snap example

grade: stable
confinement: strict

apps:
  socket:
    command: bin/socketio
    daemon: simple
    plugs: [network-bind]

parts:
  nod:
    plugin: nodejs
    source: .
   

This is a Node.js project, so we use the nodejs plugin. Our package.json file is as follows:

package.json

{
  "name": "socketio",
  "version": "0.0.1",
  "description": "Intended as a nodejs app in a snap",
  "license": "GPL-3.0",
  "author": "xiaoguo, liu",
  "private": true,
  "bin": "./app.js",
  "dependencies": {
    "express": "^4.10.2",
    "nodejs-websocket": "^1.7.1",
    "socket.io": "^1.3.7"
  }
}

Since we need a web server, we install the express framework package. We also use socket.io and nodejs-websocket, so all of these packages are bundled into our snap.

Now let's look at the design of our application, app.js:

app.js

#!/usr/bin/env node

var express = require('express');
var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);	

app.get('/', function(req, res){
   res.sendFile(__dirname + '/www/index.html');
});

app.use(express.static(__dirname + '/www'));

//Whenever someone connects this gets executed
io.on('connection', function(socket){
  console.log('A user connected');
  
  setInterval(function(){
	  var value = Math.floor((Math.random() * 1000) + 1);
	  io.emit('light-sensor-value', '' + value);
	  // console.log("value: " + value)
	  
	  // This is another way to send data
	  socket.send(value);
  }, 2000); 

  //Whenever someone disconnects this piece of code executed
  socket.on('disconnect', function () {
    console.log('A user disconnected');
  });

});

http.listen(4000, function(){
  console.log('listening on *:4000');
});

var ws = require("nodejs-websocket")

console.log("Going to create the server")

String.prototype.format = function() {
    var formatted = this;
    for (var i = 0; i < arguments.length; i++) {
        var regexp = new RegExp('\\{'+i+'\\}', 'gi');
        formatted = formatted.replace(regexp, arguments[i]);
    }
    return formatted;
};
 
// Scream server example: "hi" -> "HI!!!" 
var server = ws.createServer(function (conn) {
	
    console.log("New connection")
	var connected = true;
    
    conn.on("text", function (str) {
        console.log("Received "+str)
        conn.sendText(str.toUpperCase()+"!!!")
    })
    
    conn.on("close", function (code, reason) {
        console.log("Connection closed")
        connected = false
    })
        	
  setInterval(function(){
	  var value = Math.floor((Math.random() * 1000) + 1);
	  var data = '{"data":"{0}"}'.format(value)
	  if (connected){
		conn.send(data);
	  }
  }, 2000); 	
}).listen(4001)

In the first part of the code, we create a web server listening on port 4000 and start the socket.io server, which waits for client connections. Once there is a connection, we use the following code to send some data at regular intervals:

//Whenever someone connects this gets executed
io.on('connection', function(socket){
  console.log('A user connected');
  
  setInterval(function(){
	  var value = Math.floor((Math.random() * 1000) + 1);
	  io.emit('light-sensor-value', '' + value);
	  // console.log("value: " + value)
	  
	  // This is another way to send data
	  socket.send(value);
  }, 2000); 

  //Whenever someone disconnects this piece of code executed
  socket.on('disconnect', function () {
    console.log('A user disconnected');
  });

});

Although the data is random, it is mainly meant to show how things work; in a real application, the data could come from sensors. On the client side, we open the address the web server runs at:


We can see data coming in continuously and being displayed in the client. For the details of the design, see the index.html file in the www directory.


2) Creating a websocket server


In our app.js, the following code implements a websocket server, on port 4001.

app.js


var ws = require("nodejs-websocket")

console.log("Going to create the server")

String.prototype.format = function() {
    var formatted = this;
    for (var i = 0; i < arguments.length; i++) {
        var regexp = new RegExp('\\{'+i+'\\}', 'gi');
        formatted = formatted.replace(regexp, arguments[i]);
    }
    return formatted;
};
 
// Scream server example: "hi" -> "HI!!!" 
var server = ws.createServer(function (conn) {
	
    console.log("New connection")
	var connected = true;
    
    conn.on("text", function (str) {
        console.log("Received "+str)
        conn.sendText(str.toUpperCase()+"!!!")
    })
    
    conn.on("close", function (code, reason) {
        console.log("Connection closed")
        connected = false
    })
        	
  setInterval(function(){
	  var value = Math.floor((Math.random() * 1000) + 1);
	  var data = '{"data":"{0}"}'.format(value)
	  if (connected){
		conn.send(data);
	  }
  }, 2000); 	
}).listen(4001)

Likewise, once there is a connection, we send a value to the client every two seconds. To make things easy to see, we designed a QML client.
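The payload travelling over the websocket is a tiny JSON object produced by the custom format helper in app.js. A minimal Python sketch of the same message construction (illustration only, not part of the snap):

```python
import json
import random

def make_reading() -> str:
    """Build the same payload app.js sends every two seconds:
    a random 1..1000 value wrapped as {"data": "<value>"}."""
    value = random.randint(1, 1000)
    return json.dumps({"data": str(value)})

# The QML client does JSON.parse(data) and reads json.data.
print(make_reading())
```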

Main.qml


import QtQuick 2.4
import Ubuntu.Components 1.3
import Ubuntu.Components.Pickers 1.3
import Qt.WebSockets 1.0
import QtQuick.Layouts 1.1

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "dialer.liu-xiao-guo"

    width: units.gu(60)
    height: units.gu(85)

    function interpreteData(data) {
        var json = JSON.parse(data)
        console.log("Websocket data: " + data)

        console.log("value: " + json.data)
        mainHand.value = json.data
    }

    WebSocket {
        id: socket
        url: input.text
        onTextMessageReceived: {
            console.log("something is received!: " + message);
            interpreteData(message)
        }

        onStatusChanged: {
            if (socket.status == WebSocket.Error) {
                console.log("Error: " + socket.errorString)
            } else if (socket.status == WebSocket.Open) {
                // socket.sendTextMessage("Hello World....")
            } else if (socket.status == WebSocket.Closed) {
            }
        }
        active: true
    }

    Page {
        header: PageHeader {
            id: pageHeader
            title: i18n.tr("dialer")
        }

        Item {
            anchors {
                top: pageHeader.bottom
                left: parent.left
                right: parent.right
                bottom: parent.bottom
            }

            Column {
                anchors.fill: parent
                spacing: units.gu(1)
                anchors.topMargin: units.gu(2)

                Dialer {
                    id: dialer
                    size: units.gu(30)
                    minimumValue: 0
                    maximumValue: 1000
                    anchors.horizontalCenter: parent.horizontalCenter

                    DialerHand {
                        id: mainHand
                        onValueChanged: console.log(value)
                    }
                }


                TextField {
                    id: input
                    width: parent.width
                    text: "ws://192.168.1.106:4001"
                }

                Label {
                    id: value
                    text: mainHand.value
                }
            }
        }
    }
}

Run our server and client:



We can see the value changing continuously. The client code is at: https://github.com/liu-xiao-guo/dialer

In this article, we showed how to use socket.io and websockets for bidirectional, real-time communication. Many IoT applications can take full advantage of these protocols to better design their communication.

Author: UbuntuTouch, posted 2017/2/7 16:07:16

Read more
UbuntuTouch

[Original] Adding environment variables to your own snap application

When developing snap applications, we often use a wrapper script of our own, setting some variables in it so that the application can run correctly. This is especially useful when porting snaps that need certain paths redirected to the snap's writable directories to avoid confinement problems. So how do we achieve this without a wrapper?


Let's first look at an example we made:

https://github.com/liu-xiao-guo/helloworld-env

snapcraft.yaml

name: hello
version: "1.0"
summary: The 'hello' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox

grade: stable
confinement: strict
type: app  #it can be gadget or framework
icon: icon.png

apps:
 env:
   command: bin/env
   environment:
     VAR1: $SNAP/share
     VAR2: "hello, the world"
 evil:
   command: bin/evil
 sh:
   command: bin/sh

parts:
 hello:
  plugin: dump
  source: .

In the example above, we added an environment stanza to the “env” command. Inside it, we define two environment variables: VAR1 and VAR2.

Build the snap and run the “hello.env” command:

$ hello.env | grep VAR
VAR1=$SNAP/share
VAR2=hello, the world

Here we can see that, without using any wrapper script, we have added the two environment variables VAR1 and VAR2 to our application.
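Inside the confined app, these are ordinary process environment variables. A minimal Python sketch of how a program could read them (illustration only; the snap above simply runs the stock `env` binary):

```python
import os

def read_snap_vars() -> dict:
    """Read the two variables declared under `environment:` in
    snapcraft.yaml; missing ones are reported as '<unset>'."""
    return {name: os.environ.get(name, "<unset>")
            for name in ("VAR1", "VAR2")}

print(read_snap_vars())
```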


Author: UbuntuTouch, posted 2017/2/20 10:33:47

Read more
UbuntuTouch

In today's article, we will show how to package an HTML5 application as a snap. There are many HTML5 applications out there, but how do we package them as snaps? In particular, many HTML5 games were already written for the Ubuntu phone. With the method described here, we can repackage a previous click HTML5 application directly as a snap and run it on an Ubuntu desktop. Of course, the method is not limited to HTML5 apps developed for the Ubuntu phone; it works for other HTML5 applications too.




1) The HTML5 application


First, let's look at an HTML5 application I previously designed for the Ubuntu phone.


You can get the code as follows:

bzr branch lp:~liu-xiao-guo/debiantrial/wuziqi

In this app, we only care about the contents of its www directory. The project's full file listing is:

$ tree
.
├── manifest.json
├── wuziqi.apparmor
├── wuziqi.desktop
├── wuziqi.png
├── wuziqi.ubuntuhtmlproject
└── www
    ├── css
    │   └── app.css
    ├── images
    │   ├── b.png
    │   └── w.png
    ├── index.html
    └── js
        └── app.js

We want the contents of www to end up in our snap.

2) Packaging the HTML5 app as a snap


To package our HTML5 app as a snap, type the following command in the project's root directory:

$ snapcraft init

The command above creates a new snap directory under the current directory, with a file called snapcraft.yaml inside. It is really just a template; we package our application by editing it. After running the command, the file layout is:

$ tree
.
├── manifest.json
├── snap
│   └── snapcraft.yaml
├── wuziqi.apparmor
├── wuziqi.desktop
├── wuziqi.png
├── wuziqi.ubuntuhtmlproject
└── www
    ├── css
    │   └── app.css
    ├── images
    │   ├── b.png
    │   └── w.png
    ├── index.html
    └── js
        └── app.js

We modify the snapcraft.yaml file until it finally reads:

snapcraft.yaml


name: wuziqi
version: '0.1'
summary: Wuziqi Game. It shows how to snap a html5 app into a snap
description: |
  This is a Wuziqi game. There are two kinds of pieces: white and black. Two
  players take turns. The first to put pieces of the same color into a line wins.

grade: stable
confinement: strict

apps:
  wuziqi:
    command: webapp-launcher www/index.html
    plugs:
      - browser-sandbox
      - camera
      - mir
      - network
      - network-bind
      - opengl
      - pulseaudio
      - screen-inhibit-control
      - unity7

plugs:
  browser-sandbox:
    interface: browser-support
    allow-sandbox: false
  platform:
    interface: content
    content: ubuntu-app-platform1
    target: ubuntu-app-platform
    default-provider: ubuntu-app-platform

parts:
  webapp:
    after: [ webapp-helper, desktop-ubuntu-app-platform ]
    plugin: dump
    source: .
    stage-packages:
      - ubuntu-html5-ui-toolkit
    organize:
      'usr/share/ubuntu-html5-ui-toolkit/': www/ubuntu-html5-ui-toolkit
    prime:
      - usr/*
      - www/*

The explanation is as follows:
  • Since this is an HTML5 application, we can launch it through webapp-helper. In our app, we use the remote part called webapp-helper
  • On the Ubuntu phone, the web engine underneath is implemented with Qt, so Qt must be shipped with our application. Since the Qt libraries are rather large, we obtain them from the ubuntu-app-platform snap through the platform interface it provides. See https://developer.ubuntu.com/en/blog/2016/11/16/snapping-qt-apps/
  • Our index.html contains many references such as <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>. These clearly depend on ubuntu-html5-ui-toolkit, so we must include that package in the snap, which we do by listing ubuntu-html5-ui-toolkit under stage-packages
  • With organize, we remap the ubuntu-html5-ui-toolkit directory installed from that package into the project's www directory so index.html can reference it
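The organize stanza above is essentially a prefix rename applied to staged file paths. A small Python sketch of that remapping (a hypothetical helper, assuming snapcraft's prefix-match semantics):

```python
def organize(path: str, mapping: dict) -> str:
    """Remap a staged file path the way a snapcraft `organize`
    stanza would: rewrite the first matching prefix."""
    for src, dst in mapping.items():
        if path.startswith(src):
            return dst.rstrip("/") + "/" + path[len(src):].lstrip("/")
    return path

mapping = {"usr/share/ubuntu-html5-ui-toolkit/": "www/ubuntu-html5-ui-toolkit"}
print(organize("usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js", mapping))
```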
Now let's look at the original index.html file:

index.html

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>An Ubuntu HTML5 application</title>
    <meta name="description" content="An Ubuntu HTML5 application">
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=0">

    <!-- Ubuntu UI Style imports - Ambiance theme -->
    <link href="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/css/appTemplate.css" rel="stylesheet" type="text/css" />

    <!-- Ubuntu UI javascript imports - Ambiance theme -->
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/fast-buttons.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/buttons.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/dialogs.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/page.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/pagestacks.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/tab.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/tabs.js"></script>

    <!-- Application script -->
    <script src="js/app.js"></script>
    <link href="css/app.css" rel="stylesheet" type="text/css" />

  </head>

  <body>
        <div class='test'>
          <div>
              <img src="images/w.png" alt="white" id="chess">
          </div>
          <div>
              <button id="start">Start</button>
          </div>
        </div>

        <div>
            <canvas width="640" height="640" id="canvas" onmousedown="play(event)">
                 Your Browser does not support HTML5 canvas
            </canvas>
        </div>
  </body>
</html>

In the code above, index.html references files starting from /usr/share. In a confined snap, that path is not accessible (an app can only access files installed under its own project root). We therefore have to change those /usr/share/ paths into paths relative to the project's www directory:

    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/fast-buttons.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/buttons.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/dialogs.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/page.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/pagestacks.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/tab.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/tabs.js"></script>

This is why the snapcraft.yaml we saw earlier contains:

parts:
  webapp:
    after: [ webapp-helper, desktop-ubuntu-app-platform ]
    plugin: dump
    source: .
    stage-packages:
      - ubuntu-html5-ui-toolkit
    organize:
      'usr/share/ubuntu-html5-ui-toolkit/': www/ubuntu-html5-ui-toolkit
    prime:
      - usr/*
      - www/*

Above, we use organize to move the directory installed from ubuntu-html5-ui-toolkit into the project's www directory, so that its files can be used directly by the project. After packaging, the file layout looks like this:

$ tree -L 3
.
├── bin
│   ├── desktop-launch
│   └── webapp-launcher
├── command-wuziqi.wrapper
├── etc
│   └── xdg
│       └── qtchooser
├── flavor-select
├── meta
│   ├── gui
│   │   ├── wuziqi.desktop
│   │   └── wuziqi.png
│   └── snap.yaml
├── snap
├── ubuntu-app-platform
├── usr
│   ├── bin
│   │   └── webapp-container
│   └── share
│       ├── doc
│       ├── ubuntu-html5-theme -> ubuntu-html5-ui-toolkit
│       └── webbrowser-app
└── www
    ├── css
    │   └── app.css
    ├── images
    │   ├── b.png
    │   └── w.png
    ├── index.html
    ├── js
    │   ├── app.js
    │   └── jquery.min.js
    └── ubuntu-html5-ui-toolkit
        └── 0.1

As shown above, ubuntu-html5-ui-toolkit now sits under the www directory and can be used directly by our project.

Type the following command in the project's root directory:

$ snapcraft

If all goes well, we get a .snap file, which we can install with:

$ sudo snap install wuziqi_0.1_amd64.snap --dangerous

After installation, since we use content sharing to access the Qt libraries, we must also install the following snap and connect it:

$ snap install ubuntu-app-platform 
$ snap connect wuziqi:platform ubuntu-app-platform:platform

After running the commands above, we can see:

$ snap interfaces
Slot                          Plug
:account-control              -
:alsa                         -
:avahi-observe                -
:bluetooth-control            -
:browser-support              wuziqi:browser-sandbox
:camera                       -
:core-support                 -
:cups-control                 -
:dcdbas-control               -
:docker-support               -
:firewall-control             -
:fuse-support                 -
:gsettings                    -
:hardware-observe             -
:home                         -
:io-ports-control             -
:kernel-module-control        -
:libvirt                      -
:locale-control               -
:log-observe                  snappy-debug
:lxd-support                  -
:modem-manager                -
:mount-observe                -
:network                      downloader,wuziqi
:network-bind                 socketio,wuziqi
:network-control              -
:network-manager              -
:network-observe              -
:network-setup-observe        -
:ofono                        -
:opengl                       wuziqi
:openvswitch                  -
:openvswitch-support          -
:optical-drive                -
:physical-memory-control      -
:physical-memory-observe      -
:ppp                          -
:process-control              -
:pulseaudio                   wuziqi
:raw-usb                      -
:removable-media              -
:screen-inhibit-control       wuziqi
:shutdown                     -
:snapd-control                -
:system-observe               -
:system-trace                 -
:time-control                 -
:timeserver-control           -
:timezone-control             -
:tpm                          -
:uhid                         -
:unity7                       wuziqi
:upower-observe               -
:x11                          -
ubuntu-app-platform:platform  wuziqi
-                             wuziqi:camera
-                             wuziqi:mir

Our app also declares some redundant plugs, such as camera and mir above. We can see how the wuziqi app is connected to the core snap and the ubuntu-app-platform snap. After making sure they are all connected, type the following on the command line:

$ wuziqi

This launches our application. Of course, we can also launch it from the desktop Dash:






Author: UbuntuTouch, posted 2017/2/13 10:16:55

Read more
UbuntuTouch

[Original] Ubuntu Core configuration

The core snap provides a number of configuration options that allow us to customize how the system runs. Just like any other snap, the core snap's options are set with the snap set command:

$ snap set core option=value


The current value of an option can be retrieved with the snap get command:

$ snap get core option
value


Here is how to disable the system's ssh service:

Warning: disabling ssh will make the Ubuntu Core system inaccessible through the default means. Unless you provide another way to manage or log in to the system, your system will be a brick. It is recommended that you first set up a user name and password on the system so you cannot be locked out; any other way into the system is fine too. Once logged in to the Ubuntu Core system, we can use the following command to set a password, so that we can log in with a keyboard and monitor:

$ sudo passwd <ubuntu-one id>
<password>

The option accepts the following values:

  • false (default): enable the ssh service. ssh responds to connection requests directly
  • true: disable the ssh service. Existing ssh connections remain open, but any further connection is impossible

$ snap set core service.ssh.disable=true

After running the command above, any further access is refused:

$ ssh liu-xiao-guo@192.168.1.106
ssh: connect to host 192.168.1.106 port 22: Connection refused

$ snap set core service.ssh.disable=false
Running the command above lets us connect over ssh again.

We can get the current value with:

$ snap get core service.ssh.disable
false

Further reading: https://docs.ubuntu.com/core/en/reference/core-configuration

Author: UbuntuTouch, posted 2017/2/15 10:29:38

Read more
UbuntuTouch

[Repost] Qt on Ubuntu Core

Are you working on an IoT, point of sale or digital signage device? Are you looking for a secure, supported solution to build it on? Do you have needs for graphic performance and complex UI? Did you know you could build great solutions using Qt on Ubuntu and Ubuntu Core? 

To find out how, why not join this upcoming webinar? You will learn the following:

- Introduction to Ubuntu and Qt in IoT and digital signage
- Using Ubuntu and Ubuntu Core in your device
- Packaging your Qt app for easy application distribution 
- Dealing with hardware variants and GPUs


https://www.brighttalk.com/webcast/6793/246523?utm_source=China&utm_campaign=3)%20Device_FY17_IOT_Vertical_DS_Webinar_Qt&utm_medium=Social

Author: UbuntuTouch, posted 2017/2/27 13:07:38

Read more
Alan Griffiths

MirAL 1.3.2

There’s a bugfix MirAL release (1.3.2) available in ‘Zesty Zapus’ (Ubuntu 17.04) and the so-called “stable phone overlay” ppa for ‘Xenial Xerus’ (Ubuntu 16.04LTS). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

The bugfixes in 1.3.2 are:

In libmiral a couple of “fails to build from source” fixes:

Fix FTBFS against Mir < 0.26 (Xenial, Yakkety)

Update to fix FTBFS against lp:mir (and clang)

In the miral-shell example, a crash fixed:

With latest zesty’s libstdc++-6-dev miral-shell will crash when trying to draw its background text. (LP: #1677550)

Some of the launch scripts have been updated to reflect a change to the way GDK chooses the graphics backend:

change the server and client launch scripts to avoid using the default Mir socket (LP: #1675794)

Update miral-xrun to match GDK changes (LP: #1675115)

In addition a misspelling of “management” has been corrected:

miral/set_window_management_policy.h

Read more
Cemil Azizoglu

Yeay, the new Mesa (17.0.2-1ubuntu2) has landed! (Many thanks to Timo.) This new Mesa incorporates a new EGL backend for Mir (as a distro patch). We will be moving away from the old backend by Mir 1.0, but for now both the new and old backends coexist.

This new backend has been implemented as a new platform in Mesa EGL so that we can easily rip out the old platform when we are ready. Being ready means switching _all_ the EGL clients out there to the new Mesa EGL types exported by this backend.

In case you are wondering, the new EGL types are [1]:

MirConnection* –> EGLNativeDisplayType

MirSurface* –> EGLNativeWindowType

Note that we currently use MirRenderSurface for what will soon be renamed to MirSurface. So at the moment, technically, we have MirRenderSurface* as the EGLNativeWindowType.

Once we feel confident we will be pushing this patch upstream as well.

There should be no visible differences in your EGL applications due to this change, which is a good thing. If you are curious about the code differences that this new backend introduces, check out the ‘eglapp’ wrapper that we use in a number of our example apps:

http://bazaar.launchpad.net/~mir-team/mir/development-branch/view/head:/examples/eglapp.c

The new backend is activated by the ‘-r’ switch which sets the ‘new_egl’ flag, so you can see what is done differently in the code by looking at how this flag makes the code change.

Our interfaces are maturing and we are a big step closer to Mir 1.0.

-Cemil

[1] Mir does not support pixmaps.

Read more
Brandon Schaefer

When Choosing a Backend Fails

There was a recent GDK release into zesty that now probes for Mir over X11. This can cause issues when still using an X11 desktop such as Unity7 when a Mir server is running at the same time.

A common way to test Mir is to run it on top of X, which is called Mir-on-X. This means there are now two display servers running at the same time.

An example of an issue this can cause involves gnome-terminal-server. Once the Mir server is opened, it will attempt to spawn its clients on Mir instead of X11. If you then try to open a new terminal, gnome-terminal-server crashes, since it tries to spawn on Mir while you already spawned terminals on X. As you can imagine, this is frustrating to your workflow!

A simple workaround is to add this to your ~/.profile:

if [ "$XDG_CURRENT_DESKTOP" = "Unity:Unity7" ]; then
    dbus-update-activation-environment --systemd GDK_BACKEND=x11
fi

Depending on your desktop the “Unity:Unity7” bit will change.

As more toolkits start to pick other display servers as their first choice, more of these issues will become possible. Other environment variables to consider:

SDL_VIDEODRIVER
QT_QPA_PLATFORM

A bit more detail on the issue can be found here:

Choosing a Backend

Read more
Cemil Azizoglu

Hi, I’ve been wanting to have a blog for a while now. I am not sure if I’ll have the time to post on a regular basis but I’ll try.

First things first : My name is Cemil (pronounced JEH-mil), a.k.a. ‘camako’ on IRC – I work as a developer and am the team-lead in the Mir project.

Recently, I’ve been working on Mir 1.0 tasks, new Mesa EGL platform backend for Mir, Vulkan Mir WSI driver for Mesa, among other things.

Here’s something pretty for you to look at for now :

https://plus.google.com/113725654283519068012/posts/8jmrQnpJxMc

-Cemil

Read more
Stéphane Graber

LXD logo

USB devices in containers

It can be pretty useful to pass USB devices to a container. Be that some measurement equipment in a lab or maybe more commonly, an Android phone or some IoT device that you need to interact with.

Similar to what I wrote recently about GPUs, LXD supports passing USB devices into containers. Again, similarly to the GPU case, what’s actually passed into the container is a Unix character device, in this case, a /dev/bus/usb/ device node.

This restricts USB passthrough to those devices and software which use libusb to interact with them. For devices which use a kernel driver, the module should be installed and loaded on the host, and the resulting character or block device be passed to the container directly.

Note that for this to work, you’ll need LXD 2.5 or higher.

Example (Android debugging)

As an example which quite a lot of people should be able to relate to, let’s run an LXD container with the Android debugging tools installed, accessing a USB-connected phone.

This would for example allow you to have your app’s build system and CI run inside a container and interact with one or multiple devices connected over USB.

First, plug your phone over USB, make sure it’s unlocked and you have USB debugging enabled:

stgraber@dakara:~$ lsusb
Bus 002 Device 003: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 002: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 021: ID 17ef:6047 Lenovo 
Bus 001 Device 031: ID 046d:082d Logitech, Inc. HD Pro Webcam C920
Bus 001 Device 004: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 005: ID 046d:0a01 Logitech, Inc. USB Headset
Bus 001 Device 033: ID 0fce:51da Sony Ericsson Mobile Communications AB 
Bus 001 Device 003: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 002: ID 072f:90cc Advanced Card Systems, Ltd ACR38 SmartCard Reader
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Spot your phone in that list, in my case, that’d be the “Sony Ericsson Mobile” entry.
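The vendor and product IDs that LXD needs are the two halves of the “ID” field in the lsusb output. A small sketch to pull them out of a line (a helper of my own, not part of LXD):

```python
import re

def usb_ids(lsusb_line: str):
    """Extract (vendorid, productid) from one line of `lsusb` output."""
    m = re.search(r"ID ([0-9a-fA-F]{4}):([0-9a-fA-F]{4})", lsusb_line)
    if m is None:
        raise ValueError("no ID field found")
    return m.group(1), m.group(2)

line = "Bus 001 Device 033: ID 0fce:51da Sony Ericsson Mobile Communications AB"
vendor, product = usb_ids(line)
print(f"lxc config device add c1 sony usb vendorid={vendor} productid={product}")
```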

Now let’s create our container:

stgraber@dakara:~$ lxc launch ubuntu:16.04 c1
Creating c1
Starting c1

And install the Android debugging client:

stgraber@dakara:~$ lxc exec c1 -- apt install android-tools-adb
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following NEW packages will be installed:
 android-tools-adb
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 68.2 kB of archives.
After this operation, 198 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/universe amd64 android-tools-adb amd64 5.1.1r36+git20160322-0ubuntu3 [68.2 kB]
Fetched 68.2 kB in 0s (0 B/s) 
Selecting previously unselected package android-tools-adb.
(Reading database ... 25469 files and directories currently installed.)
Preparing to unpack .../android-tools-adb_5.1.1r36+git20160322-0ubuntu3_amd64.deb ...
Unpacking android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...

We can now attempt to list Android devices with:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached

Since we’ve not passed any USB device yet, the empty output is expected.

Now, let’s pass the specific device listed in “lsusb” above:

stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce productid=51da
Device sony added to c1

And try to list devices again:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

To get a shell, you can then use:

stgraber@dakara:~$ lxc exec c1 -- adb shell
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
E5823:/ $

LXD USB devices support hotplug by default, so unplugging the device and plugging it back in on the host will have it removed from and re-added to the container.

The “productid” property isn’t required; you can set only the “vendorid” so that any device from that vendor will be automatically attached to the container. This can be very convenient when interacting with a number of similar devices, or with devices that change productid depending on what mode they’re in.

stgraber@dakara:~$ lxc config device remove c1 sony
Device sony removed from c1
stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce
Device sony added to c1
stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

The optional “required” property turns off the hotplug behavior, requiring the device be present for the container to be allowed to start.
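As a sketch (reusing the container, device name, and vendorid from the example above; properties use the same key=value form shown earlier):

```shell
# Hypothetical sketch: mark the device as required at creation time.
# Container name, device name and vendorid are reused from the example above.
lxc config device add c1 sony usb vendorid=0fce required=true

# With required=true, hotplug is disabled and the container will refuse
# to start unless a matching USB device is plugged in.
```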

More details on USB device properties can be found here.

Conclusion

We are surrounded by a variety of odd USB devices, a good number of which come with possibly dodgy software, requiring a specific version of a specific Linux distribution to work. It’s sometimes hard to accommodate those requirements while keeping a clean and safe environment.

LXD USB device passthrough helps a lot in such cases, so long as the USB device uses a libusb based workflow and doesn’t require a specific kernel driver.

If you want to add a device which does use a kernel driver, locate the /dev node it creates, check if it’s a character or block device and pass that to LXD as a unix-char or unix-block type device.
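For instance, a USB serial adapter is typically claimed by a kernel driver and exposed as a character device node. A minimal sketch, assuming the driver created /dev/ttyACM0 on the host (the path and device name are illustrative):

```shell
# Check the node type: a leading "c" in the mode string means character device,
# a leading "b" means block device.
ls -l /dev/ttyACM0

# Pass the character device through to the container.
lxc config device add c1 serial unix-char path=/dev/ttyACM0
```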

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Michael Hall

Late last year, Amazon introduced a new EC2 image customized for Machine Learning (ML) workloads. To make things easier for data scientists and researchers, Amazon included a selection of ML libraries in these images so they wouldn’t have to go through the process of downloading and installing them (and often building them) themselves.

But while this saved work for the researchers, it was no small task for Amazon’s engineers. To keep offering the latest version of these libraries, they had to repeat this work every time there was a new release, which was quite often for some of them. Worst of all, they didn’t have a ready-made way to update those libraries on instances that were already running!

By this time they’d heard about Snaps and the work we’ve been doing with them in the cloud, so they asked if it might be a solution to their problems. Normally we wouldn’t snap libraries like this; we would encourage applications to bundle them into their own Snap package. But these libraries had an unusual use case: the applications that needed them weren’t meant to be distributed. Instead, the application would exist to analyze a specific data set for a specific person. So, as odd as it may sound, the application developer was the end user here and the library was the end product, which made it fit into the Snap use case.

To get them started I worked on developing a proof of concept based on MXNet, one of their most used ML libraries. Its source code is part C++, part Python, and Snapcraft makes working with both together a breeze, even with the extra preparation steps needed by MXNet’s build instructions. My snapcraft.yaml could first compile the core library and then build the Python modules that wrap it, pulling in dependencies from the Ubuntu archives and PyPI as needed.

This was all that was needed to provide a consumable Snap package for MXNet. After installing it you would just need to add the snap’s path to your LD_LIBRARY_PATH and PYTHONPATH environment variables so it would be found, but after that everything Just Worked! For an added convenience I provided a python binary in the snap, wrapped in a script that would set these environment variables automatically, so any external code that needed to use MXNet from the snap could simply be called with /snap/bin/mxnet.python rather than /usr/bin/python (or, rather, just mxnet.python because /snap/bin/ is already in PATH).
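As a sketch of that environment setup (the exact paths are assumptions and depend on how the snap lays out its content; check under /snap/mxnet/current before relying on them):

```shell
# Hypothetical sketch: expose the snap's library and Python module paths.
# SNAP_ROOT and the subdirectories below are assumptions, not verified paths.
SNAP_ROOT=/snap/mxnet/current
export LD_LIBRARY_PATH="$SNAP_ROOT/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PYTHONPATH="$SNAP_ROOT/lib/python2.7/site-packages${PYTHONPATH:+:$PYTHONPATH}"
```

The wrapped mxnet.python binary mentioned above does the equivalent of this automatically.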

I’m now working with upstream MXNet to get them building regular releases of this snap package to make it available to Amazon’s users and anyone else. The Amazon team is also seeking similar snap packages for their other ML libraries. If you are a user or contributor to any of these libraries, and you want to make it easier than ever for people to get the latest and greatest versions of them, let’s get together and make it happen! My MXNet example linked to above should give you a good starting point, and we’re always happy to help you with your snapcraft.yaml in #snapcraft on rocket.ubuntu.com.

If you’re just curious to try it out yourself, you can download my snap and then follow along with the MXNet tutorial, using the above-mentioned mxnet.python for your interactive Python shell.

Read more
Alan Griffiths

miral gets cut & paste

For some time now I’ve been intending to investigate the cut & paste mechanisms in the Unity8/Mir stack with the intention of ensuring they are supported in MirAL.

I’ve never had the time to do this, so I was surprised to discover that cut & paste is now working! (At least on Zesty.)

I assume that this is piggy-backing off the support being added to enable the “experimental” Unity8 desktop session, so I hope that this “magic” continues to work.

Read more
Michael Hall

Java is a well-established language for developing web applications, in no small part because of its industry-standard framework for building them: Servlets and JSP. Another important part of this standard is the Web Archive, or WAR, file format, which defines how to provide a web application’s executables and how they should be run in a way that is independent of the application server that will be running them.

WAR files make life easier for developers by separating the web application from the web server. Unfortunately this doesn’t actually make it easier to deploy a webapp; it only shifts some of the burden off of the developers and onto the user, who still needs to set up and configure an application server to host it. One popular option is Apache’s Tomcat webapp server, which is both lightweight and packs enough features to support the needs of most webapps.

And here is where Snaps come in. By combining both the application and the server into a single, installable package you get the best of both, and with a little help from Snapcraft you don’t have to do any extra work.

Snapcraft supports a modular build configuration by having multiple “parts”, each of which provides some aspect of your complete runtime environment in a way that is configurable and reusable. This is extended by a feature called “remote parts”, which are pre-defined parts you can easily pull into your snap by name. It’s this combination of reusable and remote parts that is going to make snapping up Java web applications incredibly easy.

The remote part we are going to use is the “tomcat” part, which will build the Tomcat application server from upstream source and bundle it in your snap, ready to go. All that you, as the web developer, need to provide is your .war file. Below is a simple snapcraft.yaml that will bundle Tomcat’s “sample” war file into a self-contained snap package.

name: tomcat-sample
version: '0.1'
summary: Sample webapp using tomcat part
description: |
 This is a basic webapp snap using the remote Tomcat part

grade: stable
confinement: strict

parts:
  my-part:
    plugin: dump
    source: .
    organize:
      sample.war: ./webapps/sample.war
    after: [tomcat]

apps:
  tomcat:
    command: tomcat-launch
    daemon: simple
    plugs: [network-bind]

Let’s go through the important bits one at a time, starting with the part named “my-part”. It uses the simple “dump” plugin, which just copies everything in its source (the current directory in this case) into the resulting snap. Here we have just the sample.war file, which we move into a “webapps” directory, because that is where the Tomcat part is going to look for war files.

Now for the magic: by specifying that “my-part” should come after the “tomcat” part (using after: [tomcat]), which isn’t defined elsewhere in the snapcraft.yaml, we trigger Snapcraft to look for a remote part by that same name, which conveniently exists for us to use. This remote part does two things: first it downloads and builds the Tomcat source code, then it generates a “tomcat-launch” shell script that we’ll use later. These two parts, “my-part” and “tomcat”, are combined in the final snap, with the Tomcat server automatically knowing about and installing the sample.war webapp.

The “apps” section of the snapcraft.yaml defines the application to be run. In this simple example all we need to execute is the “tomcat-launch” script that was created for us. This sets up the Tomcat environment variables and runtime directories so that it can run fully confined within the snap. And by declaring it to be a simple daemon we are additionally telling it to auto-start as soon as it’s installed (and after any reboot) which will be handled by systemd.

Now when you run “snapcraft” on this config, you will end up with the file tomcat-sample_0.1_amd64.snap which contains your web application, the Tomcat application server, and a headless Java JRE to run it all. That way the only thing your users need to do to run your app is to “snap install tomcat-sample” and everything will be up and running at http://localhost:8080/sample/ right away, no need to worry about installing dependencies or configuring services.
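The build-and-install round trip described above can be sketched as follows (the --dangerous flag is needed for locally built, unsigned snaps; exact flags may vary with your snapd version):

```shell
# Build the snap from the snapcraft.yaml in the current directory.
snapcraft

# Install the locally built, unsigned snap.
sudo snap install tomcat-sample_0.1_amd64.snap --dangerous

# The daemon auto-starts, so the sample webapp should respond shortly after.
curl -I http://localhost:8080/sample/
```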


If you have a webapp that you currently deploy as a .war file, you can snap it yourself in just a few minutes, use the snapcraft.yaml defined above and replace the sample data with your own. To learn more about Snaps and Snapcraft in general you can follow this tutorial as well as learning how to publish your new snap to the store.

Read more
Tom Macfarlane

Our stand occupied the same space as last year with a couple of major
changes this time around – the closure of a previously adjacent aisle
resulting in an increase in overall stand space (from 380 to 456 square
metres). With the stand now open on just two sides, this presented the
design team with some difficult challenges:

  • Maximising sight lines and impact upon approach
  • Utilising our existing components – hanging banners, display units,
    alcoves, meeting rooms – to work effectively within a larger space
  • Directing the flow of visitors around the stand

Design solution

Some key design decisions and smaller details:

  • Rotating the hanging fabric banners 90 degrees and moving them
    to the very front of the stand
  • Repositioning the welcome desk to maximise visibility from
    all approaches
  • Improved lighting throughout – from overhead banner illumination
    to alcoves and within all meeting rooms
  • Store room end wall angled 45 degrees to increase the initial sight line
  • Raised LED screens for increased visibility
  • Four new alcoves with discrete fixings for all 10x alcove screens
  • Bespoke acrylic display units for AR helmets and developer boards
  • Streamlined meeting room tables with new cable management
  • Separate store and staff rooms

Result

With thoughtful planning and attention to detail, our brand presence
at this year’s MWC was the strongest yet.

Initial design sketches

Plan and sight line 3D render

Design intent drawings

3D lettering and stand graphics
Read more
LaMont Jones

The question came up “how do I add an authoritative (secondary) name server for a domain that is managed by MAAS?”

Why would I want to do that?

There are various reasons, including that the region controller may just be busy enough, or the MAAS region spread out enough, that we don’t want to have all DNS go through it.  Another reason would be to avoid exposing the region controller to the internet, while still allowing it to provide authoritative DNS data for machines inside the region.

How do I do that?

First, we’ll need to create a secondary nameserver. For simplicity, we’ll assume that it’s an Ubuntu machine named mysecondary.example.com and that you have installed the bind9 package. We’ll also assume that you have named the domain maas, that the region controller is named region.example.com with an upstream interface having the IP address a.b.c.d, and that you have a MAAS CLI session called admin.

On mysecondary.example.com, we add this to /etc/bind/named.conf.local:

zone "maas" { type slave; file "db.maas"; masters { a.b.c.d; }; };

Then reload named there via “rndc reload”.

With the MAAS CLI, we then say (note the trailing “.” on rrdata):

maas admin dnsresource-records create name=@ domain=maas rrtype=ns rrdata=mysecondary.example.com.

At that point, mysecondary is both authoritative, and named in the NS RRset for the domain.

What else can I do?

If you call the MAAS domain somename.example.com, then you could add NS records to the example.com DNS zone delegating that zone to the MAAS region and its secondaries.
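A sketch of what that delegation might look like in the parent example.com zone file (names reused from this example; a.b.c.d remains a placeholder address):

```
; Delegate somename.example.com to the MAAS region and its secondary.
somename.example.com.     IN  NS  region.example.com.
somename.example.com.     IN  NS  mysecondary.example.com.
; Glue record for the region controller (a.b.c.d is a placeholder).
region.example.com.       IN  A   a.b.c.d
```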

What are the actual limitations?

  • The region controller is always listed as a name server for the domain, even for domains other than the default.  See also bug 1672220 about address records.
  • If MAAS is told that it’s authoritative for a domain, it IS the master/primary.
  • The MAAS region does not have zones that are other than “type master”.

Read more
Stéphane Graber

LXD logo

GPU inside a container

LXD supports GPU passthrough but this is implemented in a very different way than what you would expect from a virtual machine. With containers, rather than passing a raw PCI device and have the container deal with it (which it can’t), we instead have the host setup with all needed drivers and only pass the resulting device nodes to the container.

This post focuses on NVidia and the CUDA toolkit specifically, but LXD’s passthrough feature should work with all other GPUs too. NVidia is just what I happen to have around.

The test system used below is a virtual machine with two NVidia GT 730 cards attached to it. Those are very cheap, low-performance GPUs that have the advantage of coming as low-profile PCI cards which fit fine in one of my servers and don’t require extra power.
For production CUDA workloads, you’ll want something much better than this.

Note that for this to work, you’ll need LXD 2.5 or higher.

Host setup

Install the CUDA tools and drivers on the host:

wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo apt update
sudo apt install cuda

Then reboot the system to make sure everything is properly set up. After that, you should be able to confirm that your NVidia GPU is working properly with:

ubuntu@canonical-lxd:~$ nvidia-smi 
Tue Mar 21 21:28:34 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   26C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+

And can check that the CUDA tools work properly with:

ubuntu@canonical-lxd:~$ /usr/local/cuda-8.0/extras/demo_suite/bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GT 730
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3059.4

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3267.4

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			30805.1

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Container setup

First, let’s just create a regular Ubuntu 16.04 container:

ubuntu@canonical-lxd:~$ lxc launch ubuntu:16.04 c1
Creating c1
Starting c1

Then install the CUDA demo tools in there:

lxc exec c1 -- wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
lxc exec c1 -- dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
lxc exec c1 -- apt update
lxc exec c1 -- apt install cuda-demo-suite-8-0 --no-install-recommends

At which point, you can run:

ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

Which is expected as LXD hasn’t been told to pass any GPU yet.

LXD GPU passthrough

LXD allows for pretty specific GPU passthrough; the details can be found here.
First, let’s start with the most generic option: allowing access to all GPUs.

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:47:54 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   27C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu
Device gpu removed from c1

Now just pass whichever is the first GPU:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu id=0
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:50:37 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu
Device gpu removed from c1

You can also specify the GPU by vendorid and productid:

ubuntu@canonical-lxd:~$ lspci -nnn | grep NVIDIA
02:06.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 730] [10de:1287] (rev a1)
02:07.0 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)
02:08.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 730] [10de:1287] (rev a1)
02:09.0 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)
ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu vendorid=10de productid=1287
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:52:40 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   27C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu
Device gpu removed from c1

Which adds them both as they are exactly the same model in my setup.

But for such cases, you can also select using the card’s PCI ID with:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu pci=0000:02:08.0
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:56:52 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   27C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu 
Device gpu removed from c1

And lastly, let’s confirm that we get the same result as on the host when running a CUDA workload:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- /usr/local/cuda-8.0/extras/demo_suite/bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GT 730
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3065.4

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3305.8

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			30825.7

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Conclusion

LXD makes it very easy to share one or multiple GPUs with your containers.
You can either dedicate specific GPUs to specific containers or just share them.
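For example, to share the host’s GPUs with a set of containers rather than configuring each one individually, the gpu device can also be added to a profile (a sketch; the profile name is illustrative):

```shell
# Create a profile carrying the gpu device and apply it at launch time.
lxc profile create gpu-shared
lxc profile device add gpu-shared gpu gpu
lxc launch ubuntu:16.04 c2 -p default -p gpu-shared
```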

There is none of the overhead involved with the usual PCI-based passthrough, and only a single instance of the driver is running, with the containers acting just like normal host user processes would.

This does however require that your containers run a version of the CUDA tools which supports whatever version of the NVidia drivers is installed on the host.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Alan Griffiths

MirAL 1.3.1

There’s a bugfix MirAL release (1.3.1) available in ‘Zesty Zapus’ (Ubuntu 17.04) and the so-called “stable phone overlay” ppa for ‘Xenial Xerus’ (Ubuntu 16.04LTS). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

Unsurprisingly, given the project’s original goal, the ABI is unchanged.

The bugfixes in 1.3.1 are:

In libmiral a focus management fix:

When a dialog is hidden ensure that the active window focus goes to the parent. (LP: #1671072)

In the miral-shell example, two crashes fixed:

If a surface is deleted before its decoration is painted miral-shell can crash, or hang on exit (LP: #1673038)

If the specified “titlebar” font doesn’t exist the server crashes (LP: #1671028)

In addition a misspelling of “management” has been corrected:

SetWindowManagmentPolicy => SetWindowManagementPolicy

Read more
Dustin Kirkland


Canonical announced the Ubuntu 12.04 LTS (Precise Pangolin) release almost 5 years ago, on April 26, 2012. As with all LTS releases, Canonical has provided ongoing security patches and bug fixes for a period of 5 years. The Ubuntu 12.04 LTS (Long Term Support) period will end on Friday, April 28, 2017.

Following the end-of-life of Ubuntu 12.04 LTS, Canonical is offering Ubuntu 12.04 ESM (Extended Security Maintenance), which provides important security fixes for the kernel and the most essential user space packages in Ubuntu 12.04.  These updates are delivered in a secure, private archive exclusively available to Ubuntu Advantage customers on a per-node basis.

All Ubuntu 12.04 LTS users are encouraged to upgrade to Ubuntu 14.04 LTS or Ubuntu 16.04 LTS. But for those who cannot upgrade immediately, Ubuntu 12.04 ESM updates will help ensure the on-going security and integrity of Ubuntu 12.04 systems.

Users interested in Ubuntu 12.04 ESM updates can purchase Ubuntu Advantage at http://buy.ubuntu.com/. Credentials for the private archive will be available by the end-of-life date for Ubuntu 12.04 LTS (April 28, 2017).

Questions?  Post in the comments below and join us for a live webinar, "HOWTO: Ensure the Ongoing Security Compliance of your Ubuntu 12.04 Systems", on Wednesday, March 22nd at 4pm GMT / 12pm EDT / 9am PDT.  Here, we'll discuss Ubuntu 12.04 ESM and perform a few live upgrades of Ubuntu 12.04 LTS systems.

Cheers,
Dustin

Read more