Canonical Voices

UbuntuTouch

In the previous article "How to configure our Ubuntu Core application", we used the copy plugin to copy the configure file we want into the directory where it is needed. The implementation looked like this:

snapcraft.yaml

parts:  
 hello:  
  plugin: copy  
  files:  
    ./bin: bin  
 config:  
  plugin: dump  
  source: .  
  organize:  
    configure: meta/hooks/configure  

Since snapcraft 2.25 provides support for hooks, all we need to do is create a directory called snap/hooks at the root of our project and copy our configure file into it:

liuxg@liuxg:~/snappy/desktop/helloworld-hook$ tree -L 4
.
├── bin
│   ├── createfile
│   ├── createfiletohome
│   ├── echo
│   ├── env
│   ├── evil
│   └── sh
├── setup
│   ├── gui
│   │   ├── helloworld.desktop
│   │   └── helloworld.png
│   └── license.txt
├── snap
│   └── hooks
│       └── configure
└── snapcraft.yaml

With this file layout, snapcraft automatically copies the configure file into the meta/hooks directory. Here is the content of our prime directory:

liuxg@liuxg:~/snappy/desktop/helloworld-hook/prime$ tree -L 3
.
├── bin
│   ├── createfile
│   ├── createfiletohome
│   ├── echo
│   ├── env
│   ├── evil
│   └── sh
├── command-createfiletohome.wrapper
├── command-createfile.wrapper
├── command-env.wrapper
├── command-evil.wrapper
├── command-hello-world.wrapper
├── command-sh.wrapper
├── meta
│   ├── gui
│   │   ├── helloworld.desktop
│   │   └── helloworld.png
│   ├── hooks
│   │   └── configure
│   └── snap.yaml
└── snap
    └── hooks
        └── configure

Keep in mind that this feature is only available in snapcraft 2.25 and later. We can see that there is a file called configure under meta/hooks/.
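For reference, here is a minimal sketch of what such a configure hook can look like. This is an illustration rather than the actual hook from the project; it assumes the snap persists its settings to a file under $SNAP_DATA, and it uses snapctl, which is how hooks read options set with snap set:

#!/bin/sh
# Hypothetical configure hook: snapd runs this whenever "snap set" is called.
username="$(snapctl get username)"
password="$(snapctl get password)"

# A non-zero exit makes "snap set" fail with the message printed here.
if [ -z "$username" ]; then
    echo "username must not be empty" >&2
    exit 1
fi

# Persist the settings where the app can read them; $SNAP_DATA is writable.
printf 'username=%s\npassword=%s\n' "$username" "$password" > "$SNAP_DATA/settings"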

After installing this snap, we run the following command:

$ sudo snap set hello username=foo password=bar

We can read the value back with the following command:

$ sudo snap get hello username
foo

Clearly we get back the value we set. The full source code is at https://github.com/liu-xiao-guo/helloworld-hook. Another example can also be found under hooks in the snapcraft project.

Further reading: https://github.com/snapcore/snapcraft/blob/master/docs/hooks.md. As described there, we can also implement this in another way; see the pyhooks example for details. The advantage of that approach is that the configuration logic can be written in Python. The output looks like this:

liuxg@liuxg:~$ sudo snap set pyhooks fail=true
error: cannot perform the following tasks:
- Run configure hook of "pyhooks" snap (Failing as requested.)
liuxg@liuxg:~$ sudo snap set pyhooks fail=false


Further reading: https://snapcraft.io/docs/build-snaps/hooks




Author: UbuntuTouch, posted 2017/1/20 14:05:49 (original link)

Read more
UbuntuTouch

[Original] Using the Microsoft Azure cloud with Ubuntu Core

In today's tutorial we show how to use Azure IoT Hub to develop an application for Ubuntu Core. Azure IoT Hub provides a framework for managing our IoT devices and can present our data through preconfigured solutions. In this article we will connect our device to the remote monitoring preconfigured solution.


1) Provisioning the remote monitoring preconfigured solution


We can follow Microsoft's official documentation:


to create our preconfigured solution, ending up with a configuration like the following:



Here my solution is named "sensors".







If our IoT device uploads data, we can see it in the "Telemetry History" panel on the right, usually rendered as curves.
While creating the device, we need to note down the values shown in the screen below, since we will use them in our code:



We can also open azure.cn to view all the resources we have created:









The "connection string - primary key" shown here is very important for the programming that follows, so take special note of it.


2) Building a snap application written in C


In this section we show how to develop a client in C and package it as a snap. The application will communicate with the remote monitoring preconfigured solution we created in the previous section. We develop the snap on an Ubuntu 16.04 desktop. If you are not yet familiar with setting up a snap development environment, please refer to my earlier article to set yours up. As described there, we first install the required packages:

$ sudo apt-get install cmake gcc g++

Add the AzureIoT repository to the machine:
$ sudo add-apt-repository ppa:aziotsdklinux/ppa-azureiot
$ sudo apt-get update
Install the azure-iot-sdk-c-dev package:
$ sudo apt-get install -y azure-iot-sdk-c-dev
With that, the required packages are installed. For various reasons, compiling the sample produces some errors, so we must make the following manual change:
/usr/include/azureiot/inc$ sudo mv azure_c_shared_utility ..
This change ensures that no header files go missing during the build below.

Let's first look at a project I have already prepared:

snapcraft.yaml

name: remote-monitor 
version: '0.1' 
summary: This is a remote-monitor snap for azure
description: |
  This is a remote-monitor sample snap for azure

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  remote-monitor:
    command: bin/sample_app
    plugs: [network]

parts:
  remote:
    plugin: cmake
    source: ./src

This is a cmake project. Since the snap name is the same as the app name, we can run the application simply by typing remote-monitor. Before making any changes, open the remote_monitoring.c file and note the following code:

static const char* deviceId = "mydevice";
static const char* deviceKey = "[Device Key]";
static const char* hubName = "sensorsf8f61";
static const char* hubSuffix = "azure-devices.cn";

These placeholders are defined as:

static const char* deviceId = "[Device Id]";
static const char* deviceKey = "[Device Key]";
static const char* hubName = "[IoTHub Name]";
static const char* hubSuffix = "[IoTHub Suffix, i.e. azure-devices.net]";

We need to replace these values according to our own account. In practice, we can modify the following code in remote_monitoring.c:

while (1)
{
	unsigned char*buffer;
	size_t bufferSize;
	
	srand(time(NULL));
	int r = rand() % 50;  
	int r1 = rand() % 55;
	int r2 = rand() % 50;
	printf("r: %d, r1: %d, r2: %d\n", r, r1, r2);
	thermostat->Temperature = r;
	thermostat->ExternalTemperature = r1;
	thermostat->Humidity = r2;
	
	(void)printf("Sending sensor value Temperature = %d, Humidity = %d\r\n", thermostat->Temperature, thermostat->Humidity);

	if (SERIALIZE(&buffer, &bufferSize, thermostat->DeviceId, thermostat->Temperature, thermostat->Humidity, thermostat->ExternalTemperature) != CODEFIRST_OK)
	{
		(void)printf("Failed sending sensor value\r\n");
	}
 ...
}
to upload the data we need. Here we simply send some random values.

Note that "deviceKey" here is the "Device Key" shown in the screenshot in the previous section.
We type the following command in a terminal:
$ snapcraft
This produces the snap package for our project. We can install it with:

liuxg@liuxg:~/snappy/desktop/azure/remote-monitor$ sudo snap install remote-monitor_0.1_amd64.snap --dangerous
[sudo] password for liuxg: 
remote-monitor 0.1 installed

liuxg@liuxg:~$ snap list
Name            Version  Rev  Developer  Notes
azure           0.1      x1              -
core            16.04.1  714  canonical  -
hello           1.0      x1              devmode
hello-world     6.3      27   canonical  -
hello-xiaoguo   1.0      x1              -
remote-monitor  0.1      x2    
Clearly our remote-monitor has been installed successfully. We type the following command in the terminal:
liuxg@liuxg:~$ remote-monitor 
IoTHubClient accepted the message for delivery
r: 30, r1: 37, r2: 4
Sending sensor value Temperature = 30, Humidity = 4
IoTHubClient accepted the message for delivery
r: 45, r1: 23, r2: 35
Sending sensor value Temperature = 45, Humidity = 35
IoTHubClient accepted the message for delivery
r: 16, r1: 39, r2: 25
Sending sensor value Temperature = 16, Humidity = 25
IoTHubClient accepted the message for delivery
r: 16, r1: 33, r2: 14
Sending sensor value Temperature = 16, Humidity = 14
IoTHubClient accepted the message for delivery
r: 20, r1: 29, r2: 32

Clearly our client application keeps sending data to the Azure IoT Hub. We can view the received data at https://www.azureiotsuite.cn/.



Below it we can see how the maximum and minimum of the device data change over time.
If you want to run this application on an ARM device such as a Raspberry Pi, please refer to my article "How to install Ubuntu Core on a Raspberry Pi and build in the snap system". The installation is the same as described here; do give it a try.


3) Building a snap application written in Node.js


In this section we show how to develop our snap application with Node.js. We can refer to the article "Get started with Azure IoT Hub for Node.js". As that article explains, what interests us most is the third console application it introduces, SimulatedDevice.js.

SimulatedDevice.js

#!/usr/bin/env node

var clientFromConnectionString = require('azure-iot-device-amqp').clientFromConnectionString;
var Message = require('azure-iot-device').Message;

var connectionString = 'HostName=sensorsf8f61.azure-devices.cn;DeviceId=mydevice;SharedAccessKey={Device Key}';

var client = clientFromConnectionString(connectionString);

function printResultFor(op) {
  return function printResult(err, res) {
    if (err) console.log(op + ' error: ' + err.toString());
    if (res) console.log(op + ' status: ' + res.constructor.name);
  };
}

var connectCallback = function (err) {
  if (err) {
    console.log('Could not connect: ' + err.amqpError);
  } else {
    console.log('Client connected');

    // Create a message and send it to the IoT Hub every second
    setInterval(function(){
        var temp = 10 + (Math.random() * 4);
        var windSpeed = 10 + (Math.random() * 4);
        var data = JSON.stringify({ deviceId: 'mydevice', temp: temp, windSpeed: windSpeed});
        var message = new Message(data);
        console.log("Sending message: " + message.getData());
        client.sendEvent(message, printResultFor('send'));

    }, 5000);
  }
};

client.open(connectCallback);

Note that in the code above we must manually edit the following connectionString:
var connectionString = 'HostName=sensorsf8f61.azure-devices.cn;DeviceId=mydevice;SharedAccessKey={yourdevicekey}';
As described in the article, it is defined as:

var connectionString = 'HostName={youriothostname};DeviceId=myFirstNodeDevice;SharedAccessKey={yourdevicekey}';

We need to adapt the string above to the parameters we set up in the first section. You can refer to my project:

snapcraft.yaml

name: azure 
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: This is an azure snap app
description: |
  This is an azure client snap to send a message

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  azure:
    command: bin/send
    plugs: [network]

parts:
  node:
    plugin: nodejs
    source: .

Again we can build the corresponding package with the snapcraft command and install it.
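A hypothetical session for these two steps (the .snap file name follows from the yaml's name and version fields, assuming an amd64 build as in the C example):

$ snapcraft
$ sudo snap install azure_0.1_amd64.snap --dangerous

After installation, snap list shows the snap: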

liuxg@liuxg:~/snappy/desktop/azurenode-snap$ snap list
Name            Version  Rev  Developer  Notes
azure           0.1      x2              -
core            16.04.1  714  canonical  -
hello           1.0      x1              devmode
hello-world     6.3      27   canonical  -
hello-xiaoguo   1.0      x1              -
remote-monitor  0.1      x2         

We can run the azure command directly:

liuxg@liuxg:~/snappy/desktop/azurenode-snap$ azure
Client connected
Sending message: {"deviceId":"mydevice","temp":11.826184131205082,"windSpeed":11.893792165443301}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":10.594819721765816,"windSpeed":10.54138664342463}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":11.27814894542098,"windSpeed":10.962828870862722}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":13.068702490068972,"windSpeed":10.28670579008758}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":11.723079251125455,"windSpeed":12.173830625601113}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":12.595101269893348,"windSpeed":12.120747512206435}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":11.431507185101509,"windSpeed":11.76255036983639}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":12.488932724110782,"windSpeed":13.200456796213984}
send status: MessageEnqueued

We can view the received data at https://www.azureiotsuite.cn/:



Above we can see the curves for Temp and Wind Speed. Likewise, if you want to run this application on an ARM device such as a Raspberry Pi, refer to my article "How to install Ubuntu Core on a Raspberry Pi and build in the snap system" and try it out yourself.


Author: UbuntuTouch, posted 2017/1/19 16:59:05 (original link)

Read more
UbuntuTouch

When designing an application, we usually pick a temporary directory to store our files, such as /tmp on Linux. So which directory should a snap use for its files? The answer is that we can choose XDG_RUNTIME_DIR, although this ultimately depends on the developer's own choice.


Let's first look at an example I have prepared:

https://github.com/liu-xiao-guo/helloworld-fifo

Its snapcraft.yaml file is as follows:


name: hello
version: "1.0"
summary: The 'hello-world' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox
    * hello-world.evil - show how snappy sandboxes binaries
    * hello-world.sh   - enter interactive shell that runs in app sandbox
    * hello-world      - simply output text
grade: stable
confinement: strict
type: app  #it can be gadget or framework
icon: icon.png

apps:
 fifo:
   command: bin/fifo
 env:
   command: bin/env
 evil:
   command: bin/evil
 sh:
   command: bin/sh
 hello-world:
   command: bin/echo
 createfile:
   command: bin/createfile
 createfiletohome:
   command: bin/createfiletohome
 writetocommon:
   command: bin/writetocommon

parts:
 hello:
  plugin: dump
  source: .

Here we define an app called fifo. Its script is as follows:

#!/bin/bash

echo "Going to make a directory at: $XDG_RUNTIME_DIR"
mkdir -p $XDG_RUNTIME_DIR

echo "Create a file at the location..."
cd $XDG_RUNTIME_DIR
pwd
touch thisfile

if [ $? == 0 ]; then
	echo "The file is successfully created!"
else
	echo "The file is not successfully created!"
fi

The script first creates a directory and then creates a file inside it. The output is:

liuxg@liuxg:~$ hello.fifo 
Going to make a directory at: /run/user/1000/snap.hello
Create a file at the location...
/run/user/1000/snap.hello
The file is successfully created!

Clearly the app runs without any permission problems: this is a location we can fully access for reading and writing, and our application can use it for FIFO operations.
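Since the fifo script above only creates a plain file, here is a small hypothetical sketch that actually creates and uses a named pipe under XDG_RUNTIME_DIR (the pipe name "mypipe" is made up):

#!/bin/bash
# XDG_RUNTIME_DIR is readable and writable for a strictly confined snap.
mkdir -p "$XDG_RUNTIME_DIR"
pipe="$XDG_RUNTIME_DIR/mypipe"
[ -p "$pipe" ] || mkfifo "$pipe"

# Writer in the background, reader in the foreground.
echo "hello from the snap" > "$pipe" &
read line < "$pipe"
echo "Received: $line"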

We can also run the env app from the snap to dump all the environment variables:

liuxg@liuxg:~$ hello.env | grep XDG_RUNTIME_DIR
XDG_RUNTIME_DIR=/run/user/1000/snap.hello


Of course, we can also use the /tmp directory for temporary storage. This directory is unique to each snap application, i.e. every app gets its own private tmp directory, yet each accesses it simply as /tmp. On a desktop machine the actual location can be found under the host's /tmp, with a path looking something like /tmp/snap.1000_snap.hello.fifo_5BpMiB/tmp.

We can verify this with the following code:

fifo

#!/bin/bash

echo "Going to make a directory at: $XDG_RUNTIME_DIR"
mkdir -p $XDG_RUNTIME_DIR

echo "Create a file at the location..."
cd $XDG_RUNTIME_DIR
pwd
touch thisfile

if [ $? == 0 ]; then
	echo "The file is successfully created!"
else
	echo "The file is not successfully created!"
fi

cd /tmp
pwd
echo "Haha" > test.txt

if [ $? == 0 ]; then
	echo "The test.txt file is successfully created!"
else
	echo "The test.txt file is not successfully created!"
fi


Author: UbuntuTouch, posted 2017/2/4 11:37:04 (original link)

Read more
UbuntuTouch

Based on developer Cheng Lu's project https://github.com/dawndiy/electronic-wechat-snap, I made some small modifications and finally packaged electronic-wechat as a snap application.


1) Option 1


In this option, we download a stable prebuilt release and package it directly. The source of this project:

snapcraft.yaml


name: electronic-wechat
version: '1.4.0'
summary: A better WeChat on macOS and Linux. Built with Electron.
description: |
  Electronic WeChat is an unofficial WeChat client. A better WeChat on
  macOS and Linux. Built with Electron.
grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  electronic-wechat:
    command: desktop-launch $SNAP/wechat.wrapper
    plugs:
      - unity7
      - opengl
      - network
      - pulseaudio
      - home
      - browser-support
      - gsettings
      - x11

parts:
  electronic-wechat:
    plugin: dump
    source: https://github.com/geeeeeeeeek/electronic-wechat/releases/download/v1.4.0/linux-x64.tar.gz
    stage-packages:
      - libnss3
      - fontconfig-config
      - gnome-themes-standard
      - fonts-wqy-microhei
      - libasound2-data
      - fcitx-frontend-gtk2
      - overlay-scrollbar-gtk2
      - libatk-adaptor
      - libcanberra-gtk-module
    filesets:
      no-easy-install-files:
        - -usr/sbin/update-icon-caches
        - -README.md
    stage:
      - $no-easy-install-files
    prime:
      - $no-easy-install-files

  wechat-copy:
    plugin: dump
    source: .
    filesets:
      wechat.wrapper: wechat.wrapper
    after: 
      - electronic-wechat
      - desktop-gtk2

Here we download the prebuilt stable release directly from https://github.com/geeeeeeeeek/electronic-wechat/releases/download/v1.4.0/linux-x64.tar.gz and package it. The dump plugin takes care of the installation for us.

2) Option 2


We can also build from the latest code and package that. The source of this project is at:


snapcraft.yaml


name: electronic-wechat
version: '1.4.0'
summary: A better WeChat on macOS and Linux. Built with Electron.
description: |
  Electronic WeChat is an unofficial WeChat client. A better WeChat on
  macOS and Linux. Built with Electron.
grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  electronic-wechat:
    command: desktop-launch $SNAP/wechat.wrapper
    plugs:
      - unity7
      - opengl
      - network
      - pulseaudio
      - home
      - browser-support
      - gsettings
      - x11

parts:
  electronic-wechat:
    plugin: nodejs
    source-type: git
    source: https://github.com/geeeeeeeeek/electronic-wechat/
    source-branch: production
    npm-run:
      - build:linux
    install: cp -r dist $SNAPCRAFT_PART_INSTALL
    stage-packages:
      - libnss3
      - fontconfig-config
      - gnome-themes-standard
      - fonts-wqy-microhei
      - libasound2-data
      - fcitx-frontend-gtk2
      - overlay-scrollbar-gtk2
      - libatk-adaptor
      - libcanberra-gtk-module
    filesets:
      no-easy-install-files:
        - -usr/sbin/update-icon-caches
        - -README.md
    stage:
      - $no-easy-install-files
    prime:
      - $no-easy-install-files

  wechat-copy:
    plugin: dump
    source: .
    filesets:
      wechat.wrapper: wechat.wrapper
    after: 
      - electronic-wechat
      - desktop-gtk2

The latest code can be found at https://github.com/geeeeeeeeek/electronic-wechat/. We use the nodejs plugin to do the packaging. Here we use snapcraft's scriptlets to override the install step of the nodejs plugin:

   install: cp -r dist $SNAPCRAFT_PART_INSTALL

The npm-run entry above could even be removed and replaced with a snapcraft scriptlet as well:

build: npm run build:linux

We type the following command in the project's root directory:

$ snapcraft

It will finally generate the .snap file we need, which we can install on our system:

$ sudo snap install electronic-wechat_1.4.0_amd64.snap --dangerous

liuxg@liuxg:~$ snap list
Name                 Version  Rev  Developer  Notes
core                 16.04.1  888  canonical  -
electronic-wechat    1.4.0    x1              -
hello-world          6.3      27   canonical  -
hello-xiaoguo        1.0      x1              -
snappy-debug         0.28     26   canonical  -
ubuntu-app-platform  1        22   canonical  -

We can see that electronic-wechat has been installed successfully on my machine. Running the application:








Author: UbuntuTouch, posted 2017/2/3 16:12:02 (original link)

Read more
UbuntuTouch

In today's article we show how to package an HTML5 application as a snap. There are many HTML5 applications out there, but how do we package them as snaps? In particular, many finished HTML5 games exist from Ubuntu Phone development. With the method described today, we can package a previous click-based HTML5 application directly as a snap and run it on an Ubuntu desktop. Of course, the method is not limited to HTML5 applications developed for Ubuntu Phone; it also applies to other HTML5 applications.




1) The HTML5 application


First, let's look at an HTML5 application I previously made for Ubuntu Phone. Its address is:


You can get the code as follows:

bzr branch lp:~liu-xiao-guo/debiantrial/wuziqi

In this application, all we care about is the content of its www directory. The full list of files in the project is:

$ tree
.
├── manifest.json
├── wuziqi.apparmor
├── wuziqi.desktop
├── wuziqi.png
├── wuziqi.ubuntuhtmlproject
└── www
    ├── css
    │   └── app.css
    ├── images
    │   ├── b.png
    │   └── w.png
    ├── index.html
    └── js
        └── app.js

We want the contents of www to end up packaged into our snap application.

2) Packaging the HTML5 application as a snap


To package our HTML5 application as a snap, we can type the following command in the project's root directory:

$ snapcraft init

The command above generates a new snap directory in the current directory containing a file called snapcraft.yaml. This is really just a template; we package our application by editing this snapcraft.yaml file. After running the command, the file layout is:

$ tree
.
├── manifest.json
├── snap
│   └── snapcraft.yaml
├── wuziqi.apparmor
├── wuziqi.desktop
├── wuziqi.png
├── wuziqi.ubuntuhtmlproject
└── www
    ├── css
    │   └── app.css
    ├── images
    │   ├── b.png
    │   └── w.png
    ├── index.html
    └── js
        └── app.js

We edit the snapcraft.yaml file until it finally becomes:

snapcraft.yaml


name: wuziqi
version: '0.1'
summary: Wuziqi Game. It shows how to snap a html5 app into a snap
description: |
  This is a Wuziqi (five-in-a-row) game. There are two kinds of stones: white and black.
  Two players take turns, and the first to put stones of the same color in a line wins.

grade: stable
confinement: strict

apps:
  wuziqi:
    command: webapp-launcher www/index.html
    plugs:
      - browser-sandbox
      - camera
      - mir
      - network
      - network-bind
      - opengl
      - pulseaudio
      - screen-inhibit-control
      - unity7

plugs:
  browser-sandbox:
    interface: browser-support
    allow-sandbox: false
  platform:
    interface: content
    content: ubuntu-app-platform1
    target: ubuntu-app-platform
    default-provider: ubuntu-app-platform

parts:
  webapp:
    after: [ webapp-helper, desktop-ubuntu-app-platform ]
    plugin: dump
    source: .
    stage-packages:
      - ubuntu-html5-ui-toolkit
    organize:
      'usr/share/ubuntu-html5-ui-toolkit/': www/ubuntu-html5-ui-toolkit
    prime:
      - usr/*
      - www/*

The explanation is as follows:
  • Since this is an HTML5 application, we can launch it with webapp-helper; we use the remote part called webapp-helper
  • On Ubuntu phones the web stack underneath is implemented with Qt, so we must also ship Qt with our application. Because the Qt libraries are large, we obtain them from the ubuntu-app-platform snap through the platform interface it provides. See https://developer.ubuntu.com/en/blog/2016/11/16/snapping-qt-apps/
  • Our index.html contains many lines such as <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>. These clearly depend on ubuntu-html5-ui-toolkit, so that package must be shipped in the snap as well; we do this by installing ubuntu-html5-ui-toolkit via stage-packages
  • We use organize to relocate the installed ubuntu-html5-ui-toolkit directory into the project's www directory so that index.html can reference it
Now let's look again at our original index.html file:

index.html

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>An Ubuntu HTML5 application</title>
    <meta name="description" content="An Ubuntu HTML5 application">
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=0">

    <!-- Ubuntu UI Style imports - Ambiance theme -->
    <link href="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/css/appTemplate.css" rel="stylesheet" type="text/css" />

    <!-- Ubuntu UI javascript imports - Ambiance theme -->
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/fast-buttons.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/buttons.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/dialogs.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/page.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/pagestacks.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/tab.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/tabs.js"></script>

    <!-- Application script -->
    <script src="js/app.js"></script>
    <link href="css/app.css" rel="stylesheet" type="text/css" />

  </head>

  <body>
        <div class='test'>
          <div>
              <img src="images/w.png" alt="white" id="chess">
          </div>
          <div>
              <button id="start">Start</button>
          </div>
        </div>

        <div>
            <canvas width="640" height="640" id="canvas" onmousedown="play(event)">
                 Your Browser does not support HTML5 canvas
            </canvas>
        </div>
  </body>
</html>

As the code above shows, the files referenced in index.html start from /usr/share. In a confined snap this path is not accessible (an application can only access files installed under its own project root), so we must change these paths: the /usr/share/ references above have to become paths relative to the project's www directory:

    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/fast-buttons.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/buttons.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/dialogs.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/page.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/pagestacks.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/tab.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/tabs.js"></script>

This is why the snapcraft.yaml we saw earlier contains:

parts:
  webapp:
    after: [ webapp-helper, desktop-ubuntu-app-platform ]
    plugin: dump
    source: .
    stage-packages:
      - ubuntu-html5-ui-toolkit
    organize:
      'usr/share/ubuntu-html5-ui-toolkit/': www/ubuntu-html5-ui-toolkit
    prime:
      - usr/*
      - www/*

Above, organize relocates the installed ubuntu-html5-ui-toolkit directory into the project's www directory so that its files can be used directly by our project. After packaging, the file layout looks like this:

$ tree -L 3
.
├── bin
│   ├── desktop-launch
│   └── webapp-launcher
├── command-wuziqi.wrapper
├── etc
│   └── xdg
│       └── qtchooser
├── flavor-select
├── meta
│   ├── gui
│   │   ├── wuziqi.desktop
│   │   └── wuziqi.png
│   └── snap.yaml
├── snap
├── ubuntu-app-platform
├── usr
│   ├── bin
│   │   └── webapp-container
│   └── share
│       ├── doc
│       ├── ubuntu-html5-theme -> ubuntu-html5-ui-toolkit
│       └── webbrowser-app
└── www
    ├── css
    │   └── app.css
    ├── images
    │   ├── b.png
    │   └── w.png
    ├── index.html
    ├── js
    │   ├── app.js
    │   └── jquery.min.js
    └── ubuntu-html5-ui-toolkit
        └── 0.1

Above we can see that ubuntu-html5-ui-toolkit now sits under the www directory and can be used directly by our project.

We type the following command in the project's root directory:

$ snapcraft

If all goes well, we get a .snap file, which we can install with:

$ sudo snap install wuziqi_0.1_amd64.snap --dangerous

After installation, since we use content sharing to access the Qt libraries, we must also install and connect the following snap:

$ snap install ubuntu-app-platform 
$ snap connect wuziqi:platform ubuntu-app-platform:platform

After running the commands above, we can see:

$ snap interfaces
Slot                          Plug
:account-control              -
:alsa                         -
:avahi-observe                -
:bluetooth-control            -
:browser-support              wuziqi:browser-sandbox
:camera                       -
:core-support                 -
:cups-control                 -
:dcdbas-control               -
:docker-support               -
:firewall-control             -
:fuse-support                 -
:gsettings                    -
:hardware-observe             -
:home                         -
:io-ports-control             -
:kernel-module-control        -
:libvirt                      -
:locale-control               -
:log-observe                  snappy-debug
:lxd-support                  -
:modem-manager                -
:mount-observe                -
:network                      downloader,wuziqi
:network-bind                 socketio,wuziqi
:network-control              -
:network-manager              -
:network-observe              -
:network-setup-observe        -
:ofono                        -
:opengl                       wuziqi
:openvswitch                  -
:openvswitch-support          -
:optical-drive                -
:physical-memory-control      -
:physical-memory-observe      -
:ppp                          -
:process-control              -
:pulseaudio                   wuziqi
:raw-usb                      -
:removable-media              -
:screen-inhibit-control       wuziqi
:shutdown                     -
:snapd-control                -
:system-observe               -
:system-trace                 -
:time-control                 -
:timeserver-control           -
:timezone-control             -
:tpm                          -
:uhid                         -
:unity7                       wuziqi
:upower-observe               -
:x11                          -
ubuntu-app-platform:platform  wuziqi
-                             wuziqi:camera
-                             wuziqi:mir

Of course our application also declares some redundant plugs, such as camera and mir above. We can see how the wuziqi snap is connected to the core and ubuntu-app-platform snaps. Once we are sure everything is connected, we can type the following command on the command line:

$ wuziqi

It launches our application. Of course, we can also launch it from the desktop dash:






Author: UbuntuTouch, posted 2017/2/13 10:16:55 (original link)

Read more
UbuntuTouch

In the previous article "How to package a qmake-based Ubuntu Phone application as a snap" we saw how to package a qmake project as a snap application. In today's tutorial we create a project with Qt Creator and finally package it as a snap. Along the way we get to know snapcraft's scriptlets.


1) Creating a Qt hello world project


First, we open Qt Creator:







With that we have created the simplest possible hello world application.


2) Creating the snapcraft.yaml file


We type the following command in the project's root directory:
$ snapcraft init
The command above generates a directory called snap in the current directory (as of snapcraft 2.26; earlier versions did not create this snap directory).

liuxg@liuxg:~/snappy/desktop/qtapp$ tree -L 3
.
├── main.cpp
├── mainwindow.cpp
├── mainwindow.h
├── mainwindow.ui
├── qtapp.pro
├── qtapp.pro.user
├── README.md
└── snap
    └── snapcraft.yaml

The full file layout is shown above. We can edit and modify this snapcraft.yaml file:

snapcraft.yaml

name: qthello 
version: '0.1' 
summary: a demo for qt hello app
description: |
  This is a qt app demo

grade: stable 
confinement: strict 

apps:
  qthello:
    command: desktop-launch $SNAP/opt/myapp/qtapp
    plugs: [home, unity7, x11]

parts:
  project:
    plugin: qmake
    source: .
    qt-version: qt5
    project-files: [qtapp.pro]
    install: |
      install -d $SNAPCRAFT_PART_INSTALL/opt/myapp
      install qtapp $SNAPCRAFT_PART_INSTALL/opt/myapp/qtapp

  integration:
    plugin: nil
    stage-packages:
     - libc6
     - libstdc++6
     - libc-bin
    after: [desktop-qt5]

Here we must point out the following:

    install: |
      install -d $SNAPCRAFT_PART_INSTALL/opt/myapp
      install qtapp $SNAPCRAFT_PART_INSTALL/opt/myapp/qtapp

Since the original qtapp.pro file contains no rules describing how to install our qtapp application file, we use the install scriptlet above to install the application. According to the description in Scriptlets:

“install”

The install scriptlet is triggered after the build step of a plugin.

This script is executed automatically after the build step. It first creates a directory called myapp, then installs the qtapp binary from the build directory into myapp. This finally forms our snap package.

We install the qthello application and run it:






In this snap we bundle all the Qt library dependencies into one package, so the final snap is quite large. If you want to reduce the size of this Qt application, refer to the article "Using the platform interface provided by ubuntu-app-platform to reduce the size of Qt applications".





Author: UbuntuTouch, posted 2017/2/3 14:25:12 (original link)

Read more
UbuntuTouch

[Original] Ubuntu Core configuration

The core snap provides a number of configuration options that allow us to customize how the system runs. As with any other snap, the core snap's configuration options are set with the snap set command:

$ snap set core option=value


The current value of an option can be retrieved with the snap get command:

$ snap get core option
value


Below we demonstrate how to disable the system's ssh service:

Warning: disabling ssh removes the default way of accessing an Ubuntu Core system. If we do not provide another way to manage or log in to the system, the device can end up bricked. It is recommended that you first set a username and password on the system in case you lock yourself out; any other way into the system is fine too. Once inside the Ubuntu Core system, we can use the following command to set a password, so that we can log in with a keyboard and monitor.

$ sudo passwd <ubuntu-one id>
<password>

The option accepts the following values:

  • false (default): the ssh service is enabled and acts on connection requests directly
  • true: the ssh service is disabled. Existing ssh connections are kept, but any further connections will be impossible
$ snap set core service.ssh.disable=true
Once we have executed the command above, any further access is refused:

$ ssh liu-xiao-guo@192.168.1.106
ssh: connect to host 192.168.1.106 port 22: Connection refused

$ snap set core service.ssh.disable=false
Running the command above lets us connect over ssh again.
We can get the current value with the following command:

$ snap get core service.ssh.disable
false

Further reading: https://docs.ubuntu.com/core/en/reference/core-configuration

Author: UbuntuTouch, posted 2017/2/15 10:29:38 (original link)

Read more
UbuntuTouch

[Repost] Qt on Ubuntu Core

Are you working on an IoT, point of sale or digital signage device? Are you looking for a secure, supported solution to build it on? Do you have needs for graphic performance and complex UI? Did you know you could build great solutions using Qt on Ubuntu and Ubuntu Core? 

To find out how, why not join this upcoming webinar? You will learn the following:

- Introduction to Ubuntu and Qt in IoT and digital signage
- Using Ubuntu and Ubuntu Core in your device
- Packaging your Qt app for easy application distribution 
- Dealing with hardware variants and GPUs


https://www.brighttalk.com/webcast/6793/246523?utm_source=China&utm_campaign=3)%20Device_FY17_IOT_Vertical_DS_Webinar_Qt&utm_medium=Social

Author: UbuntuTouch, posted 2017/2/27 13:07:38 (original link)

Read more
Stéphane Graber

LXD logo

What’s Ubuntu Core?

Ubuntu Core is a version of Ubuntu that’s fully transactional and entirely based on snap packages.

Most of the system is read-only. All installed applications come from snap packages and all updates are done using transactions. Meaning that should anything go wrong at any point during a package or system update, the system will be able to revert to the previous state and report the failure.

The current release of Ubuntu Core is called series 16 and was released in November 2016.

Note that on Ubuntu Core systems, only snap packages using confinement can be installed (no “classic” snaps) and that a good number of snaps will not fully work in this environment or will require some manual intervention (creating users and groups, …). Ubuntu Core gets improved on a weekly basis as new releases of snapd and the “core” snap are put out.

Requirements

As far as LXD is concerned, Ubuntu Core is just another Linux distribution. That being said, snapd does require unprivileged FUSE mounts and AppArmor namespacing and stacking, so you will need the following:

  • An up to date Ubuntu system using the official Ubuntu kernel
  • An up to date version of LXD

Creating an Ubuntu Core container

The Ubuntu Core images are currently published on the community image server.
You can launch a new container with:

stgraber@dakara:~$ lxc launch images:ubuntu-core/16 ubuntu-core
Creating ubuntu-core
Starting ubuntu-core

The container will take a few seconds to start, first executing a first-stage loader that determines which read-only image to use and sets up the writable layers. You don’t want to interrupt the container in that stage, and “lxc exec” will likely just fail as pretty much nothing is available at that point.

Seconds later, “lxc list” will show the container IP address, indicating that it’s booted into Ubuntu Core:

stgraber@dakara:~$ lxc list
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+
|     NAME    |  STATE  |          IPV4        |                      IPV6                    |    TYPE    | SNAPSHOTS |
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+
| ubuntu-core | RUNNING | 10.90.151.104 (eth0) | 2001:470:b368:b2b5:216:3eff:fee1:296f (eth0) | PERSISTENT | 0         |
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+

You can then interact with that container the same way you would any other:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap list
Name       Version     Rev  Developer  Notes
core       16.04.1     394  canonical  -
pc         16.04-0.8   9    canonical  -
pc-kernel  4.4.0-45-4  37   canonical  -
root@ubuntu-core:~#

Updating the container

If you’ve been tracking the development of Ubuntu Core, you’ll know that those versions above are pretty old. That’s because the disk images that are used as the source for the Ubuntu Core LXD images are only refreshed every few months. Ubuntu Core systems will automatically update once a day and then automatically reboot to boot onto the new version (and revert if this fails).

If you want to immediately force an update, you can do it with:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap refresh
pc-kernel (stable) 4.4.0-53-1 from 'canonical' upgraded
core (stable) 16.04.1 from 'canonical' upgraded
root@ubuntu-core:~# snap version
snap 2.17
snapd 2.17
series 16
root@ubuntu-core:~#

And then reboot the system and check the snapd version again:

root@ubuntu-core:~# reboot
root@ubuntu-core:~# 

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap version
snap 2.21
snapd 2.21
series 16
root@ubuntu-core:~#

You can get a history of all snapd interactions with:

stgraber@dakara:~$ lxc exec ubuntu-core snap changes
ID  Status  Spawn                 Ready                 Summary
1   Done    2017-01-31T05:14:38Z  2017-01-31T05:14:44Z  Initialize system state
2   Done    2017-01-31T05:14:40Z  2017-01-31T05:14:45Z  Initialize device
3   Done    2017-01-31T05:21:30Z  2017-01-31T05:22:45Z  Refresh all snaps in the system

Installing some snaps

Let’s start with the simplest snaps of all, the good old Hello World:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install hello-world
hello-world 6.3 from 'canonical' installed
root@ubuntu-core:~# hello-world
Hello World!

And then move on to something a bit more useful:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install nextcloud
nextcloud 11.0.1snap2 from 'nextcloud' installed

Then hit your container over HTTP and you’ll get to your newly deployed Nextcloud instance.

If you feel like testing the latest LXD straight from git, you can do so with:

stgraber@dakara:~$ lxc config set ubuntu-core security.nesting true
stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install lxd --edge
lxd (edge) git-c6006fb from 'canonical' installed
root@ubuntu-core:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]: 

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]? 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Would you like stale cached images to be updated automatically (yes/no) [default=yes]? 
Would you like to create a new network bridge (yes/no) [default=yes]? 
What should the new bridge be called [default=lxdbr0]? 
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
LXD has been successfully configured.

And because container inception never gets old, lets run Ubuntu Core 16 inside Ubuntu Core 16:

root@ubuntu-core:~# lxc launch images:ubuntu-core/16 nested-core
Creating nested-core
Starting nested-core 
root@ubuntu-core:~# lxc list
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
|    NAME     |  STATE  |         IPV4        |                       IPV6                    |    TYPE    | SNAPSHOTS |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| nested-core | RUNNING | 10.71.135.21 (eth0) | fd42:2861:5aad:3842:216:3eff:feaf:e6bd (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+

Conclusion

If you ever wanted to try Ubuntu Core, this is a great way to do it. It’s also a great tool for snap authors to make sure their snap is fully self-contained and will work in all environments.

Ubuntu Core is a great fit for environments where you want to ensure that your system is always up to date and is entirely reproducible. This does come with a number of constraints that may or may not work for you.

And lastly, a word of warning. Those images are considered as good enough for testing, but aren’t officially supported at this point. We are working towards getting fully supported Ubuntu Core LXD images on the official Ubuntu image server in the near future.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Alan Griffiths

mircade-snap

mircade, miral-kiosk and snapcraft.io

mircade is a proof-of-concept game launcher for use with miral-kiosk. It looks for installed games, works out if they use a toolkit supported by Mir and allows the user to play them.

miral-kiosk is a proof-of-concept Mir server for kiosk style use. It has very basic window management designed to support a single fullscreen application.

snapcraft.io is a packaging system that allows you to package applications (as “snaps”) in a way that runs on multiple Linux distributions. You first need to have snapcraft installed on your target system (I used a dragonboard with Ubuntu Core as described in my previous article).

The mircade snap takes mircade and a few open games from the Ubuntu archive to create an “arcade style” snap for playing these games.

Setting up the Mir snaps

The mircade snap is based on the “Mir Kiosk Snaps” described here.

Mir support on Ubuntu Core is currently work in progress, so the exact incantations for installing the mir-libs and mir-kiosk snaps to work with mircade vary slightly from the referenced articles (to work around bugs) and will (hopefully) change in the near future. Here’s what I found works at the time of writing:

$ snap install mir-libs --channel edge
$ snap install mir-kiosk --channel edge --devmode
$ snap connect mir-kiosk:mir-libs mir-libs:mir-libs
$ sudo reboot

Installing the mircade-snap

I found that installing the mircade snap sometimes ran out of space on the dragonboard /tmp filesystem. So…

$ TMPDIR=/writable/ snap install mircade --devmode --channel=edge
$ snap connect mircade:mir-libs mir-libs:mir-libs
$ snap disconnect mircade:mir;snap connect mircade:mir mir-kiosk:mir
$ snap disable mircade;sudo /usr/lib/snapd/snap-discard-ns mircade;snap enable mircade

Using mircade on the dragonboard

At this point you should see an orange screen with the name of a game. You can change the game by touching/clicking the top or bottom of the screen (or using the arrow keys). Start the current game by touching/clicking the middle of the screen or pressing enter.

Read more
Christian Brauner

lxc exec vs ssh

Recently, I’ve implemented several improvements for lxc exec. In case you didn’t know, lxc exec is LXD‘s client tool that uses the LXD client api to talk to the LXD daemon and execute any program the user might want. Here is a small example of what you can do with it:

asciicast

One of our main goals is to make lxc exec feel as similar to ssh as possible, since ssh is the standard way of running commands remotely, interactively or non-interactively. Making lxc exec behave nicely was tricky.

1. Handling background tasks

A long-standing problem was certainly how to correctly handle background tasks. Here’s an asciinema illustration of the problem with a pre LXD 2.7 instance:

asciicast

What you can see there is that putting a task in the background will lead to lxc exec not being able to exit. A lot of sequences of commands can trigger this problem:

chb@conventiont|~
> lxc exec zest1 bash
root@zest1:~# yes &
y
y
y
.
.
.

Nothing would save you now. yes will simply write to stdout till the end of time as quickly as it can…
The root of the problem lies with stdout being kept open which is necessary to ensure that any data written by the process the user has started is actually read and sent back over the websocket connection we established.
As you can imagine this becomes a major annoyance when you e.g. run a shell session in which you want to run a process in the background and then quickly want to exit. Sorry, you are out of luck. Well, you were.
The first, and naive, approach is obviously to simply close stdout as soon as you detect that the foreground program (e.g. the shell) has exited. Not quite as good an idea as one might think… The problem becomes obvious when you then run quickly executing programs like:

lxc exec -- ls -al /usr/lib

where the lxc exec process (and the associated forkexec process (Don’t worry about it now. Just remember that Go + setns() are not on speaking terms…)) exits before all buffered data in stdout was read. In this case you will cause truncated output and no one wants that. After a few approaches to the problem that involved disabling pty buffering (wasn’t pretty, I tell you, and it also didn’t work predictably) and other weird ideas, I managed to solve this by employing a few poll() “tricks” (In some sense of the word “trick”.). Now you can finally run background tasks and cleanly exit. To wit:
asciicast

2. Reporting exit codes caused by signals

ssh is a wonderful tool. One thing, however, I never really liked was the fact that when the command run by ssh received a signal, ssh would always report -1 aka exit code 255. This is annoying when you’d like to know what signal caused the program to terminate. This is why I recently implemented the standard shell convention of reporting signal-caused exits as 128 + n, where n is the signal number that caused the executing program to exit. For example, on SIGKILL you would see 128 + SIGKILL = 137 (Calculating the exit codes for other deadly signals is left as an exercise to the reader.). So you can do:

chb@conventiont|~
> lxc exec zest1 sleep 100

Now, send SIGKILL to the executing program (Not to lxc exec itself, as SIGKILL is not forwardable.):

kill -KILL $(pidof sleep 100)

and finally retrieve the exit code for your program:

chb@conventiont|~
> echo $?
137

Voila. This obviously only works nicely when a) the exit code doesn’t breach the 8-bit wall-of-computing and b) when the executing program doesn’t use 137 to indicate success (Which would be… interesting(?).). Both arguments don’t seem too convincing to me. The former because most deadly signals should not breach the range. The latter because (i) that’s the user’s problem, (ii) these exit codes are actually reserved (I think.), (iii) you’d have the same problem running the program locally or otherwise.
The main advantage I see in this is the ability to report back fine-grained exit statuses for executing programs. Note, by no means can we report back all instances where the executing program was killed by a signal, e.g. when your program handles SIGTERM and exits cleanly there’s no easy way for LXD to detect this and report back that this program was killed by signal. You will simply receive success aka exit code 0.
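As a quick sanity check of the 128 + n rule with another deadly signal, SIGTERM (15) should yield 143. A hypothetical session, assuming the sleep process is visible to pidof from where you run it:

chb@conventiont|~
> lxc exec zest1 sleep 100 &
> kill -TERM "$(pidof sleep)"   # signal the executing program, not lxc exec
> wait $!; echo $?
143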

3. Forwarding signals

This is probably the least interesting (or maybe it isn’t, no idea) but I found it quite useful. As you saw in the SIGKILL case before, I was explicit in pointing out that one must send SIGKILL to the executing program not to the lxc exec command itself. This is due to the fact that SIGKILL cannot be handled in a program. The only thing the program can do is die… like right now… this instance… sofort… (You get the idea…). But a lot of other signals SIGTERM, SIGHUP, and of course SIGUSR1 and SIGUSR2 can be handled. So when you send signals that can be handled to lxc exec instead of the executing program, newer versions of LXD will forward the signal to the executing process. This is pretty convenient in scripts and so on.
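To see the forwarding in action, the same experiment works when you signal lxc exec itself instead of the executing program (again a hypothetical session):

chb@conventiont|~
> lxc exec zest1 sleep 100 &
> kill -TERM $!                 # TERM is forwardable, so LXD passes it on
> wait $!; echo $?
143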

In any case, I hope you found this little lxc exec post/rant useful. Enjoy LXD, it’s a crazy beautiful beast to play with. Give it a try online https://linuxcontainers.org/lxd/try-it/ and for all you developers out there: check out https://github.com/lxc/lxd and send us patches.
Read more

Stéphane Graber

LXD logo

Introduction

So far all my blog posts about LXD have been assuming an Ubuntu host with LXD installed from packages, as a snap or from source.

But LXD is perfectly happy to run on any Linux distribution which has the LXC library available (version 2.0.0 or higher), a recent kernel (3.13 or higher) and some standard system utilities available (rsync, dnsmasq, netcat, various filesystem tools, …).

In fact, you can find packages in the following Linux distributions (let me know if I missed one):

We have also had several reports of LXD being used on Centos and Fedora, where users built it from source using the distribution’s liblxc (or in the case of Centos, from an external repository).

One distribution we’ve seen a lot of requests for is Debian. A native Debian package has been in the works for a while now and the list of missing dependencies has been shrinking quite a lot lately.

But there is an easy alternative that will get you a working LXD on Debian today!
Use the same LXD snap package as I mentioned in a previous post, but on Debian!

Requirements

  • A Debian “testing” (stretch) system
  • The stock Debian kernel without apparmor support
  • If you want to use ZFS with LXD, then the “contrib” repository must be enabled and the “zfsutils-linux” package installed on the system

Installing snapd and LXD

Getting the latest stable LXD onto an up to date Debian testing system is just a matter of running:

apt install snapd
snap install lxd

If you never used snapd before, you’ll have to either logout and log back in to update your PATH, or just update your existing one with:

. /etc/profile.d/apps-bin-path.sh

And now it’s time to configure LXD with:

root@debian:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

And finally, you can start using LXD:

root@debian:~# lxc launch images:debian/stretch debian
Creating debian
Starting debian

root@debian:~# lxc launch ubuntu:16.04 ubuntu
Creating ubuntu
Starting ubuntu

root@debian:~# lxc launch images:centos/7 centos
Creating centos
Starting centos

root@debian:~# lxc launch images:archlinux archlinux
Creating archlinux
Starting archlinux

root@debian:~# lxc launch images:gentoo gentoo
Creating gentoo
Starting gentoo

And enjoy your fresh collection of Linux distributions:

root@debian:~# lxc list
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
|   NAME    |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| archlinux | RUNNING | 10.250.240.103 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe40:7b1b (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| centos    | RUNNING | 10.250.240.109 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe87:64ff (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| debian    | RUNNING | 10.250.240.111 (eth0) | fd42:46d0:3c40:cca7:216:3eff:feb4:e984 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| gentoo    | RUNNING | 10.250.240.164 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe27:10ca (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| ubuntu    | RUNNING | 10.250.240.80 (eth0)  | fd42:46d0:3c40:cca7:216:3eff:fedc:f0a6 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+

Conclusion

The availability of snapd on other Linux distributions makes it a great way to get the latest LXD running on your distribution of choice.

There are still a number of problems with the LXD snap which may or may not be a blocker for your own use. The main ones at this point are:

  • All containers are shutdown and restarted on upgrades
  • No support for bash completion

If you want non-root users to have access to the LXD daemon, simply make sure that a “lxd” group exists on your system, add whoever you want to manage LXD into that group, then restart the LXD daemon.
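For example (a sketch; replace the user name, and note that the exact restart command for the snap’s daemon may differ on your system):

root@debian:~# groupadd --system lxd      # only if the group doesn't exist yet
root@debian:~# usermod -aG lxd someuser   # "someuser" is a placeholder
root@debian:~# systemctl restart snap.lxd.daemon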

Extra information

The snapd website can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

LXD logo

Introduction

For those who haven’t heard of Kubernetes before, it’s defined by the upstream project as:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

It is important to note the “applications” part in there. Kubernetes deploys a set of single application containers and connects them together. Those containers will typically run a single process and so are very different from the full system containers that LXD itself provides.

This blog post will be very similar to one I published last year on running OpenStack inside a LXD container. Similarly to the OpenStack deployment, we’ll be using conjure-up to setup a number of LXD containers and eventually run the Docker containers that are used by Kubernetes.

Requirements

This post assumes you’ve got a working LXD setup, providing containers with network access and that you have at least 10GB of space for the containers to use and at least 4GB of RAM.

Outside of configuring LXD itself, you will also need to bump some kernel limits with the following commands:

sudo sysctl fs.inotify.max_user_instances=1048576  
sudo sysctl fs.inotify.max_queued_events=1048576  
sudo sysctl fs.inotify.max_user_watches=1048576  
sudo sysctl vm.max_map_count=262144

Setting up the container

Similarly to OpenStack, the conjure-up deployed version of Kubernetes expects a lot more privileges and resource access than LXD would typically provide. As a result, we have to create a privileged container, with nesting enabled and with AppArmor disabled.

This means that not very much of LXD’s security features will still be in effect on this container. Depending on how you feel about this, you may choose to run this on a different machine.

Note that all of this however remains better than instructions that would have you install everything directly on your host machine. If only by making it very easy to remove it all in the end.

lxc init ubuntu:16.04 kubernetes -c security.privileged=true -c security.nesting=true -c linux.kernel_modules=ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
printf "lxc.cap.drop=\nlxc.aa_profile=unconfined\n" | lxc config set kubernetes raw.lxc -
lxc config device add kubernetes mem unix-char path=/dev/mem
lxc start kubernetes

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get Kubernetes going.

lxc exec kubernetes -- apt update
lxc exec kubernetes -- apt dist-upgrade -y
lxc exec kubernetes -- apt install squashfuse -y
lxc exec kubernetes -- ln -s /bin/true /usr/local/bin/udevadm
lxc exec kubernetes -- snap install conjure-up --classic

And the last setup step is to configure LXD networking inside the container.
Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)
lxc exec kubernetes -- lxd init

And that’s it for the container configuration itself, now we can deploy Kubernetes!

Deploying Kubernetes with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy Kubernetes.
This is a nice, user friendly, tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec kubernetes -- sudo -u ubuntu -i conjure-up
  • Select “Kubernetes Core”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy Kubernetes. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Once the deployment is done, a few post-install steps will appear. This will import some initial images, setup SSH authentication, configure networking and finally giving you the IP address of the dashboard.

Interact with your new Kubernetes

We can ask juju to deploy a new Kubernetes workload, in this case 5 instances of “microbot”:

root@kubernetes:~# sudo -u ubuntu -i
ubuntu@kubernetes:~$ juju run-action kubernetes-worker/0 microbot replicas=5
Action queued with id: 1d1e2997-5238-4b86-873c-ad79660db43f

You can then grab the service address from the Juju action output:

ubuntu@kubernetes:~$ juju show-action-output 1d1e2997-5238-4b86-873c-ad79660db43f
results:
 address: microbot.10.97.218.226.xip.io
status: completed
timing:
 completed: 2017-01-13 10:26:14 +0000 UTC
 enqueued: 2017-01-13 10:26:11 +0000 UTC
 started: 2017-01-13 10:26:12 +0000 UTC

Now actually using the Kubernetes tools, we can check the state of our new pods:

ubuntu@kubernetes:~$ kubectl.conjure-up-kubernetes-core-be8 get pods
NAME READY STATUS RESTARTS AGE
default-http-backend-w9nr3 1/1 Running 0 21m
microbot-1855935831-cn4bs 0/1 ContainerCreating 0 18s
microbot-1855935831-dh70k 0/1 ContainerCreating 0 18s
microbot-1855935831-fqwjp 0/1 ContainerCreating 0 18s
microbot-1855935831-ksmmp 0/1 ContainerCreating 0 18s
microbot-1855935831-mfvst 1/1 Running 0 18s
nginx-ingress-controller-bj5gh 1/1 Running 0 21m

After a little while, you’ll see everything’s running:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
default-http-backend-w9nr3       1/1     Running   0          23m
microbot-1855935831-cn4bs        1/1     Running   0          2m
microbot-1855935831-dh70k        1/1     Running   0          2m
microbot-1855935831-fqwjp        1/1     Running   0          2m
microbot-1855935831-ksmmp        1/1     Running   0          2m
microbot-1855935831-mfvst        1/1     Running   0          2m
nginx-ingress-controller-bj5gh   1/1     Running   0          23m

At which point, you can hit the service URL with:

ubuntu@kubernetes:~$ curl -s http://microbot.10.97.218.226.xip.io | grep hostname
 <p class="centered">Container hostname: microbot-1855935831-fqwjp</p>

Running this multiple times will show you different container hostnames as you get load balanced between one of those 5 new instances.
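
If you’d rather not re-run the command by hand, a small loop shows the load balancing directly (substitute the service address returned by your own action output):

# hit the service a few times and print which backend served each request
for i in 1 2 3 4 5; do
    curl -s http://microbot.10.97.218.226.xip.io | grep hostname
done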

Conclusion

Similar to OpenStack, conjure-up combined with LXD makes it very easy to deploy rather complex software in a self-contained way.

This isn’t the kind of setup you’d want to run in a production environment, but it’s great for developers, demos and whoever wants to try those technologies without investing into hardware.

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
facundo

Water Park


Between Christmas and New Year we took a few days of vacation with the family.

This time we went, for the first time, to a water park.

The truth is we had a great time. I was a little apprehensive about whether Malena would enjoy it (Felipe, being older, surely would). Both of them had a blast, and so did Moni and I.

Moni and Male enjoying themselves

The first day we arrived in the late afternoon and it was cloudy and cool, so there was nobody in the water park proper. We did not go in either; instead we went straight to the hot-spring pools, so we stayed nice and warm :)

Hot-spring pools

But what we enjoyed most was the water park itself, with all its different rides for throwing yourself into the water. At first Male stayed on the children's rides, but after the first day she also went down the big slide many times.

The children's rides

Felu and Male on the big slide

Felu went down almost everything (except the wildest one, which was nearly free fall), and he even rode the big slides a whole bunch of times, in a loop: slide down, climb up, slide down, climb up, slide down...

Felipe on the ride that spins you around

We also took the opportunity to explore and get to know Concepción del Uruguay. One afternoon relatives of Moni even came over from Concordia, and we went to the beaches of Banco Pelay, where we swam in the river and played in the sand until nightfall, then headed into town for some pizzas :)
http://www.turismoentrerios.com/cdeluruguay/pelay.htm

Moni with cousin Sandra and aunt Rosa

Having lunch with the family

The short getaway to the water park proved to be a great way to disconnect. We will definitely do it again.

Read more
deviceguy

Movin' on...

A year has gone by since I started work with Canonical. As it turns out, I must be on my way. Where to? Not really sure at this moment; there seem to be plenty of companies using Qt & QML these days. \0/


But saying that, I am open to suggestions. LinkedIn
 
Plenty of IoT and sensor-equipped devices around. Heck, even the Moto Z phone has some great uses for sensor gestures similar to what I wrote for QtSensors while I was at Nokia.

But there is a lack of companies that allow freelance or remote work. For the last few years I have worked remotely, doing work for Jolla and Canonical, both fantastic companies to work for, which really have it together for working remotely.

I am still surprised that only a handful of companies regularly allow remote work. I do not miss the stuffy offices with windows that never open, or the long daily commute, which sometimes meant riding a motorcycle through hail! (I do not suggest this for anyone.)

Of course, I am still the maintainer of QtSensors and QtSystemInfo for the Qt Project, and of the Sensor Framework for Mer, and I am always dreaming up new ways to use sensors. I am still keeping tabs on the QtNetwork bearer classes too.

Although I had to send back the Canonical devices, I still have Ubuntu on my Nexus 4. I still have my Jolla phones and tablet.

That said, I still have this blog here, and besides spending my time looking for a new programming gig, I am (always) preparing to release a new album at http://llornkcor.com, and am always willing to work with anyone needing music/audio/soundtrack work.

Read more
kevin gunn

1) Flash the latest ubuntu-core image for dragonboard and boot it (you’ll want a screen and keyboard at least)

You can find the image here http://releases.ubuntu.com/ubuntu-core/16/

Make sure you’re on the latest with the following:


ssh$ snap refresh core

2) Then install the mir-libs, mir-kiosk and ubuntu-app-platform snaps:

ssh$ snap install mir-libs --channel=edge
ssh$ snap install mir-kiosk --channel=edge
ssh$ snap install ubuntu-app-platform
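
A quick listing confirms that all three snaps are installed:

ssh$ snap list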

3) Use the snap built from this branch:

https://code.launchpad.net/~osomon/webbrowser-app/mirkiosk-snap

This particular build:

https://code.launchpad.net/~osomon/+snap/webbrowser-mirkiosk/+build/16501

seemed to work fine. Download it, copy it over and install:


ssh$ snap install webbrowser-app*.snap --devmode --dangerous

4) NOTE: because of a bug you have to do the following; hopefully the pull request will get merged soon and this step can be removed:

ssh$ snap disconnect webbrowser-app:mir
ssh$ snap disconnect webbrowser-app:platform
ssh$ snap connect webbrowser-app:mir mir-kiosk:mir
ssh$ snap connect webbrowser-app:platform ubuntu-app-platform:platform
ssh$ snap disable webbrowser-app
ssh$ snap enable webbrowser-app

5) Now launch and use the browser:


$ webbrowser-app

If the web browser crashes, just restart it with the same command. Also, when the browser launches you will see some console spew related to audio and Qt, which you may safely ignore.

Debugging: if you find things aren’t working as expected, as in you do not see the web browser, try rebooting first, which should auto-launch mir-kiosk; then repeat the connection process and launch the browser. If that still doesn’t work, inspect all the connections via ssh$ snap interfaces and make sure mir-kiosk:mir-libs, webbrowser-app:mir-kiosk, webbrowser-app:ubuntu-app-platform and webbrowser-app:mir-libs are all connected as expected. Feel free to ping me or others on freenode at #snappy or #ubuntu-unity or #ubuntu-mir.
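
For reference, you can narrow the interface listing down to the relevant connections with a plain grep over the normal output (just a convenience):

ssh$ snap interfaces | grep webbrowser-app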

Read more
Colin Ian King

The BPF Compiler Collection (BCC) is a toolkit for building kernel tracing tools that leverage the functionality provided by the Linux extended Berkeley Packet Filters (BPF).

BCC allows one to write BPF programs with front-ends in Python or Lua, with the kernel instrumentation written in C. The instrumentation code is built into sandboxed eBPF byte code and executed in the kernel.

The BCC github project README file provides an excellent overview and description of BCC and the various available BCC tools. Building BCC from scratch can be a bit time consuming; the good news, however, is that the BCC tools are now available as a snap, so BCC can be quickly and easily installed with:

 sudo snap install --devmode bcc  

There are currently over 50 BCC tools in the snap, so let's have a quick look at a few:

cachetop allows one to view the top page cache hit/miss statistics. To run this use:

 sudo bcc.cachetop  
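
cachetop also accepts an optional refresh interval in seconds; for example, to refresh the statistics every 5 seconds (a sketch, assuming the snap ships a recent enough cachetop):

 sudo bcc.cachetop 5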



The funccount tool allows one to count the number of times specific functions get called. For example, to see how many times per second kernel functions whose names start with "do_" get called, one can use:

 sudo bcc.funccount "do_*" -i 1  


To see how to use all the options in this tool, use the -h option:

 sudo bcc.funccount -h  

I've found the funccount tool to be especially useful for checking on kernel activity by watching hits on specific function names.
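
For instance, to count hits on kernel VFS functions once per second for ten seconds, something like the following should work (the -d duration option is an assumption about the funccount version in the snap; drop it if yours lacks it):

 sudo bcc.funccount "vfs_*" -i 1 -d 10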

The slabratetop tool is useful for watching the active kernel SLAB/SLUB memory allocation rates:

 sudo bcc.slabratetop  


If you want to see which process is opening specific files, one can snoop on open system calls using the opensnoop tool:

 sudo bcc.opensnoop -T
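
opensnoop can also be narrowed to a single process; assuming a process of interest with PID 1234 (a made-up PID for illustration):

 # -T prints timestamps, -p filters on the given PID
 sudo bcc.opensnoop -T -p 1234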


Hopefully this will give you a taste of the useful tools that are available in BCC (I have barely scratched the surface in this article).  I recommend installing the snap and giving it a try.

As it stands, BCC provides a useful mechanism to develop BPF tracing tools, and I look forward to regularly updating the BCC snap as more tools are added to BCC. Kudos to Brendan Gregg for BCC!

Read more