Canonical Voices

UbuntuTouch

We know that the Ubuntu platform offers a well-thought-out convergence design. Thanks to convergence, the very same app can run on devices with different screen sizes after a simple repackaging, without any code changes. Canonical's ultimate goal is for snap apps to run on all devices without any repackaging at all. Today, apps on Ubuntu phones are packaged as click packages; in a future Ubuntu SDK, snap support will eventually be added to the SDK itself. So how, today, can we package an app we have already developed as a snap and deploy it to our (16.04) desktop?

If you want to know how to install a snap app on the 16.04 desktop, please refer to the article "Installing snap apps on the Ubuntu 16.04 desktop".


1) Developing the phone app with the Ubuntu SDK


We can create the project we want with the Ubuntu SDK. How to create an Ubuntu phone app is beyond the scope of this tutorial; if you are interested in developing a phone app with the Ubuntu SDK, please refer to our article "Getting ready for Ubuntu phone development training". We will not repeat it here!

It is worth pointing out that in today's tutorial we will show how to package a qmake-based Ubuntu phone app as a snap app. We will start from a project I developed earlier. In a terminal, type the following command:

$ git clone https://github.com/liu-xiao-guo/rssreader_snap

The downloaded source tree looks like this:

liuxg@liuxg:~/snappy/desktop/rssreader$ tree -L 2
.
├── snapcraft.yaml
└── src
    ├── manifest.json.in
    ├── po
    ├── rssreader
    └── rssreader.pro

From the structure above we can see that the project under the src directory is a qmake project created by the Ubuntu SDK, with a .pro project file. On the phone it runs as shown below:

  

At the same time, in the project's root directory we find another file, snapcraft.yaml. This is the snap project file that lets us package our qmake project as a snap app.


2) Packaging our qmake project


In the previous section we mentioned the project's snapcraft.yaml file. Here it is:

snapcraft.yaml

name: rssreader-app
version: 1.0
summary: A snap app from Ubuntu phone app
description: This is an example showing how to convert an Ubuntu phone app to a desktop snap app
confinement: strict

apps:
  rssreader:
    command: desktop-launch $SNAP/lib/x86_64-linux-gnu/bin/rssreader
    plugs: [network,network-bind,network-manager,home,unity7,opengl]

parts:
  rssreader:
    source: src/
    plugin: qmake
    qt-version: qt5
    build-packages:
      - cmake
      - gettext
      - intltool
      - ubuntu-touch-sounds
      - suru-icon-theme
      - qml-module-qttest
      - qml-module-qtsysteminfo
      - qml-module-qt-labs-settings
      - qtdeclarative5-u1db1.0
      - qtdeclarative5-qtmultimedia-plugin
      - qtdeclarative5-qtpositioning-plugin
      - qtdeclarative5-ubuntu-content1
      - qt5-default
      - qtbase5-dev
      - qtdeclarative5-dev
      - qtdeclarative5-dev-tools
      - qtdeclarative5-folderlistmodel-plugin
      - qtdeclarative5-ubuntu-ui-toolkit-plugin
      - xvfb
    stage-packages:
      - ubuntu-sdk-libs
      - qtubuntu-desktop
      - qml-module-qtsysteminfo
      - ubuntu-defaults-zh-cn
    stage:
      - -usr/share/pkgconfig/xkeyboard-config.pc 
    snap:
      - -usr/share/doc
      - -usr/include
    after: [desktop/qt5]

At first glance this file looks different from anything we have seen before, and seemingly complex! For a detailed explanation of snapcraft.yaml, see the official document "snapcraft.yaml syntax".

Here is a brief explanation:
  • name: the name of the final package. In our case the resulting snap is rssreader-app_1.0_amd64.snap
  • version: the package's version. As the file name rssreader-app_1.0_amd64.snap shows, 1.0 is the version
  • summary: a string describing the package. By design it can be at most 79 characters long
  • description: a string describing the package; it can be longer than summary
  • confinement: the confinement type, strict or devmode. With devmode, installing with the --devmode option lets the app run without any security restrictions, just like ordinary development on an Ubuntu PC: we can freely access any directory we want, and so on
  • apps: defines our apps and the commands needed to run them. In our case we define rssreader as our app. To run an app we use <package name>.<app name>; here, rssreader-app.rssreader runs the app from the command line
    • command: the command line used to launch the app. In our case it is desktop-launch $SNAP/lib/x86_64-linux-gnu/bin/rssreader; desktop-launch comes from the prebuilt part desktop/qt5 listed below
    • plugs: the permissions our snap app is granted. With this setting we can access the system $HOME directory, use opengl, and so on. For more details, see Interfaces
  • parts: each part defines one piece of the software we need. Each part is like a mini project, and during the build each one can be found under the parts directory
    • rssreader: the name of the part we define; it can be any name you like
      • source: the part's source. It can be any software hosted online (bzr, git, tar)
      • plugin: the plugin used to build the part. Developers can also extend snapcraft with their own plugins; see the article "Write your own plugins"
      • qt-version: specific to the Qt plugin; defines the Qt version
      • build-packages: packages listed here are installed only to build the project; they will not appear in the final snap
      • stage-packages: these packages are shipped in the final snap and make up the files needed to run the app. Note in particular that we added the Chinese language package ubuntu-defaults-zh-cn so that our app can display Chinese; this also grows the package from roughly 100 MB to over 300 MB. The packages here correspond to the usual Ubuntu debian packages
      • snap: defines which files we do or do not want; entries prefixed with "-" are excluded from the package
      • after: indicates that this part must be built after desktop/qt5 has been fetched. For some projects we must first obtain one part and use it to build other parts; after expresses this ordering. Here we reuse the part desktop/qt5 developed by others. Already published parts can be found at https://wiki.ubuntu.com/snapcraft/parts, and their descriptions at https://wiki.ubuntu.com/Snappy/Parts; we can reuse them. From the command line, we can also look up existing parts with:
        • snapcraft update
        • snapcraft search
This is only a brief description. For more details on snapcraft.yaml, please refer to the document "snapcraft.yaml syntax".
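As a quick sanity check of the 79-character limit on summary mentioned above, a short shell sketch (the string is the summary from this project's snapcraft.yaml):

```shell
# snapcraft.yaml requires summary to be at most 79 characters.
summary="A snap app from Ubuntu phone app"
len=${#summary}
if [ "$len" -le 79 ]; then
  echo "summary ok ($len chars)"
else
  echo "summary too long ($len chars)" >&2
fi
```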

3) Building our snap app


Building our snap app is actually very simple. Go to the project's root directory and type the following command:

$ snapcraft

This builds the app and finally produces the .snap file we need:

312M 7月  13 12:25 rssreader-app_1.0_amd64.snap

As mentioned in the previous section, the app has become quite large because we added the Chinese fonts; without them the snap is about 141M.
If we want to clean up the intermediate files of the packaging process, we can type the following command:

$ snapcraft clean

It removes everything under the parts, stage and prime directories. For more about snapcraft, see my article "Installing snap apps on the Ubuntu 16.04 desktop". We can also get help as follows:

$ snapcraft --help


4) Installing and running our app


We can install our .snap file with the following command:

$ sudo snap install rssreader-app_1.0_amd64.snap --force-dangerous

We can run the app with the following command:

$ rssreader-app.rssreader

At runtime it looks like this:



As we can see above, without any modification our phone app also runs very well on the 16.04 desktop. The app also uses convergence, so it adapts automatically to different screen sizes. See my article "Using AdaptivePageLayout for convergence design and dynamic layouts".


5) Security debugging


We can install the following software on our desktop system:

$ snap install snappy-debug

If the installation prompts for other software to be installed, just follow the prompts.
Next, type the following command in a terminal:

$ snappy-debug.security scanlog

According to the Interfaces documentation, log-observe is not auto-connected; we need to establish the connection manually with the following command:

$ sudo snap connect snappy-debug:log-observe ubuntu-core:log-observe

The same approach applies to any other interface that must be connected manually. Then, in another terminal, type the following command:

$ rssreader-app.rssreader

We then see the following information in this window:



Clearly our app ran into some security problems at runtime, producing output like the above. Switching to the other window running "snappy-debug.security scanlog":



This window also shows some "DENIED" security errors. Where do they come from? Most likely our app is missing the corresponding plugs. Consulting the official interfaces documentation and considering our app, we can make a first guess that we need the network and network-bind plugs, because this is a network app that fetches data from the internet. In addition, main.cpp in our code contains the following:

QNetworkAccessManager *MyNetworkAccessManagerFactory::create(QObject *parent)
{
    QNetworkAccessManager *nam = new QNetworkAccessManager(parent);

    // Attach a disk cache so resources fetched over the network
    // (e.g. the feed's images) are cached and not downloaded again.
    QString path = getCachePath();
    QNetworkDiskCache *cache = new QNetworkDiskCache(parent);
    cache->setCacheDirectory(path);
    nam->setCache(cache);

    return nam;
}

This code sets up a cache so that downloaded pictures are cached and we do not waste network resources fetching the same picture repeatedly. Here are all the plugs we currently have:

liuxg@liuxg:~$ snap interfaces
Slot                 Plug
:camera              -
:cups-control        -
:firewall-control    -
:gsettings           -
:home                rssreader-app
:locale-control      -
:log-observe         snappy-debug
:modem-manager       -
:mount-observe       -
:network             -
:network-bind        -
:network-control     -
:network-manager     -
:network-observe     -
:opengl              rssreader-app
:optical-drive       -
:ppp                 -
:pulseaudio          -
:snapd-control       -
:system-observe      -
:timeserver-control  -
:timezone-control    -
:unity7              rssreader-app
:x11                 -

Clearly, network-manager is a plug we need. We can add the plugs described above to our snapcraft.yaml. The modified snapcraft.yaml is as follows:

snapcraft.yaml

name: rssreader-app
version: 1.0
summary: A snap app from Ubuntu phone app
description: This is an example showing how to convert an Ubuntu phone app to a desktop snap app
confinement: strict

apps:
  rssreader:
    command: desktop-launch $SNAP/lib/x86_64-linux-gnu/bin/rssreader
    plugs: [network,network-bind,network-manager,home,unity7,opengl]

parts:
  rssreader:
    source: src/
    plugin: qmake
    qt-version: qt5
    build-packages:
      - cmake
      - gettext
      - intltool
      - ubuntu-touch-sounds
      - suru-icon-theme
      - qml-module-qttest
      - qml-module-qtsysteminfo
      - qml-module-qt-labs-settings
      - qtdeclarative5-u1db1.0
      - qtdeclarative5-qtmultimedia-plugin
      - qtdeclarative5-qtpositioning-plugin
      - qtdeclarative5-ubuntu-content1
      - qt5-default
      - qtbase5-dev
      - qtdeclarative5-dev
      - qtdeclarative5-dev-tools
      - qtdeclarative5-folderlistmodel-plugin
      - qtdeclarative5-ubuntu-ui-toolkit-plugin
      - xvfb
    stage-packages:
      - ubuntu-sdk-libs
      - qtubuntu-desktop
      - qml-module-qtsysteminfo
      - ubuntu-defaults-zh-cn
    snap:
      - -usr/share/doc
      - -usr/include
    after: [desktop/qt5]
 

Repackage the app and install it. We can then use:
$ snap interfaces
to show all of the app's plugs:

liuxg@liuxg:~/snappy/desktop/rssreader$ snap interfaces
Slot                 Plug
:camera              -
:cups-control        -
:firewall-control    -
:gsettings           -
:home                rssreader-app
:locale-control      -
:log-observe         snappy-debug
:modem-manager       -
:mount-observe       -
:network             rssreader-app
:network-bind        rssreader-app
:network-control     -
:network-manager     -
:network-observe     -
:opengl              rssreader-app
:optical-drive       -
:ppp                 -
:pulseaudio          -
:snapd-control       -
:system-observe      -
:timeserver-control  -
:timezone-control    -
:unity7              rssreader-app
:x11                 -
-                    rssreader-app:network-manager

In the last line above we can see:
-                    rssreader-app:network-manager
What does this mean? Looking at the interfaces page again, there is no description of network-manager there yet, but we can see entries such as:

network-control

Can configure networking. This is restricted because it gives wide, privileged access to networking and should only be used with trusted apps.

Usage: reserved
Auto-Connect: no

Note the last line: Auto-Connect: no. That is, there is no automatic connection. The same holds for our network-manager case: we must connect manually. So how do we connect manually?

liuxg@liuxg:~$ snap --help
Usage:
  snap [OPTIONS] <command>

The snap tool interacts with the snapd daemon to control the snappy software platform.


Application Options:
      --version  print the version and exit

Help Options:
  -h, --help     Show this help message

Available commands:
  abort        Abort a pending change
  ack          Adds an assertion to the system
  change       List a change's tasks
  changes      List system changes
  connect      Connects a plug to a slot
  create-user  Creates a local system user
  disconnect   Disconnects a plug from a slot
  find         Finds packages to install
  help         Help
  install      Install a snap to the system
  interfaces   Lists interfaces in the system
  known        Shows known assertions of the provided type
  list         List installed snaps
  login        Authenticates on snapd and the store
  logout       Log out of the store
  refresh      Refresh a snap in the system
  remove       Remove a snap from the system
  run          Run the given snap command
  try          Try an unpacked snap in the system

From the output above we learn that snap has a command called connect. Going further:

liuxg@liuxg:~$ snap connect -h
Usage:
  snap [OPTIONS] connect <snap>:<plug> <snap>:<slot>

The connect command connects a plug to a slot.
It may be called in the following ways:

$ snap connect <snap>:<plug> <snap>:<slot>

Connects the specific plug to the specific slot.

$ snap connect <snap>:<plug> <snap>

Connects the specific plug to the only slot in the provided snap that matches
the connected interface. If more than one potential slot exists, the command
fails.

$ snap connect <plug> <snap>[:<slot>]

Without a name for the snap offering the plug, the plug name is looked at in
the gadget snap, the kernel snap, and then the os snap, in that order. The
first of these snaps that has a matching plug name is used and the command
proceeds as above.

Application Options:
      --version            print the version and exit

Help Options:
  -h, --help               Show this help message

Clearly we need the following form of the command to make our connection:

$ snap connect <plug> <snap>[:<slot>]

In our case, we use the following command:

$ sudo snap connect rssreader-app:network-manager ubuntu-core:network-manager

After typing the command above, let's look at our snap interfaces again:
liuxg@liuxg:~$ snap interfaces
Slot                 Plug
:camera              -
:cups-control        -
:firewall-control    -
:gsettings           -
:home                rssreader-app
:locale-control      -
:log-observe         snappy-debug
:modem-manager       -
:mount-observe       -
:network             rssreader-app
:network-bind        rssreader-app
:network-control     -
:network-manager     rssreader-app
:network-observe     -
:opengl              rssreader-app
:optical-drive       -
:ppp                 -
:pulseaudio          -
:snapd-control       -
:system-observe      -
:timeserver-control  -
:timezone-control    -
:unity7              rssreader-app
:x11                 -

This time we have successfully connected network-manager to our app. We can run the app again:

liuxg@liuxg:~/snappy/desktop/rssreader$ rssreader-app.rssreader 

(process:9770): Gtk-WARNING **: Locale not supported by C library.
	Using the fallback 'C' locale.
Gtk-Message: Failed to load module "overlay-scrollbar"
Gtk-Message: Failed to load module "gail"
Gtk-Message: Failed to load module "atk-bridge"
Gtk-Message: Failed to load module "unity-gtk-module"
Gtk-Message: Failed to load module "canberra-gtk-module"
qml: columns: 1
qml: columns: 2
qml: currentIndex: 0
qml: index: 0
qml: adding page...
qml: sourcePage must be added to the view to add new page.
qml: going to add the page
qml: it is added: 958
XmbTextListToTextProperty result code -2
XmbTextListToTextProperty result code -2

Our app no longer produces the corresponding error messages.



6) Adding an icon for our app



So far our app is close to perfect, but we still cannot launch it from the Dash. To fix this, we create a setup/gui directory in the project root, containing the following two files:

liuxg@liuxg:~/snappy/desktop/rssreader/setup/gui$ ls -l
total 100
-rw-rw-r-- 1 liuxg liuxg   325 7月  14 13:18 rssreader.desktop
-rw-rw-r-- 1 liuxg liuxg 91353 7月  14 11:50 rssreader.png
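The contents of rssreader.desktop are not shown in the post; a minimal sketch of such a desktop entry might look like the following (the Name, Exec and Icon values are assumptions based on this project, not taken from the original file):

```ini
[Desktop Entry]
Name=RSS Reader
Comment=An RSS reader packaged as a snap
Type=Application
Exec=rssreader-app.rssreader
Icon=${SNAP}/meta/gui/rssreader.png
Terminal=false
```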

After adding these two files, the project structure is:

liuxg@liuxg:~/snappy/desktop/rssreader$ tree -L 3
.
├── setup
│   └── gui
│       ├── rssreader.desktop
│       └── rssreader.png
├── snapcraft.yaml
└── src
    ├── manifest.json.in
    ├── po
    │   └── rssreader.liu-xiao-guo.pot
    ├── rssreader
    │   ├── components
    │   ├── main.cpp
    │   ├── Main.qml
    │   ├── rssreader.apparmor
    │   ├── rssreader.desktop
    │   ├── rssreader.png
    │   ├── rssreader.pro
    │   ├── rssreader.qrc
    │   └── tests
    └── rssreader.pro

With these two files in place, we repackage and reinstall our snap app. After installation, in our Dash:


we can now find our RSS Reader app.


7) Running our app without installing it



During development, every install creates a new revision. Each install not only takes a lot of time but also uses disk space (each revision is fairly large). Can we skip installation during development and directly use the files already produced by the packaging process? The answer is yes. We can check with:

liuxg@liuxg:~$ snap --help
...
  try          Try an unpacked snap in the system
...

In the output above we find a command called try. After we run snapcraft, it produces the .snap file for us, and also the following directories:

liuxg@liuxg:~/snappy/desktop/rssreader$ ls -d */
parts/  prime/  setup/  src/  stage/

Among them is a directory called prime, which contains exactly the files that would be installed from the snap. We can do the following:

$ sudo snap try prime/

With the command above, our snap is installed into the system. Showing the snap installation directory again:

liuxg@liuxg:/var/lib/snapd/snaps$ ls -al
total 287200
drwxr-xr-x 2 root root      4096 7月  15 11:13 .
drwxr-xr-x 7 root root      4096 7月  15 11:13 ..
-rw------- 1 root root  98439168 7月  14 13:10 mpv_x1.snap
-rw------- 1 root root 127705088 7月  14 17:30 photos-app_x1.snap
lrwxrwxrwx 1 root root        42 7月  15 11:13 rssreader-app_x1.snap -> /home/liuxg/snappy/desktop/rssreader/prime
-rw------- 1 root root     16384 7月  13 15:53 snappy-debug_22.snap
-rw------- 1 root root  67899392 7月  13 12:26 ubuntu-core_122.snap

As we can see, it is really just a symbolic link. This way we deploy our snap very quickly and avoid the separate installs of different revisions each time. A snap package is actually built with squashfs; with snap try we do not need to create a squashfs file at all. Another benefit is that we can modify the content of this snap at any time, because it is read-write. We can even add the --devmode option to lift the security confinement.
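The mechanism can be sketched with plain shell (the paths here are illustrative; the real link under /var/lib/snapd/snaps is created by snapd):

```shell
# "snap try" installs the snap as a symlink to the unpacked prime/
# directory rather than as a squashfs file, so the content stays
# read-write and edits take effect without reinstalling.
workdir=$(mktemp -d)
mkdir -p "$workdir/prime" "$workdir/snaps"
ln -s "$workdir/prime" "$workdir/snaps/rssreader-app_x1.snap"
readlink "$workdir/snaps/rssreader-app_x1.snap"
```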


8) Using devmode to sidestep security concerns during development



During development we sometimes do not know which plugs are needed for our software to run properly. In that case we can change the confinement in our snapcraft.yaml to devmode:

snapcraft.yaml


name: rssreader-app
version: 1.0
summary: A snap app from Ubuntu phone app
description: This is an example showing how to convert an Ubuntu phone app to a desktop snap app
confinement: devmode

apps:
  rssreader:
    command: desktop-launch $SNAP/lib/x86_64-linux-gnu/bin/rssreader
    plugs: [network,network-bind,network-manager,home,unity7,opengl]

parts:
  rssreader:
    source: src/
    plugin: qmake
    qt-version: qt5
    build-packages:
      - cmake
      - gettext
      - intltool
      - ubuntu-touch-sounds
      - suru-icon-theme
      - qml-module-qttest
      - qml-module-qtsysteminfo
      - qml-module-qt-labs-settings
      - qtdeclarative5-u1db1.0
      - qtdeclarative5-qtmultimedia-plugin
      - qtdeclarative5-qtpositioning-plugin
      - qtdeclarative5-ubuntu-content1
      - qt5-default
      - qtbase5-dev
      - qtdeclarative5-dev
      - qtdeclarative5-dev-tools
      - qtdeclarative5-folderlistmodel-plugin
      - qtdeclarative5-ubuntu-ui-toolkit-plugin
      - xvfb
    stage-packages:
      - ubuntu-sdk-libs
      - qtubuntu-desktop
      - qml-module-qtsysteminfo
      - ubuntu-defaults-zh-cn
    snap:
      - -usr/share/doc
      - -usr/include
    after: [desktop/qt5]
 

Please note this line in the file above:

confinement: devmode

With this setting, none of the plugs defined under plugs have any effect (we could even delete them all). When installing the snap, we use the following command:

$ sudo snap install rssreader-app_1.0_amd64.snap --devmode

Note the --devmode option above. It means our app is not constrained by the security sandbox at all: it can do whatever it likes, and no security errors will be produced at runtime. With this approach we developers can quickly build the app we need without having to think about security first.












Author: UbuntuTouch, posted 2016/7/13 14:37:48

UbuntuTouch

Today I uploaded a video by one of our global colleagues to youku. Please watch how to package a snap app from scratch. If you have any questions, please comment below the article; I will do my best to answer them all.


Ubuntu Snapcraft demo

https://www.youtube.com/watch?time_continue=1&v=K0IzxsIFjJY


Ubuntu Snappy and Snap Packages | Linux Explained:

https://www.youtube.com/watch?v=0ApRUndiXKU

Author: UbuntuTouch, posted 2016/7/20 17:09:04

UbuntuTouch

In the earlier article "How to package a qmake Ubuntu phone app as a snap app", we showed how to package a qmake-based Ubuntu phone app as a desktop snap. In today's tutorial, we will show how to convert a cmake-based Ubuntu phone project into a desktop snap app.



1) Developing the phone app with the Ubuntu SDK



We can create the project we want with the Ubuntu SDK. How to create an Ubuntu phone app is beyond the scope of this tutorial; if you are interested in developing a phone app with the Ubuntu SDK, please refer to our article "Getting ready for Ubuntu phone development training". We will not repeat it here!


It is worth pointing out that in today's tutorial we will show how to package a cmake-based Ubuntu phone app as a snap app. We will start from a project I developed earlier. In a terminal, type the following command:

$ git clone https://github.com/liu-xiao-guo/photos

The downloaded source tree looks like this:

liuxg@liuxg:~/snappy/desktop/photos$ tree -L 2
.
├── photos.wrapper
├── setup
│   └── gui
├── snapcraft.yaml
├── snappy-qt5.conf
└── src
    ├── app
    ├── CMakeLists.txt
    ├── manifest.json.in
    ├── photos.apparmor
    └── po

As shown above, the src directory holds a complete, runnable cmake phone app, whose project file is CMakeLists.txt. In the root directory there is a file called snapcraft.yaml, used to package our CMake phone app as a snap app that can run on our 16.04 desktop.



2) Packaging our CMake project


In the previous section we mentioned the project's snapcraft.yaml file. Here it is:


snapcraft.yaml


name: photos-app
version: 1.0
summary: Ubuntu photos app
description: |
   This is a demo app showing how to convert a cmake ubuntu phone app to a snap app

apps:
  photos:
    command: photos
    plugs: [network,home,unity7,opengl]

parts:
  photos:
    plugin: cmake
    configflags: [-DCMAKE_INSTALL_PREFIX=/usr, -DCLICK_MODE=off]
    source: src/
    build-packages:
      - cmake
      - gettext
      - intltool
      - ubuntu-touch-sounds
      - suru-icon-theme
      - qml-module-qttest
      - qml-module-qtsysteminfo
      - qml-module-qt-labs-settings
      - qtdeclarative5-u1db1.0
      - qtdeclarative5-qtmultimedia-plugin
      - qtdeclarative5-qtpositioning-plugin
      - qtdeclarative5-ubuntu-content1
      - qt5-default
      - qtbase5-dev
      - qtdeclarative5-dev
      - qtdeclarative5-dev-tools
      - qtdeclarative5-folderlistmodel-plugin
      - qtdeclarative5-ubuntu-ui-toolkit-plugin
      - xvfb
    stage-packages:
      - ubuntu-sdk-libs
      - qtubuntu-desktop
      - qml-module-qtsysteminfo
      - ubuntu-defaults-zh-cn
    snap:
      - -usr/share/doc
      - -usr/include
  environment:
    plugin: copy
    files:
      photos.wrapper: bin/photos
      snappy-qt5.conf: etc/xdg/qtchooser/snappy-qt5.conf

I will not repeat the meaning of every field here; see my article "How to package a qmake Ubuntu phone app as a snap app". Of particular note, in this snapcraft.yaml we define a new part called environment (it could be any name we like). It uses the copy plugin to copy photos.wrapper from the current directory to bin/photos; the executable named in our command is photos. Before packaging, we must remember to make our script executable:

$ chmod a+x photos.wrapper

In this project, because the CMake project does not contain any C++ code, we must launch the app with qmlscene. This can be seen in our photos.wrapper:

photos.wrapper

#!/bin/sh

ARCH='x86_64-linux-gnu'

export LD_LIBRARY_PATH=$SNAP/usr/lib/$ARCH:$LD_LIBRARY_PATH

# XKB config
export XKB_CONFIG_ROOT=$SNAP/usr/share/X11/xkb

# Qt Platform to Mir
#export QT_QPA_PLATFORM=ubuntumirclient
export QTCHOOSER_NO_GLOBAL_DIR=1
export QT_SELECT=snappy-qt5

# Qt Libs
export LD_LIBRARY_PATH=$SNAP/usr/lib/$ARCH/qt5/libs:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$SNAP/usr/lib/$ARCH/pulseaudio:$LD_LIBRARY_PATH

# Qt Modules
export QT_PLUGIN_PATH=$SNAP/usr/lib/$ARCH/qt5/plugins
export QML2_IMPORT_PATH=$SNAP/usr/lib/$ARCH/qt5/qml/photos
export QML2_IMPORT_PATH=$QML2_IMPORT_PATH:$SNAP/usr/lib/$ARCH/qt5/qml
export QML2_IMPORT_PATH=$QML2_IMPORT_PATH:$SNAP/lib/$ARCH

# Mesa Libs
export LD_LIBRARY_PATH=$SNAP/usr/lib/$ARCH/mesa:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$SNAP/usr/lib/$ARCH/mesa-egl:$LD_LIBRARY_PATH

# XDG Config
export XDG_CONFIG_DIRS=$SNAP/etc/xdg:$XDG_CONFIG_DIRS
export XDG_CONFIG_DIRS=$SNAP/usr/xdg:$XDG_CONFIG_DIRS
# Note: this doesn't seem to work, QML's LocalStorage either ignores
# or fails to use $SNAP_USER_DATA if defined here
export XDG_DATA_DIRS=$SNAP_USER_DATA:$XDG_DATA_DIRS
export XDG_DATA_DIRS=$SNAP/usr/share:$XDG_DATA_DIRS

# Not good, needed for fontconfig
export XDG_DATA_HOME=$SNAP/usr/share

# Font Config
export FONTCONFIG_PATH=$SNAP/etc/fonts/config.d
export FONTCONFIG_FILE=$SNAP/etc/fonts/fonts.conf

# Tell libGL where to find the drivers
export LIBGL_DRIVERS_PATH=$SNAP/usr/lib/$ARCH/dri

# Necessary for the SDK to find the translations directory
export APP_DIR=$SNAP

# ensure the snappy gl libs win
export LD_LIBRARY_PATH="$SNAP_LIBRARY_PATH:$LD_LIBRARY_PATH"

cd $SNAP
exec $SNAP/usr/bin/qmlscene $SNAP/share/qml/photos/Main.qml

The last line shows how our app is launched:

exec $SNAP/usr/bin/qmlscene $SNAP/share/qml/photos/Main.qml

We can type the following command directly in the project root:

$ snapcraft

This packages our app into the snap package we want.

As for installing and running the app, I will not repeat it here; please see my article "How to package a qmake Ubuntu phone app as a snap app".

A screenshot of the running result is shown below:

 


Author: UbuntuTouch, posted 2016/7/15 14:20:31

UbuntuTouch

[Original] A helloworld snap example

In many programming languages, "Hello, the world!" is the classic way to show that a development environment works, and today's tutorial is no exception: we use this example to show how a snap app is constructed. Although the example is simple, as we work through it we will gradually discover some features of the snap system that help us understand it better. If you want to learn about snap support on the 16.04 desktop, see the article "Installing snap apps on the Ubuntu 16.04 desktop".


1) Environment setup


We install the environment following the "Installation" section of the article "Installing snap apps on the Ubuntu 16.04 desktop". Remember that "universe" must be enabled for the installation to succeed.



2) Installing the hello-world app from the snap store


We can install this app from the snap store with the following command:

$ sudo snap install hello-world --force-dangerous

After installing the app, we can find its snap.yaml file in the following directory. This file is quite similar to a project's snapcraft.yaml file, except that it has no parts section.

liuxg@liuxg:/snap/hello-world/current/meta$ ls 
gui  snap.yaml

The content of snap.yaml is as follows:

snap.yaml

name: hello-world
version: 6.3
architectures: [ all ]
summary: The 'hello-world' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox
    * hello-world.evil - show how snappy sandboxes binaries
    * hello-world.sh   - enter interactive shell that runs in app sandbox
    * hello-world      - simply output text
apps:
 env:
   command: bin/env
 evil:
   command: bin/evil
 sh:
   command: bin/sh
 hello-world:
   command: bin/echo

As we can see, its format is very close to our snapcraft.yaml project file. The env, evil, sh and echo commands referenced in the file can be found in the following directory:

liuxg@liuxg:/snap/hello-world/current/bin$ ls
echo  env  evil  sh

Of course we can inspect their contents with the vi editor. The whole file tree is:

liuxg@liuxg:/snap/hello-world/current$ tree -L 3
.
├── bin
│   ├── echo
│   ├── env
│   ├── evil
│   └── sh
└── meta
    ├── gui
    │   └── icon.png
    └── snap.yaml


3) Creating our own hello-world project


For various reasons, the source of this project can no longer be found online. Since the project is very simple, we can rebuild it ourselves: we adapt the snap.yaml above into the snapcraft.yaml we need and copy the required files into the right directories. This makes the project easy to reconstruct. The point of doing so is that we can then modify the project to demonstrate the features we want to see.

For developers, if you do not have an existing snapcraft.yaml to start from, you can generate a template to edit with the following command:

$ snapcraft init

This creates a snapcraft.yaml template in the project root, which we can use as a starting point:

snapcraft.yaml

name: my-snap  # the name of the snap
version: 0  # the version of the snap
summary: This is my-snap's summary  # 79 char long summary
description: This is my-snap's description  # a longer description for the snap
confinement: devmode  # use "strict" to enforce system access only via declared interfaces

parts:
    my-part:  # Replace with a part name of your liking
        # Get more information about plugins by running
        # snapcraft help plugins
        # and more information about the available plugins
        # by running
        # snapcraft list-plugins
        plugin: nil

After our reconstruction, we finally have our first snap app. The directory tree is:

liuxg@liuxg:~/snappy/desktop/helloworld$ tree -L 3
.
├── bin
│   ├── echo
│   ├── env
│   ├── evil
│   └── sh
├── setup
│   └── gui
│       └── icon.png
└── snapcraft.yaml

Here snapcraft.yaml is our project file, used to package the app. Its content is:

snapcraft.yaml


name: hello-xiaoguo
version: 1.0
architectures: [ all ]
summary: The 'hello-world' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox
    * hello-world.evil - show how snappy sandboxes binaries
    * hello-world.sh   - enter interactive shell that runs in app sandbox
    * hello-world      - simply output text
confinement: strict

apps:
 env:
   command: bin/env
 evil:
   command: bin/evil
 sh:
   command: bin/sh
 hello-world:
   command: bin/echo

parts:
 hello:
  plugin: copy
  files:
    ./bin: bin

Content-wise it is almost identical to the snap.yaml we saw earlier. Here we use the copy plugin. If you want to understand every entry in snapcraft.yaml, I recommend reading the article "Metadata YAML file", which also describes which entries are mandatory.

We can list all currently supported plugins with the following command:

liuxg@liuxg:~/snappy/desktop/helloworld$ snapcraft list-plugins
ant        catkin  copy  gulp  kbuild  make   nil     python2  qmake  tar-content
autotools  cmake   go    jdk   kernel  maven  nodejs  python3  scons

In fact, the plugin architecture of snapcraft is open: developers can write the plugins they need.
Before packaging our app, we must make sure all files under the bin directory are executable:

liuxg@liuxg:~/snappy/desktop/helloworld/bin$ ls -al
total 24
drwxrwxr-x 2 liuxg liuxg 4096 7月  13 00:31 .
drwxrwxr-x 4 liuxg liuxg 4096 7月  18 10:31 ..
-rwxrwxr-x 1 liuxg liuxg   31 7月  12 05:20 echo
-rwxrwxr-x 1 liuxg liuxg   27 7月  12 05:20 env
-rwxrwxr-x 1 liuxg liuxg  274 7月  12 05:20 evil
-rwxrwxr-x 1 liuxg liuxg  209 7月  12 05:20 sh

If they are not, we can fix that with:

$ chmod a+x echo

At this point we have completed a most basic hello-world project.


4) Building and running our app


In the project's root directory, type the following command:

$ snapcraft

If all goes well, we can see the following files and directories in the project root:

liuxg@liuxg:~/snappy/desktop/helloworld$ tree -L 2
.
├── bin
│   ├── echo
│   ├── env
│   ├── evil
│   └── sh
├── hello-xiaoguo_1.0_all.snap
├── parts
│   └── hello
├── prime
│   ├── bin
│   ├── command-env.wrapper
│   ├── command-evil.wrapper
│   ├── command-hello-world.wrapper
│   ├── command-sh.wrapper
│   └── meta
├── setup
│   └── gui
├── snapcraft.yaml
└── stage
    └── bin

We can see the generated snap file. A snap file name follows this rule:
<name>_<version>_<arch>.snap
In this example, we specified the architecture as all.
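The rule can be sketched in shell (the values mirror this example's snapcraft.yaml):

```shell
# A snap file name is composed as <name>_<version>_<arch>.snap,
# from the fields declared in snapcraft.yaml.
name="hello-xiaoguo"
version="1.0"
arch="all"   # from "architectures: [ all ]"
snap_file="${name}_${version}_${arch}.snap"
echo "$snap_file"
```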

We also see some other directories: parts, stage and prime. They are generated automatically during packaging. More help on snapcraft is available with the following command:

liuxg@liuxg:~/snappy/desktop/helloworld$ snapcraft --help
...

The available lifecycle commands are:
  clean        Remove content - cleans downloads, builds or install artifacts.
  cleanbuild   Create a snap using a clean environment managed by lxd.
  pull         Download or retrieve artifacts defined for a part.
  build        Build artifacts defined for a part. Build systems capable of
               running parallel build jobs will do so unless
               "--no-parallel-build" is specified.
  stage        Stage the part's built artifacts into the common staging area.
  prime        Final copy and preparation for the snap.
  snap         Create a snap.

Parts ecosystem commands
  update       Updates the parts listing from the cloud.
  define       Shows the definition for the cloud part.
  search       Searches the remotes part cache for matching parts.

Calling snapcraft without a COMMAND will default to 'snap'

...

From the above we can see the lifecycle needed to package a snap app: pull, build, stage, prime and snap. For a snap project, each part's files may live online, e.g. in git, bzr or a tar archive; during packaging, snapcraft downloads them automatically and stores them under the parts directory. The build step places each part's installed files into an install subdirectory under that part. In the stage step, the files of every part (parts/<part_name>/install) are gathered into one unified file tree. prime then collects from stage the files that will actually be shipped, so that the final snap step can pack them into the snap. The resulting .snap file is simply the prime directory processed with squashfs. For more on these lifecycle steps, see the article "Snapcraft Overview" or the video "Snapcraft操作演示--教你如何snap一个应用".

If we do not want to leave any installed packages behind after the build, we can use cleanbuild. It creates an lxd container and destroys it automatically once the build is done:

$ snapcraft cleanbuild

After the build we can find the generated .snap file, but not the intermediate directories parts, stage and prime.

To clean up the intermediate files of the packaging process, type the following command:

$ snapcraft clean

We can install our app with the following command:

$ sudo snap install hello-xiaoguo_1.0_all.snap --force-dangerous

When building a snap app, we can also run each of the steps above individually. After each step, we will notice the directory structure changing:
$ snapcraft clean
$ snapcraft pull
$ snapcraft build
$ snapcraft stage
$ snapcraft prime
$ snapcraft snap prime/
If we install using the method above, every install creates a new revision and uses disk space. We can also install as follows:

$ sudo snap try prime/

The advantage of this method is that it does not create a new revision each time. Instead, it mounts our built prime directory directly via a symbolic link.

Our app is now installed on the 16.04 desktop system. We can check with the following command:

liuxg@liuxg:~/snappy/desktop/helloworld$ snap list
Name           Version               Rev  Developer  Notes
hello-world    6.3                   27   canonical  -
hello-xiaoguo  1.0                   x2              -
ubuntu-core    16.04+20160531.11-56  122  canonical  -

If you can see the hello-xiaoguo app in the list above, congratulations: you have created your first snap. From the list we can see it has both a Version and a Rev. Our hello-xiaoguo app has clearly been installed into the system. So how do we run it?

From our snapcraft.yaml we can see that the package is called hello-xiaoguo and defines four apps: env, evil, sh and hello-world. To run them, we use the following commands respectively:

$ hello-xiaoguo.env
$ hello-xiaoguo.evil
$ hello-xiaoguo.sh
$ hello-xiaoguo.hello-world

That is, we run our apps in the <package name>.<app name> format. For example:

liuxg@liuxg:~/snappy/desktop/helloworld$ hello-xiaoguo.hello-world 
Hello World!

The full project source is at: https://github.com/liu-xiao-guo/helloworld-snap. Note in particular that if your package name and app name are the same, you only need to run the package name. For example, if your package is called hello and your app is called hello, then typing hello on the command line runs your app.
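The naming convention can be sketched with a small helper function (hypothetical, for illustration only):

```shell
# Compute the command used to launch an app inside a snap package:
# <package>.<app>, collapsing to just <package> when the names match.
snap_cmd() {
  pkg="$1"
  app="$2"
  if [ "$pkg" = "$app" ]; then
    echo "$pkg"
  else
    echo "$pkg.$app"
  fi
}

snap_cmd hello-xiaoguo hello-world
snap_cmd hello hello
```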

We can inspect the contents of our snap package with the following command:

$ unsquashfs -l hello-xiaoguo_1.0_all.snap

A snap package is really just a compressed squashfs file. We can also extract all of its contents as follows:
$ unsquashfs hello-xiaoguo_1.0_all.snap
$ cd squashfs-root
# Hack hack hack
$ snapcraft snap
We leave this as an exercise for the reader.



5) The snap runtime environment


When a package is installed into the system, the system generates a launch script for every app inside it. These scripts can be found at the following path:

liuxg@liuxg:/snap/bin$ ls -l
total 52
-rwxr-xr-x 1 root root 708 7月  20 15:37 hello-world
-rwxr-xr-x 1 root root 783 7月  21 15:13 hello-world-cli
-rwxr-xr-x 1 root root 683 7月  20 15:37 hello-world.env
-rwxr-xr-x 1 root root 687 7月  20 15:37 hello-world.evil
-rwxr-xr-x 1 root root 679 7月  20 15:37 hello-world.sh
-rwxr-xr-x 1 root root 743 7月  22 15:30 hello-xiaoguo.createfile
-rwxr-xr-x 1 root root 767 7月  22 15:30 hello-xiaoguo.createfiletohome
-rwxr-xr-x 1 root root 715 7月  22 15:30 hello-xiaoguo.env
-rwxr-xr-x 1 root root 719 7月  22 15:30 hello-xiaoguo.evil
-rwxr-xr-x 1 root root 747 7月  22 15:30 hello-xiaoguo.hello-world
-rwxr-xr-x 1 root root 711 7月  22 15:30 hello-xiaoguo.sh
-rwxr-xr-x 1 root root 726 7月  22 11:32 snappy-debug.security
-rwxr-xr-x 1 root root 798 7月  20 10:44 telegram-sergiusens.telegram

We can inspect the contents of these scripts with the cat command. Let's now run the following:

$ hello-xiaoguo.env | grep SNAP

This command gives us all the environment variables of a snap at run time:

liuxg@liuxg:~$ hello-xiaoguo.env | grep SNAP
SNAP_USER_COMMON=/home/liuxg/snap/hello-xiaoguo/common
SNAP_LIBRARY_PATH=/var/lib/snapd/lib/gl:
SNAP_COMMON=/var/snap/hello-xiaoguo/common
SNAP_USER_DATA=/home/liuxg/snap/hello-xiaoguo/x2
SNAP_DATA=/var/snap/hello-xiaoguo/x2
SNAP_REVISION=x2
SNAP_NAME=hello-xiaoguo
SNAP_ARCH=amd64
SNAP_VERSION=1.0
SNAP=/snap/hello-xiaoguo/x2

These snap-related environment variables can be referenced from within our application.



When writing code, we should not hard-code these paths. For example, we can reference $SNAP in our snapcraft files; it points to the directory where the current snap is installed. Note one particularly important directory:

SNAP_DATA=/var/snap/hello-xiaoguo/x2

This directory is private to our snap. A snap may only write to this directory or to SNAP_USER_DATA; writing anywhere else triggers a security error. Take our evil script, for example:

evil

#!/bin/sh

set -e
echo "Hello Evil World!"

echo "This example demonstrates the app confinement"
echo "You should see a permission denied error next"

echo "Haha" > /var/tmp/myevil.txt

echo "If you see this line the confinement is not working correctly, please file a bug"

This script writes "Haha" to myevil.txt under /var/tmp/, which triggers a security DENIED error:

liuxg@liuxg:~$ hello-xiaoguo.evil
Hello Evil World!
This example demonstrates the app confinement
You should see a permission denied error next
/snap/hello-xiaoguo/x2/bin/evil: 9: /snap/hello-xiaoguo/x2/bin/evil: cannot create /var/tmp/myevil.txt: Permission denied

Another way to track down such errors is to look at the system log (/var/log/syslog):
Jul 19 13:27:18 liuxg kernel: [19665.330053] audit: type=1400 audit(1468906038.378:4309): apparmor="DENIED" operation="open" profile="snap.webcam-webui.webcam-webui" name="/etc/fonts/fonts.conf" pid=18307 comm="fswebcam" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Jul 19 13:27:25 liuxg gnome-session[2151]: (nm-applet:2584): nm-applet-WARNING **: ModemManager is not available for modem at /hfp/org/bluez/hci0/dev_F4_B7_E2_CC_F0_56
Jul 19 13:27:26 liuxg kernel: [19673.182647] audit: type=1400 audit(1468906046.230:4310): apparmor="DENIED" operation="mknod" profile="snap.hello-xiaoguo.evil" name="/var/tmp/myevil.txt" pid=18314 comm="evil" requested_mask="c" denied_mask="c" fsuid=1000 ouid=1000

Or use the following command directly:

liuxg@liuxg:~/snappy/desktop/helloworld$ cat /var/log/syslog | grep DENIED | grep hello-xiaoguo
Jul 19 13:25:25 liuxg kernel: [19552.926619] audit: type=1400 audit(1468905925.975:4276): apparmor="DENIED" operation="mknod" profile="snap.hello-xiaoguo.evil" name="/var/tmp/myevil.txt" pid=18273 comm="evil" requested_mask="c" denied_mask="c" fsuid=1000 ouid=1000
Jul 19 13:27:26 liuxg kernel: [19673.182647] audit: type=1400 audit(1468906046.230:4310): apparmor="DENIED" operation="mknod" profile="snap.hello-xiaoguo.evil" name="/var/tmp/myevil.txt" pid=18314 comm="evil" requested_mask="c" denied_mask="c" fsuid=1000 ouid=1000

The root cause is that a snap app cannot access directories that do not belong to it; this is the most fundamental security mechanism of snaps. Note the operation field above, which names the exact operation that was denied.
Another class of security problems for snap apps is seccomp violations. We can find these messages in the audit entries of the system log with:

$ sudo grep audit /var/log/syslog

A seccomp violation looks something like this:

audit: type=1326 audit(1430766107.122:16): auid=1000 uid=1000 gid=1000 ses=15 pid=1491 comm="env" exe="/bin/bash" sig=31 arch=40000028 syscall=983045 compat=0 ip=0xb6fb0bd6 code=0x0

So what does this violation actually mean? We can decode it with the following command:

$ scmp_sys_resolver 983045
set_tls

Note that 983045 above comes from the "syscall=983045" field of the error message. This lets us pinpoint exactly which system call caused the problem.
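That extraction can be scripted. Below is a small POSIX shell sketch of my own (the helper name extract_syscall is invented for this illustration and is not part of any snappy tool) that pulls the syscall number out of an audit line so it can be handed to scmp_sys_resolver:

```shell
#!/bin/sh
# Sketch: extract the "syscall=NNN" number from a seccomp audit line.
# extract_syscall is a made-up helper name for this example.
extract_syscall() {
    sed -n 's/.*syscall=\([0-9][0-9]*\).*/\1/p'
}

line='audit: type=1326 audit(1430766107.122:16): auid=1000 uid=1000 gid=1000 ses=15 pid=1491 comm="env" exe="/bin/bash" sig=31 arch=40000028 syscall=983045 compat=0 ip=0xb6fb0bd6 code=0x0'
num=$(printf '%s\n' "$line" | extract_syscall)
echo "$num"    # prints 983045
```

Feeding the resulting number to scmp_sys_resolver, as shown above, then yields the syscall name (set_tls in this case).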

To illustrate, let's create a createfile script:

createfile

#!/bin/sh

set -e
echo "Hello a nice World!"

echo "This example demonstrates the app confinement"
echo "This app tries to write to its own user directory"

echo "Haha" > $HOME/test.txt

echo "Succeeded! Please find a file created at $HOME/test.txt"
echo "If do not see this, please file a bug"

and add the following to our snapcraft.yaml:

 createfile:
   command: bin/createfile

After repackaging and reinstalling the app, we run it:

liuxg@liuxg:~/snappy/desktop/helloworld$ hello-xiaoguo.createfile 
Hello a nice World!
This example demonstrates the app confinement
This app tries to write to its own user directory
Succeeded! Please find a file created at /home/liuxg/snap/hello-xiaoguo/x3/test.txt
If do not see this, please file a bug

Here we can see that a file has been created successfully at /home/liuxg/snap/hello-xiaoguo/x3/. The value of $HOME can be obtained as follows:

liuxg@liuxg:~/snappy/desktop/helloworld$ hello-xiaoguo.env | grep home
GPG_AGENT_INFO=/home/liuxg/.gnupg/S.gpg-agent:0:1
SNAP_USER_COMMON=/home/liuxg/snap/hello-xiaoguo/common
ANDROID_NDK_ROOT=/home/liuxg/android-ndk-r10e
SNAP_USER_DATA=/home/liuxg/snap/hello-xiaoguo/x3
PWD=/home/liuxg/snappy/desktop/helloworld
HOME=/home/liuxg/snap/hello-xiaoguo/x3
XAUTHORITY=/home/liuxg/.Xauthority

Attentive developers may notice that it is in fact identical to the SNAP_USER_DATA variable:

liuxg@liuxg:~/snappy/desktop/helloworld$ hello-xiaoguo.env | grep SNAP
SNAP_USER_COMMON=/home/liuxg/snap/hello-xiaoguo/common
SNAP_LIBRARY_PATH=/var/lib/snapd/lib/gl:
SNAP_COMMON=/var/snap/hello-xiaoguo/common
SNAP_USER_DATA=/home/liuxg/snap/hello-xiaoguo/x3
SNAP_DATA=/var/snap/hello-xiaoguo/x3
SNAP_REVISION=x3
SNAP_NAME=hello-xiaoguo
SNAP_ARCH=amd64
SNAP_VERSION=1.0
SNAP=/snap/hello-xiaoguo/x3

Note that for a daemon app, $HOME points to $SNAP_DATA rather than $SNAP_USER_DATA. Keep this in mind. For a detailed description, see the article "Snap security policy and sandboxing".
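That distinction can be captured in a short POSIX shell sketch (my own illustration; pick_data_dir and its daemon/user argument are invented here, not snapd API):

```shell
#!/bin/sh
# Sketch: choose the writable data directory for a confined snap app.
# snapd exports SNAP_DATA and SNAP_USER_DATA at launch; we fake them here.
SNAP_DATA=/var/snap/hello-xiaoguo/x3
SNAP_USER_DATA=/home/liuxg/snap/hello-xiaoguo/x3

pick_data_dir() {
    if [ "$1" = "daemon" ]; then
        # For daemons, $HOME points at $SNAP_DATA
        echo "$SNAP_DATA"
    else
        # For per-user apps, $HOME points at $SNAP_USER_DATA
        echo "$SNAP_USER_DATA"
    fi
}

echo "daemon data dir:   $(pick_data_dir daemon)"
echo "user app data dir: $(pick_data_dir user)"
```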

Debugging security issues is tedious work. Fortunately, we can use snappy-debug to help: it analyses where exactly the violations occur and resolves system call names for us.

$ sudo snap install snappy-debug
$ sudo /snap/bin/snappy-debug.security scanlog foo

For detailed usage, see the article "Snap security policy and sandboxing".




6) Creating an app that can write to the desktop home directory


The previous section covered the basics of snap security. Now suppose our snap app wants to write into the home directory on our desktop machine; how do we do that? First, we create the following createfiletohome script:

createfiletohome


#!/bin/sh

set -e
echo "Hello a nice World!"

echo "This example demonstrates the app confinement"
echo "This app tries to write to its own user directory"

echo "Haha" > /home/$USER/test.txt

echo "Succeeded! Please find a file created at $HOME/test.txt"
echo "If do not see this, please file a bug"

The script above writes to the user's home directory. If we rebuild and run the app, we find:

liuxg@liuxg:~/snappy/desktop/helloworld$ hello-xiaoguo.createfiletohome 
Hello a nice World!
This example demonstrates the app confinement
This app tries to write to its own user directory
/snap/hello-xiaoguo/x1/bin/createfiletohome: 9: /snap/hello-xiaoguo/x1/bin/createfiletohome: cannot create /home/liuxg/test.txt: Permission denied

Clearly it produced an error: we are not allowed to write anything to the user's home directory. So how can we?

Method one: install the app as follows:

$ sudo snap install hello-xiaoguo_1.0_all.snap --devmode --force-dangerous

Note the --devmode flag above. It means that during development our snap ignores all security confinement; it runs like a traditionally packaged app, without any of the restrictions the snap system imposes. This is usually appropriate early in development; once the functionality is complete, we come back and finish the security part.

For a snap installed with --devmode (we can also set confinement to devmode in snapcraft.yaml), note the following:

Snaps can be uploaded to the edge and beta channels only

It is also worth pointing out that if confinement is set to devmode in snapcraft.yaml, ordinary users cannot install the app through "snap find" and "snap install". As the developer, you can still install it for testing with "snap install --devmode".

Method two: rewrite our snapcraft.yaml as follows:

snapcraft.yaml

name: hello-xiaoguo
version: 1.0
architectures: [ all ]
summary: The 'hello-world' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox
    * hello-world.evil - show how snappy sandboxes binaries
    * hello-world.sh   - enter interactive shell that runs in app sandbox
    * hello-world      - simply output text
confinement: strict

apps:
 env:
   command: bin/env
 evil:
   command: bin/evil
 sh:
   command: bin/sh
 hello-world:
   command: bin/echo
 createfile:
   command: bin/createfile
 createfiletohome:
   command: bin/createfiletohome
   plugs: [home]

parts:
 hello:
  plugin: copy
  files:
    ./bin: bin

Above, we added the home plug to the createfiletohome app, declaring that the app may access the user's home directory. In the snap system, when an app wants to access a restricted resource, or a resource owned by another snap, it does so through an interface. For more on snap interfaces, see the Interfaces link. We can list all interfaces with the following command:

liuxg@liuxg:~$ snap interfaces
Slot                 Plug
:camera              -
:cups-control        -
:firewall-control    -
:gsettings           -
:home                -
:locale-control      -
:log-observe         -
:modem-manager       -
:mount-observe       -
:network             -
:network-bind        -
:network-control     -
:network-manager     -
:network-observe     -
:opengl              -
:optical-drive       -
:ppp                 -
:pulseaudio          -
:snapd-control       -
:system-observe      -
:timeserver-control  -
:timezone-control    -
:unity7              -
:x11                 -

Repackage and reinstall the app, then run it again:

liuxg@liuxg:~/snappy/desktop/helloworld$ hello-xiaoguo.createfiletohome 
Hello a nice World!
This example demonstrates the app confinement
This app tries to write to its own user directory
Succeeded! Please find a file created at /home/liuxg/snap/hello-xiaoguo/x1/test.txt
If do not see this, please file a bug

This time, the earlier error message is gone.


7) The app shell


Although our hello app is very simple, it demonstrates many of the things we want to see in a snap app. The package also contains an app called sh, which is in fact a shell. We can start it like this:

$ hello-xiaoguo.sh

Once it is launched:

liuxg@liuxg:~$ hello-xiaoguo.sh
Launching a shell inside the default app confinement. Navigate to your
app-specific directories with:

  $ cd $SNAP
  $ cd $SNAP_DATA
  $ cd $SNAP_USER_DATA

bash-4.3$ env | grep snap
SNAP_USER_COMMON=/home/liuxg/snap/hello-xiaoguo/common
SNAP_LIBRARY_PATH=/var/lib/snapd/lib/gl:
SNAP_COMMON=/var/snap/hello-xiaoguo/common
SNAP_USER_DATA=/home/liuxg/snap/hello-xiaoguo/x4
SNAP_DATA=/var/snap/hello-xiaoguo/x4
PATH=/snap/hello-xiaoguo/x4/bin:/snap/hello-xiaoguo/x4/usr/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
HOME=/home/liuxg/snap/hello-xiaoguo/x4
XDG_DATA_DIRS=/usr/share/ubuntu:/usr/share/gnome:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop
SNAP=/snap/hello-xiaoguo/x4
bash-4.3$ 
We can use this shell to do whatever we need. For example:

bash-4.3$ cd /home/liuxg
bash-4.3$ pwd
/home/liuxg
bash-4.3$ touch hello.text
touch: cannot touch 'hello.text': Permission denied

As we can see, snap confinement means we can no longer freely create files wherever we like, as we could on a traditional system. We leave further experiments for you to explore on your own.

If you want to learn more about snapcraft, try the following command:

$ snapcraft tour
Snapcraft tour initialized in ./snapcraft-tour/
Instructions are in the README, or http://snapcraft.io/create/#tour

You will find the tutorial examples in the snapcraft-tour directory in your home directory.









Author: UbuntuTouch, published 2016/7/18 11:12:59. Original link

Read more
UbuntuTouch

[Original] A WebCam snap example

Most of you have used a webcam before, so it needs no introduction. In today's example we look at a snap design for a webcam. If you have read my earlier Ubuntu Core IoT articles, this project may look familiar; see my article "Hands-on snapcraft --- Web Camera". Unfortunately, at the time of that article the webcam could not be tested on the desktop in a KVM environment unless we installed a Snappy system. Today's design is almost identical to the earlier one, with just a few small changes (the snap design keeps evolving).

The source code of the webcam design can be found at:

https://github.com/snapcore/snapcraft/tree/master/demos/webcam-webui

We can fetch the source code as follows:

$ git clone https://github.com/ubuntu-core/snapcraft
$ cd snapcraft/demos

An introduction to the webcam app can be found at:

https://developer.ubuntu.com/en/snappy/build-apps/your-first-snap/

Let's first look at the app's snapcraft.yaml:

snapcraft.yaml

name: webcam-webui
version: 1
summary: Webcam web UI
description: Exposes your webcam over a web UI
confinement: strict

apps:
  webcam-webui:
    command: bin/webcam-webui
    daemon: simple
    plugs: [network-bind]

parts:
  cam:
    plugin: go
    go-packages:
      - github.com/mikix/golang-static-http
    stage-packages:
      - fswebcam
    filesets:
      fswebcam:
        - usr/bin/fswebcam
        - lib
        - usr/lib
      go-server:
        - bin/golang-*
    stage:
      - $fswebcam
      - $go-server
    snap:
      - $fswebcam
      - $go-server
      - -usr/share/doc
  glue:
    plugin: copy
    files:
      webcam-webui: bin/webcam-webui

The fswebcam user manual can be found at this link.

Note: this is the snapcraft.yaml as it stands today, and it does not contain a camera plug. We have already reported a bug. If you package this app directly:

$ snapcraft

and install it:

$ sudo snap install webcam-webui_1_amd64.snap --force-dangerous

Since this is a daemon-type app, it is treated as a service: we do not need to start it by hand, as it runs automatically upon installation. We can then open the following address in a browser:



Clearly our web server is running, but we see no camera output at all. Why is that?
We can inspect the service's log with the following command:

$ journalctl -u snap.webcam-webui.webcam-webui

We get the following result:


Among the messages (remember to scroll to the end), it clearly shows:

7月 19 10:33:13 liuxg ubuntu-core-launcher[13442]: Trying source module v4l2...
7月 19 10:33:13 liuxg ubuntu-core-launcher[13442]: Error opening device: /dev/video0
7月 19 10:33:13 liuxg ubuntu-core-launcher[13442]: open: Permission denied

What does this tell us? We have hit a security problem: our app cannot open the device /dev/video0. So how do we fix it? Type the following command in a terminal:

liuxg@liuxg:~$ snap interfaces
Slot                 Plug
:camera              -
:cups-control        -
:firewall-control    -
:gsettings           -
:home                -
:locale-control      -
:log-observe         -
:modem-manager       -
:mount-observe       -
:network             -
:network-bind        webcam-webui
:network-control     -
:network-manager     -
:network-observe     -
:opengl              -
:optical-drive       -
:ppp                 -
:pulseaudio          -
:snapd-control       -
:system-observe      -
:timeserver-control  -
:timezone-control    -
:unity7              -
:x11                 -

Note the very first slot: camera. Our app is, after all, a camera app; if we want to access the camera, we must declare the camera plug so that we can genuinely access the device. With that in mind, we add camera to our snapcraft.yaml. The updated file is as follows:

snapcraft.yaml


name: webcam-webui
version: 1
summary: Webcam web UI
description: Exposes your webcam over a web UI
confinement: strict

apps:
  webcam-webui:
    command: bin/webcam-webui
    daemon: simple
    plugs: [camera,network-bind]

parts:
  cam:
    plugin: go
    go-packages:
      - github.com/mikix/golang-static-http
    stage-packages:
      - fswebcam
    filesets:
      fswebcam:
        - usr/bin/fswebcam
        - lib
        - usr/lib
      go-server:
        - bin/golang-*
    stage:
      - $fswebcam
      - $go-server
    snap:
      - $fswebcam
      - $go-server
      - -usr/share/doc
  glue:
    plugin: copy
    files:
      webcam-webui: bin/webcam-webui

Note that we added camera to plugs. Repackage and reinstall the app, then run again:

liuxg@liuxg:~$ snap interfaces
Slot                 Plug
:camera              -
:cups-control        -
:firewall-control    -
:gsettings           -
:home                -
:locale-control      -
:log-observe         -
:modem-manager       -
:mount-observe       -
:network             -
:network-bind        webcam-webui
:network-control     -
:network-manager     -
:network-observe     -
:opengl              -
:optical-drive       -
:ppp                 -
:pulseaudio          -
:snapd-control       -
:system-observe      -
:timeserver-control  -
:timezone-control    -
:unity7              -
:x11                 -
-                    webcam-webui:camera

Sure enough, this time we see:

-                    webcam-webui:camera

The camera plug is now listed, but it is not actually connected yet. Looking at the Interfaces documentation, we find:

camera

Can access the first video camera. Suitable for programs wanting to use the webcams.

Usage: common Auto-Connect: no

It says Auto-Connect: no, meaning we need to connect it manually:

$ sudo snap connect webcam-webui:camera ubuntu-core:camera

The command above manually connects a plug to a slot. Rerun the following command:

liuxg@liuxg:~$ snap interfaces
Slot                 Plug
:camera              webcam-webui
:cups-control        -
:firewall-control    -
:gsettings           -
:home                -
:locale-control      -
:log-observe         -
:modem-manager       -
:mount-observe       -
:network             -
:network-bind        webcam-webui
:network-control     -
:network-manager     -
:network-observe     -
:opengl              -
:optical-drive       -
:ppp                 -
:pulseaudio          -
:snapd-control       -
:system-observe      -
:timeserver-control  -
:timezone-control    -
:unity7              -
:x11                 -

This time we can see that our webcam-webui app is connected to both network-bind and camera. Open the browser again:






The full source of our project is at: https://github.com/liu-xiao-guo/webcam-snap

We can check how our service is running as follows:

$ systemctl status -l snap.webcam-webui.webcam-webui

liuxg@liuxg:~$ systemctl status snap.webcam-webui.webcam-webui
● snap.webcam-webui.webcam-webui.service - Service for snap application webcam-webui.webcam-webui
   Loaded: loaded (/etc/systemd/system/snap.webcam-webui.webcam-webui.service; enabled; vendor prese
   Active: active (running) since 二 2016-07-19 12:38:02 CST; 36min ago
 Main PID: 16325 (webcam-webui)
   CGroup: /system.slice/snap.webcam-webui.webcam-webui.service
           ├─16325 /bin/sh /snap/webcam-webui/x1/bin/webcam-webui
           ├─16327 golang-static-http
           └─17488 sleep 10

7月 19 13:14:31 liuxg ubuntu-core-launcher[16325]: Adjusting resolution from 384x288 to 960x540.
7月 19 13:14:31 liuxg ubuntu-core-launcher[16325]: --- Capturing frame...
7月 19 13:14:31 liuxg ubuntu-core-launcher[16325]: Captured frame in 0.00 seconds.
7月 19 13:14:32 liuxg ubuntu-core-launcher[16325]: --- Processing captured image...
7月 19 13:14:32 liuxg ubuntu-core-launcher[16325]: Fontconfig error: Cannot load default config file
7月 19 13:14:32 liuxg ubuntu-core-launcher[16325]: Fontconfig error: Cannot load default config file
7月 19 13:14:32 liuxg ubuntu-core-launcher[16325]: Fontconfig error: Cannot load default config file
7月 19 13:14:32 liuxg ubuntu-core-launcher[16325]: Unable to load font 'sans': fontconfig: Couldn't f
7月 19 13:14:32 liuxg ubuntu-core-launcher[16325]: Disabling the the banner.
7月 19 13:14:32 liuxg ubuntu-core-launcher[16325]: Writing JPEG image to 'shot.jpeg'.

We can stop the service with:
$ sudo systemctl stop snap.webcam-webui.webcam-webui

We can start it again with:

$ sudo systemctl start snap.webcam-webui.webcam-webui





Author: UbuntuTouch, published 2016/7/19 10:55:55. Original link

Read more
UbuntuTouch

[Original] An introduction to Snappy Ubuntu and snap packages (in English)

The video below introduces what the Snappy Ubuntu system is and what the snap package format looks like, covering some characteristics of both.


youku: An introduction to Snappy Ubuntu and snap packages

 https://www.youtube.com/watch?v=0ApRUndiXKU


youku: Let's Play Snapcraft #2 - FileZilla

Author: UbuntuTouch, published 2016/7/25 12:59:24. Original link

Read more
kevin gunn

If you’ve been following along, you’ll know that we’ve put some snap work in to show how you might use Mir as a framework to build a kiosk style product. This post touches on a couple of recent evolutions.

First, there’s been recent work on improving Mir’s API stability at the server level, making it a true toolkit for building shells, through Miral, which you can read about here. You can also read about the latest Miral 0.3 release here. Miral ships two default shell implementations: one is miral-shell and the other is miral-kiosk. Miral-kiosk, as the name suggests, is a very minimal shell, keeping footprint and complexity low. Hence it’s perfect for targeting products requiring simple, single-application user interfaces. So we’ve created a snap built on it, named “mir-kiosk”.

Eventually Miral will become part of Mir itself; we just need to work through support for trusted prompts in more complex shell use cases (which is happening as I type). But the point of this post is demonstrating miral-kiosk in a snap. If you are considering using Mir snaps in a production kiosk-style product, I would recommend miral-kiosk as the preferred method. The same confinement achieved before still applies, and you can run the same example applications.

Second, with the advent of the content interface in the latest snapd release, we are moving the Mir libraries out into their own snap that can be shared by the shell and Mir clients. This keeps the Mir libraries in sync with one another, and the deduplication means we avoid many snaps each carrying their own copies of the Mir libraries as stage packages. This snap’s name is “mir-libs”.

Both the mir-kiosk and mir-libs snaps are available in the snap store. They can be demonstrated using the same mir-client snap used in earlier posts.

Now, to experience this you need to download the latest ubuntu-core image, which is Release Candidate 2 (RC2). Download the appropriate architecture of the mir-client snap and then copy that over to your running ubuntu-core image. You can then ssh into your device/VM and install in this particular order.

$ snap install mir-libs --channel=edge --devmode
$ snap install mir-kiosk --channel=edge --devmode
$ snap install mir-client_0.24.1_amd64.snap --devmode --dangerous

 

At this point you should witness PhotoViewer running on mir-kiosk using mir-libs via content interface on your device or VM.

One last note: you might notice I’ve added --devmode to the installation steps here. That is due to a small regression in the RC2 image, a bug that is actively being worked on. Confinement is still maintained with the mir-kiosk snap.

 

Read more

Dustin Kirkland

If you haven't heard about last week's Dirty COW vulnerability, I hope all of your Linux systems are automatically patching themselves...


Why?  Because every single Linux-based phone, router, modem, tablet, desktop, PC, server, virtual machine, and absolutely everything in between -- including all versions of Ubuntu since 2007 -- was vulnerable to this face-palming critical security vulnerability.

Any non-root local user of a vulnerable system can easily exploit the vulnerability and become the root user in a matter of a few seconds.  Watch...


Coincidentally, just before the vulnerability was published, we released the Canonical Livepatch Service for Ubuntu 16.04 LTS.  The thousands of users who enabled canonical-livepatch on their Ubuntu 16.04 LTS systems within those first few hours received and applied the fix to Dirty COW automatically, in the background, and without rebooting!

If you haven't already enabled the Canonical Livepatch Service on your Ubuntu 16.04 LTS systems, you should really consider doing so, with 3 easy steps:
  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token
  2. Install the canonical-livepatch snap
    $ sudo snap install canonical-livepatch 
  3. Enable the service with your token
    $ sudo canonical-livepatch enable [TOKEN]
And you’re done! You can check the status at any time using:

$ canonical-livepatch status --verbose

Let's retry that same vulnerability, on the same system, but this time, having been livepatched...


Aha!  Thwarted!

So that's the Ubuntu 16.04 LTS kernel space...  What about userspace?  Most of the other recent, branded vulnerabilities (Heartbleed, ShellShock, CRIME, BEAST) have been critical vulnerabilities in userspace packages.

As of Ubuntu 16.04 LTS, the unattended-upgrades package is now part of the default package set, so you should already have it installed on your Ubuntu desktops and servers.  If you don't already have it installed, you can install it with:

$ sudo apt install unattended-upgrades

And moreover, as of Ubuntu 16.04 LTS, the unattended-upgrades package automatically downloads and installs important security updates once per day, automatically patching critical security vulnerabilities and keeping your Ubuntu systems safe by default.  Older versions of Ubuntu (or Ubuntu systems that upgraded to 16.04) might need to enable this behavior using:

$ sudo dpkg-reconfigure unattended-upgrades
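Under the hood, enabling this writes APT's periodic settings; on Ubuntu these typically live in /etc/apt/apt.conf.d/20auto-upgrades and look like the following (a value of "1" means the action runs daily):

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```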


With that combination enabled -- (1) automatic livepatches to your kernel, plus (2) automatic application of security package updates -- Ubuntu 16.04 LTS is the most secure Linux distribution to date.  Period.

Mooooo,
:-Dustin

Read more
Stéphane Graber

LXD logo

Introduction

When LXD 2.0 shipped with Ubuntu 16.04, LXD networking was pretty simple. You could either use that “lxdbr0” bridge that “lxd init” would have you configure, provide your own or just use an existing physical interface for your containers.

While this certainly worked, it was a bit confusing because most of that bridge configuration happened outside of LXD in the Ubuntu packaging. Those scripts could only support a single bridge and none of this was exposed over the API, making remote configuration a bit of a pain.

That was all until LXD 2.3 when LXD finally grew its own network management API and command line tools to match. This post is an attempt at an overview of those new capabilities.

Basic networking

Right out of the box, LXD 2.3 comes with no network defined at all. “lxd init” will offer to set one up for you and attach it to all new containers by default, but let’s do it by hand to see what’s going on under the hood.

To create a new network with a random IPv4 and IPv6 subnet and NAT enabled, just run:

stgraber@castiana:~$ lxc network create testbr0
Network testbr0 created

You can then look at its config with:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
 ipv4.address: 10.150.19.1/24
 ipv4.nat: "true"
 ipv6.address: fd42:474b:622d:259d::1/64
 ipv6.nat: "true"
managed: true
type: bridge
usedby: []

If you don’t want those auto-configured subnets, you can go with:

stgraber@castiana:~$ lxc network create testbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
Network testbr0 created

Which will result in:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
 ipv4.address: 10.0.3.1/24
 ipv4.nat: "true"
 ipv6.address: none
managed: true
type: bridge
usedby: []

Having a network created and running won’t do you much good if your containers aren’t using it.
To have your newly created network attached to all containers, you can simply do:

stgraber@castiana:~$ lxc network attach-profile testbr0 default eth0

To attach a network to a single existing container, you can do:

stgraber@castiana:~$ lxc network attach my-container default eth0

Now, let’s say you have openvswitch installed on that machine and want to convert that bridge to an OVS bridge; just change the driver property:

stgraber@castiana:~$ lxc network set testbr0 bridge.driver openvswitch

If you want to do a bunch of changes all at once, “lxc network edit” will let you edit the network configuration interactively in your text editor.

Static leases and port security

One of the nice things about having LXD manage the DHCP server for you is that it makes managing DHCP leases much simpler. All you need is a container-specific nic device and the right property set.

root@yak:~# lxc init ubuntu:16.04 c1
Creating c1
root@yak:~# lxc network attach testbr0 c1 eth0
root@yak:~# lxc config device set c1 eth0 ipv4.address 10.0.3.123
root@yak:~# lxc start c1
root@yak:~# lxc list c1
+------+---------+-------------------+------+------------+-----------+
| NAME |  STATE  |        IPV4       | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-------------------+------+------------+-----------+
|  c1  | RUNNING | 10.0.3.123 (eth0) |      | PERSISTENT | 0         |
+------+---------+-------------------+------+------------+-----------+

And same goes for IPv6 but with the “ipv6.address” property instead.
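For instance (a sketch of my own; the ULA address is made up to match the earlier testbr0 subnet):

```
root@yak:~# lxc config device set c1 eth0 ipv6.address fd42:474b:622d:259d::123
```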

Similarly, if you want to prevent your container from ever changing its MAC address or forwarding traffic for any other MAC address (such as nesting), you can enable port security with:

root@yak:~# lxc config device set c1 eth0 security.mac_filtering true

DNS

LXD runs a DNS server on the bridge. On top of letting you set the DNS domain for the bridge (“dns.domain” network property), it also supports 3 different operating modes (“dns.mode”):

  • “managed” will have one DNS record per container, matching its name and known IP addresses. The container cannot alter this record through DHCP.
  • “dynamic” allows the containers to self-register in the DNS through DHCP. So whatever hostname the container sends during the DHCP negotiation ends up in DNS.
  • “none” is for a simple recursive DNS server without any kind of local DNS records.

The default mode is “managed” and is typically the safest and most convenient as it provides DNS records for containers but doesn’t let them spoof each other’s records by sending fake hostnames over DHCP.

Using tunnels

On top of all that, LXD also supports connecting to other hosts using GRE or VXLAN tunnels.

A LXD network can have any number of tunnels attached to it, making it easy to create networks spanning multiple hosts. This is mostly useful for development, test and demo uses, with production environments usually preferring VLANs for that kind of segmentation.

So say, you want a basic “testbr0” network running with IPv4 and IPv6 on host “edfu” and want to spawn containers using it on host “djanet”. The easiest way to do that is by using a multicast VXLAN tunnel. This type of tunnels only works when both hosts are on the same physical segment.

root@edfu:~# lxc network create testbr0 tunnel.lan.protocol=vxlan
Network testbr0 created
root@edfu:~# lxc network attach-profile testbr0 default eth0

This defines a “testbr0” bridge on host “edfu” and sets up a multicast VXLAN tunnel on it for other hosts to join. In this setup, “edfu” will be the one acting as a router for that network, providing DHCP, DNS, etc.; the other hosts will just be forwarding traffic over the tunnel.

root@djanet:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.lan.protocol=vxlan
Network testbr0 created
root@djanet:~# lxc network attach-profile testbr0 default eth0

Now you can start containers on either host and see them getting IP from the same address pool and communicate directly with each other through the tunnel.

As mentioned earlier, this uses multicast, which usually won’t do you much good when crossing routers. For those cases, you can use VXLAN in unicast mode or a good old GRE tunnel.

To join another host using GRE, first configure the main host with:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol gre
root@edfu:~# lxc network set testbr0 tunnel.nuturo.local 172.17.16.2
root@edfu:~# lxc network set testbr0 tunnel.nuturo.remote 172.17.16.9

And then the “client” host with:

root@nuturo:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.edfu.protocol=gre tunnel.edfu.local=172.17.16.9 tunnel.edfu.remote=172.17.16.2
Network testbr0 created
root@nuturo:~# lxc network attach-profile testbr0 default eth0

If you’d rather use vxlan, just do:

root@edfu:~# lxc network set testbr0 tunnel.edfu.id 10
root@edfu:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

And:

root@nuturo:~# lxc network set testbr0 tunnel.edfu.id 10
root@nuturo:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

The tunnel id is required here to avoid conflicting with the already configured multicast vxlan tunnel.

And that’s how you make cross-host networking easily with recent LXD!

Conclusion

LXD now makes it very easy to define anything from a simple single-host network to a very complex cross-host network for thousands of containers. It also makes it very simple to define a new network just for a few containers or add a second device to a container, connecting it to a separate private network.

While this post goes through most of the different features we support, there are quite a few more knobs that can be used to fine tune the LXD network experience.
A full list can be found here: https://github.com/lxc/lxd/blob/master/doc/configuration.md

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

This is the eleventh blog post in this series about LXD 2.0.

LXD logo

Introduction

First of all, sorry for the delay. It took quite a long time before I finally managed to get all of this going. My first attempts were using devstack, which ran into a number of issues that had to be resolved. Yet even after all that, I still wasn’t able to get networking going properly.

I finally gave up on devstack and tried “conjure-up” to deploy a full Ubuntu OpenStack using Juju in a pretty user friendly way. And it finally worked!

So below is how to run a full OpenStack, using LXD containers instead of VMs and running all of this inside a LXD container (nesting!).

Requirements

This post assumes you’ve got a working LXD setup, providing containers with network access and that you have a pretty beefy CPU, around 50GB of space for the container to use and at least 16GB of RAM.

Remember, we’re running a full OpenStack here, this thing isn’t exactly light!

Setting up the container

OpenStack is made of a lot of different components, doing a lot of different things. Some require additional privileges, so to make our lives easier, we’ll use a privileged container.

We’ll configure that container to support nesting, pre-load all the required kernel modules and allow it access to /dev/mem (as is apparently needed).

Please note that this means that most of the security benefits of LXD containers are effectively disabled for that container. However, the containers that will be spawned by OpenStack itself will be unprivileged and use all the normal LXD security features.

lxc launch ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch"
lxc config device add openstack mem unix-char path=/dev/mem

There is a small bug in LXD where it would attempt to load kernel modules that have already been loaded on the host. This has been fixed in LXD 2.5 and will be fixed in LXD 2.0.6 but until then, this can be worked around with:

lxc exec openstack -- ln -s /bin/true /usr/local/bin/modprobe

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get OpenStack going.

lxc exec openstack -- apt-add-repository ppa:conjure-up/next -y
lxc exec openstack -- apt-add-repository ppa:juju/stable -y
lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install conjure-up -y

And the last setup step is to configure LXD networking inside the container:

lxc exec openstack -- lxd init

Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

And that’s it for the container configuration itself, now we can deploy OpenStack!

Deploying OpenStack with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy OpenStack.
This is a nice, user friendly, tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec openstack -- sudo -u ubuntu -i conjure-up

  • Select “OpenStack with NovaLXD”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy OpenStack. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Conjure-Up deploying OpenStack

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Access the dashboard and spawn a container

The dashboard runs inside a container, so you can’t just hit it from your web browser.
The easiest way around this is to setup a NAT rule with:

lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>

Where “<IP>” is the dashboard IP address conjure-up gave you at the end of the installation.

You can now grab the IP address of the “openstack” container (from “lxc info openstack”) and point your web browser to: http://<container ip>/horizon

This can take a few minutes to load the first time around. Once the login screen is loaded, enter the default login and password (admin/openstack) and you’ll be greeted by the OpenStack dashboard!

oslxd-dashboard

You can now head to the “Project” tab on the left and the “Instances” page. To start a new instance using nova-lxd, click on “Launch instance”, select what image you want, network, … and your instance will get spawned.

Once it’s running, you can assign it a floating IP which will let you reach your instance from within your “openstack” container.

Conclusion

OpenStack is a pretty complex piece of software, it’s also not something you really want to run at home or on a single server. But it’s certainly interesting to be able to do it anyway, keeping everything contained to a single container on your machine.

Conjure-Up is a great tool to deploy such complex software, using Juju behind the scene to drive the deployment, using LXD containers for every individual service and finally for the instances themselves.

It’s also one of the very few cases where multiple levels of container nesting actually make sense!

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Alan Griffiths

MirAL-0.3

There’s a new MirAL release (0.3.0) available in ‘Zesty Zapus’ (Ubuntu 17.04). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

Unsurprisingly, given the project’s original goal, the ABI is unchanged. The changes in 0.3.0 fall into three categories:

  1. bugfixes;
  2. enabling keymap in newer Mir versions; and,
  3. additional features for shell developers to use.

Bugfixes

#1631958 Crash after closing Qt dialog

#1632325, #1633052 tooltips positioned wrong with mir-0.24

#1625849 [Xmir] Alt+` switch between different X11-apps not just windows.

Added miral-xrun as a better way to use Xmir

(unnumbered) miral-shell splash screen should be fullscreen.

(unnumbered) deduplicate logging of WindowSpecification::top_left

Enabling Keyboard Map in newer Mir versions

A new class miral::Keymap allows the keyboard map to be specified either programmatically or (by default) on the command line. Being in the UK I can get a familiar keyboard layout like this:

miral-shell --keymap gb

The class is also provided on Mir versions prior to 0.24.1, but does nothing.

Additional Features For Shell Developers To Use

#1629349 Shell wants way to associate initially requested window creation state with the window later created.

Shell code can now set a userdata property on the WindowSpecification in place_new_surface() and this is transferred to the WindowInfo.

Added miral/version.h to permit compile-time feature detection. If you want to detect different versions of MirAL at compile time you can, for example, write:

#include <miral/version.h>
#if MIRAL_VERSION >= MIR_VERSION_NUMBER(0, 3, 0)
#include <miral/keymap.h>
#endif

Added a convenient overload of WindowManagerTools::modify_window() that doesn’t require the WindowInfo.

Read more
Joseph Williams

Working to make Juju more accessible

In the middle of July the Juju team got together to work towards making Juju more accessible. For now the aim was to reach Level AA compliance, with the intention of reaching AAA in the future.

We started by reading through the W3C accessibility guidelines and distilling each principle into sentences that made sense to us as a team and documenting this into a spreadsheet.

We then created separate columns as to how this would affect the main areas across Juju as a product. Namely static pages on jujucharms.com, the GUI and the inspector element within the GUI.

image02: GUI live on jujucharms.com

image04: Inspector within the GUI

image03: Example of static page content from the homepage

image00: The Juju team working through the accessibility guidelines

Tackling this as a team meant that we were all on the same page as to which areas of the Juju GUI were affected by not being AA compliant and how we could work to improve it.

We also discussed the amount of design effort needed for each of the areas that isn’t AA compliant and how long we thought it would take to make improvements.

You can have a look at the spreadsheet we created to help us track the changes that we need to make to Juju to make it more accessible:

image01: Spreadsheet created to track the changes and improvements that need to be made
This workflow has helped us manage and scope the tasks ahead and clear up the uncertainties we had about which tasks were done and which requirements needed to be met to achieve the level of accessibility we are aiming for.

Read more
Grazina Borosko

The Yakkety Yak 16.10 is released and now you can download the new wallpaper by clicking here. It’s the latest part of the set for the Ubuntu 2016 releases following Xenial Xerus. You can read about our wallpaper visual design process here.

Ubuntu 16.10 Yakkety Yak

yakkety_yak_wallpaper_4096x2304

Ubuntu 16.10 Yakkety Yak (light version)

yakkety_yak_wallpaper_4096x2304_grey_version

Read more
facundo

Ni una menos ("Not one woman less")


Today is a historic day. Women across the country and in other Latin American countries are taking to the streets to fight for their right to be treated as human beings.

I felt like writing something here, but the truth is that if you want to know more about the subject (much better written and handled than I could manage) you can look at the Twitter accounts of Luciana Peker, Paula, Caro, or so many other people who are far more involved in this than I am.

But then I came across this post by V, which reproduces something so good that I wanted to put it here (it seems to be anonymous; I couldn't find out who to credit...).


And why not "Ni uno menos" ("not one man less")?

Because we men have the privilege of walking the streets in peace, without fear of being catcalled with obscene words and repulsive remarks. We are spared the disgust of having people rub up against us on public transport, or masturbate in vans dedicating their semen to our bodies.

Because nobody criticizes the way we dress, tells us how short our shorts are, or accuses us of "going around teasing" if our boxers show.

Because it never crosses our minds that we might go out dancing and end up raped because someone put something in our drinks, nor do we have to keep dozens of creeps in line all night long, men who think they own us and that we must obey and be submissive.

Because, apparently, society thinks garbage bags don't look as good on us as they do on them.

Because when we are boys nobody gives us toy brooms, baby dolls or play kitchens so we can "get some practice in".

Because we have the privilege of having mom cook for us, our sisters wash the dishes, and dad invite us to the couch to watch the game in comfort.

Because our friends don't have to let us know they got home safely; we simply take it for granted.

Because we have the privilege of not being criticized for sleeping with as many people as we want (in fact, the more there are, the cooler we are).

Because they are the hysterical ones.

Because we are smarter and even earn more doing the same job.

Because if I get promoted at work it's because of my ability, not because I slept with anyone.

Because if we don't want to be fathers we just walk away, disappear, and that's the end of it. They want to abort because they are murderers who won't take responsibility for what is theirs, which is to be mothers above all else. Because they didn't take precautions, and that part is not our concern.

Because I'm a real man, so I mock trans women, sleep with them and kill them to reaffirm my masculinity.

Because if I like men, nobody says it's because I haven't been with the right woman yet.

Because I know more about politics and handle myself better in that world. Because if she becomes a member of congress it's because the quota had to be filled or, guess what? Yes: she slept with somebody.

Because I don't fetch the price on the prostitution market that they do, and I'm not afraid of being kidnapped and ending up in a brothel doing things with my body that I don't want to do. Because I can go to a brothel and be a champ, while being a whore is a disgrace.

Because if I screw up, a bouquet of flowers and some chocolates on Women's Day turn me into a chivalrous gentleman, a real man.

Simply put: because you have no idea what it is to be them in a world as unequal as this one.

Let's make it very clear: we are not yet talking about "not one man less" because we are full of privileges that we should question a thousand times over before talking about exaggerated man-hating feminazis or about "equalism". Because the day we start to build a new masculinity, stop raising heteronormative, patriarchal little machos, and have the debate this subject deserves, the day they stop being killed and humiliated, then we will be able to talk in other terms.

Machismo attacks all of us in general, but it kills them in particular.

Don't be complicit.

Enough sexist violence.

Read more
Alan Griffiths

adding wallpaper to egmde

Before we begin

My previous egmde post described how to build and install MirAL. If you’re using Ubuntu 16.10 (Yakkety) then that is no longer necessary as MirAL is in the archive. All you need is:

$ sudo apt install libmiral-dev

Otherwise, you can follow the instructions there.

The egmde source for this article is available at the same place:

$ git clone https://github.com/AlanGriffiths/egmde.git

The example code

In this article we add some simple, Ubuntu orange wallpaper. The previous egmde.cpp file is largely unchanged we just add a couple of headers and update the main program that looked like this:

int main(int argc, char const* argv[])
{
    miral::MirRunner runner{argc, argv};

    return runner.run_with(
        {
            set_window_managment_policy<ExampleWindowManagerPolicy>()
        });
}

Now it is:

int main(int argc, char const* argv[])
{
    miral::MirRunner runner{argc, argv};
    Wallpaper wallpaper;
    runner.add_stop_callback([&] { wallpaper.stop(); });
    return runner.run_with(
        {
            miral::StartupInternalClient{"wallpaper", std::ref(wallpaper)},
            set_window_managment_policy<ExampleWindowManagerPolicy>()
        });
}

The Wallpaper class is what we’ll be implementing here. StartupInternalClient starts it as an in-process Mir client, and the verbose lambda incantations work around the limitations of the current MirAL implementation.

The Wallpaper class uses a simple “Worker” class to pass work off to a separate thread. I’ll only show the header here, as the methods are self-explanatory:

class Worker
{
public:
    ~Worker();
    void start_work();
    void enqueue_work(std::function<void()> const& functor);
    void stop_work();

};

The Wallpaper class

class Wallpaper : Worker
{
public:
    // These operators are the protocol for an "Internal Client"
    void operator()(miral::toolkit::Connection c) { start(c); }
    void operator()(std::weak_ptr<mir::scene::Session> const&){ }

    void start(miral::toolkit::Connection connection);
    void stop();
private:
    std::mutex mutable mutex;
    miral::toolkit::Connection connection;
    miral::toolkit::Surface surface;
    void create_surface();
};

The start and stop methods are fairly self-explanatory:

void Wallpaper::start(miral::toolkit::Connection connection)
{
    {
        std::lock_guard<decltype(mutex)> lock{mutex};
        this->connection = connection;
    }
    enqueue_work([this]{ create_surface(); });
    start_work();
}
void Wallpaper::stop()
{
    {
        std::lock_guard<decltype(mutex)> lock{mutex};
        surface.reset();
        connection.reset();
    }
    stop_work();
}

Most of the work happens in the create_surface() method that creates a surface of a type that will never get focus (and therefore will never be raised above anything else):

void Wallpaper::create_surface()
{
    std::lock_guard<decltype(mutex)> lock{mutex};
    auto const spec = SurfaceSpec::for_normal_surface(
        connection, 100, 100, mir_pixel_format_xrgb_8888)
        .set_buffer_usage(mir_buffer_usage_software)
        .set_type(mir_surface_type_gloss)
        .set_name("wallpaper");

    mir_surface_spec_set_fullscreen_on_output(spec, 0);

    surface = spec.create_surface();
    uint8_t pattern[4] = { 0x14, 0x48, 0xDD, 0xFF };

    MirGraphicsRegion graphics_region;
    MirBufferStream* buffer_stream = mir_surface_get_buffer_stream(surface);
    mir_buffer_stream_get_graphics_region(buffer_stream, &graphics_region);

    render_pattern(&graphics_region, pattern);
    mir_buffer_stream_swap_buffers_sync(buffer_stream);
}

This is unsophisticated, but the point is that the client API is available to do whatever rendering we like.
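The render_pattern() helper called in create_surface() isn’t shown in this post. A minimal sketch of what such a helper might do, using a locally defined stand-in for Mir’s MirGraphicsRegion so the example is self-contained (the field names are assumptions based on the Mir client API, not the egmde code):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Stand-in for Mir's MirGraphicsRegion: a CPU-mapped pixel buffer.
// This struct and render_pattern() are an illustrative sketch only.
struct GraphicsRegion
{
    int32_t width;
    int32_t height;
    int32_t stride;     // bytes per row (may exceed width * 4)
    char*   vaddr;      // start of the mapped buffer
};

// Fill every pixel of the region with a single 4-byte (xRGB) pattern,
// respecting the row stride so any padding bytes are left untouched.
void render_pattern(GraphicsRegion* region, uint8_t const pattern[4])
{
    for (int32_t y = 0; y != region->height; ++y)
    {
        char* row = region->vaddr + y * region->stride;
        for (int32_t x = 0; x != region->width; ++x)
            std::memcpy(row + 4 * x, pattern, 4);
    }
}
```

With the Ubuntu orange pattern from the post, this simply paints the whole buffer one solid color before the buffer-stream swap makes it visible.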

Now, when we run egmde we no longer get a boring black rectangle. Now we get an Ubuntu orange one.

Read more
Dustin Kirkland

Introducing the Canonical Livepatch Service
Howdy!

Ubuntu 16.04 LTS’s 4.4 Linux kernel includes an important new security capability in Ubuntu -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect.  This is particularly amazing for Container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.



I’ve tried to answer below some questions that you might have. If you have others, you’re welcome
to add them in the comments below or on Twitter with the hashtag #Livepatch.

Retrieve your token from ubuntu.com/livepatch

Q: How do I enable the Canonical Livepatch Service?

A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system.

  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token
  2. Install the canonical-livepatch snap
     $ sudo snap install canonical-livepatch
  3. Enable the service with your token
     $ sudo canonical-livepatch enable [TOKEN]

And you’re done! You can check the status at any time using:

$ canonical-livepatch status --verbose

Q: What are the system requirements?

A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS Servers and Desktops, on physical machines, virtual machines, and in the cloud. The safety, security, and stability firmly depend on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service (https://livepatch.canonical.com:443). You also will need to apt update/upgrade to the latest version of snapd (at least 2.15).

Q: What about other architectures?

A: The upstream Linux livepatch functionality is currently limited to the 64-bit x86 architecture. IBM is working on support for POWER8 and s390x (LinuxONE mainframe), and there’s also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 is not under upstream development at this time.

Q: What about other flavors?

A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

Q: What about other releases of Ubuntu?

A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS’s Linux 4.4 kernel. Older releases of Ubuntu will not work, because they’re missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime. We will consider providing livepatches for the HWE kernels in 2017.

Q: What about derivatives of Ubuntu?

A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

Q: How does Canonical test livepatches?

A: Every livepatch is rigorously tested in Canonical's in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines. Once a livepatch passes CI/CD and regression tests, it's rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows. And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service. Systemic failures are automatically detected and raised for inspection by Canonical engineers. Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

Q: What kinds of updates will be provided by the Canonical Livepatch Service?

A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We’ll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?

A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using ‘sudo apt update; sudo apt upgrade -y’, using the Desktop “Software Updates” application, or Landscape systems management. These standard (non-security) updates will still require a reboot, as they always have.

Q: Can I rollback a Canonical Livepatch?

A: Currently rolling back/removing an already inserted livepatch module is disabled in Linux 4.4. This is because we need a way to determine if we are currently executing inside a patched function before safely removing it. We can, however, safely apply new livepatches on top of each other and even repatch functions over and over.

Q: What about low and medium severity CVEs?

A: We’re currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team. We'll livepatch other CVEs opportunistically.

Q: Why are Canonical Livepatches provided as a subscription service?

A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection, to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high quality testing wrapped around it.

Q: But I don’t want to buy UA support!

A: You don’t have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

Q: But I don’t have an Ubuntu SSO account!

A: An Ubuntu SSO account is free, and provides services similar to Google, Microsoft, and Apple for Android/Windows/Mac devices, respectively. You can create your Ubuntu SSO account here.

Q: But I don’t want to log in to ubuntu.com!

A: You don’t have to! Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously ‘sudo apt update; sudo apt upgrade; sudo reboot’ as often as you like, and receive all of the same updates, and simply reboot after kernel updates, as you always have with Ubuntu.

Q: But I don't have Internet access to livepatch.canonical.com:443!

A: You should think of the Canonical Livepatch Service much like you think of Netflix, Pandora, or Dropbox. It's an Internet streaming service for security hotfixes for your kernel. You have access to the stream of bits when you can connect to the service over the Internet. On the flip side, your machines are already thoroughly secured, since they're so heavily firewalled off from the rest of the world!

Q: Where’s the source code?

A: The source code of livepatch modules can be found here. The source code of the canonical-livepatch client is part of Canonical's Landscape system management product and is commercial software.

      Q: What about Ubuntu Core?

      A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures. Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

      Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?

      A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:

      • Oracle Ksplice uses it’s own technology which is not in upstream Linux.
      • RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
      • Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
      • Ksplice is free, but unsupported, for Ubuntu Desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2299/node/year).
      • It’s a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interests Group) through your TAM (Technical Account Manager), which requires Red Hat Enterprise Linux Server Premium Subscription at $1299/node/year.  (I'm happy to be corrected and update this post)
      • SUSE Live Patching is available as an add-on to SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
      • Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

      Q: What happens if I run into problems/bugs with Canonical Livepatches?

      A: Ubuntu Advantage customers will file a support request at support.canonical.com where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users will file a bug report on Launchpad and we'll service it on a best effort basis.

      Q: Why does canonical-livepatch client/server have a proprietary license?

      A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers. We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

      Q: How do I build my own livepatches?

      A: It’s certainly possible for you to build your own Linux kernel live patches, but it requires considerable skill, time, and computing power to produce, and even more effort to comprehensively test. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges blogged a howto for the curious a while back:

      http://chrisarges.net/2015/09/21/livepatch-on-ubuntu.html

      Q: How do I get notifications of which CVEs are livepatched and which are not?

      A: You can, at any time, query the status of the canonical-livepatch daemon using: ‘canonical-livepatch status --verbose’. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.
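
      If you want to wire that status check into your own monitoring, the relevant field can be pulled out with standard tools. Here's a minimal sketch; the canned sample and the `patchState:` field name are illustrative, so check the actual output of `canonical-livepatch status --verbose` on your own system first:

```shell
#!/bin/sh
# Extract the patchState value from canonical-livepatch status output.
livepatch_state() {
    # Reads status text on stdin, prints the value of the first patchState line.
    awk '/patchState:/ { print $2; exit }'
}

# Canned sample; a real check would instead pipe in:
#   canonical-livepatch status --verbose | livepatch_state
sample='kernel: 4.4.0-45.66-generic
patchState: applied
checkState: checked'

printf '%s\n' "$sample" | livepatch_state   # prints "applied"
```

      From there it is a one-liner to alert whenever the state is anything other than "applied".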

      Q: Isn't livepatching just a big ole rootkit?

      A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. Doing so requires the CAP_SYS_MODULE capability, the same capability needed to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you’re an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply ‘echo 1 | sudo tee /proc/sys/kernel/modules_disabled’.

      Keep the uptime!
      :-Dustin

      Read more
      Stéphane Graber

      LXD logo

      What are snaps?

      Snaps were introduced a little while back as a cross-distro package format allowing upstreams to easily generate and distribute packages of their application in a very consistent way, with support for transactional upgrade and rollback as well as confinement through AppArmor and Seccomp profiles.

      It’s a packaging format that’s designed to be upstream friendly. Snaps effectively shift the packaging and maintenance burden from the Linux distribution to the upstream, making the upstream responsible for updating their packages and taking action when a security issue affects any of the code in their package.

      The upside being that upstream is now in complete control of what’s in the package and can distribute a build of the software that matches their test environment and do so within minutes of the upstream release.

      Why distribute LXD as a snap?

      We’ve always cared about making LXD available to everyone. It’s available for a number of Linux distributions already, with a few more actively working on packaging it.

      For Ubuntu, we have it in the archive itself, push frequent stable updates, maintain official backports in the archive and also maintain a number of PPAs to make our releases available to all Ubuntu users.

      Doing all that is a lot of work and it makes tracking down bugs that much harder as we have to care about a whole lot of different setups and combination of package versions.

      Over the next few months, we hope to move away from PPAs and some of our backports in favor of using our snap package. This will allow a much shorter turnaround time for new releases and give us more control on the runtime environment of LXD, making our lives easier when dealing with bugs.

      How to get the LXD snap?

      These instructions have only been tested on a fully up-to-date Ubuntu 16.04 LTS or Ubuntu 16.10 with snapd installed. Please use a system that doesn’t already have LXD containers, as the LXD snap will not be able to take over existing containers.

      LXD snap example

      1. Make sure you don’t have a packaged version of LXD installed on your system.
        sudo apt remove --purge lxd lxd-client
      2. Create the “lxd” group and add yourself to it.
        sudo groupadd --system lxd
        sudo usermod -G lxd -a <username>
      3. Install LXD itself
        sudo snap install lxd

      This will get the current version of LXD from the “stable” channel.
      If your user wasn’t already part of the “lxd” group, you may now need to run:

      newgrp lxd

      Once installed, you can set it up and spawn your first container with:

      1. Configure the LXD daemon
        sudo lxd init
      2. Launch your first container
        lxd.lxc launch ubuntu:16.04 xenial
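
      For convenience, the removal, group setup, install, and first-run steps above can be collected into one small script. This is a sketch rather than an official installer: the `run` wrapper and the `DRY_RUN` flag, which defaults to on so the script only prints the commands, are our own additions:

```shell
#!/bin/sh
# Sketch of the LXD snap installation steps from this post.
# DRY_RUN defaults to 1 (print commands only); set DRY_RUN=0 to execute them.
set -e
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

USER_NAME="${1:-$(id -un)}"

run sudo apt remove --purge lxd lxd-client   # 1. drop the packaged version
run sudo groupadd --system lxd               # 2. create the "lxd" group...
run sudo usermod -G lxd -a "$USER_NAME"      #    ...and add the user to it
run sudo snap install lxd                    # 3. install the snap
run sudo lxd init                            # configure the daemon
run lxd.lxc launch ubuntu:16.04 xenial       # launch the first container
```

      Run it once as-is to review the sequence, then with DRY_RUN=0 to actually perform it (remembering “newgrp lxd” if your group membership is new).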

      Channels and updates

      The Ubuntu Snap store offers 4 different release “channels” for snaps:

      • stable
      • candidate
      • beta
      • edge

      For LXD, we currently use “stable”, “candidate” and “edge”.

      • “stable” contains the latest stable release of LXD.
      • “candidate” is a testing area for “stable”.
        We’ll push new releases there a couple of days before releasing to “stable”.
      • “edge” is the current state of our development tree.
        This channel is entirely automated with uploads triggered after the upstream CI confirms that the development tree looks good.

      You can switch between channels by using the “snap refresh” command:

      snap refresh lxd --edge

      This will cause your system to install the current version of LXD from the “edge” channel.

      Be careful when hopping channels though as LXD may break when moving back to an earlier version (going from edge to stable), especially when database schema changes occurred in between.
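
      Given that caveat, it can be worth guarding channel hops in your own scripts. The helper below is hypothetical and not part of snapd; it simply refuses channel names outside the four the store defines before emitting the refresh command:

```shell
#!/bin/sh
# Guard against typo'd or unknown channel names before a refresh.
valid_channel() {
    case "$1" in
        stable|candidate|beta|edge) return 0 ;;
        *) return 1 ;;
    esac
}

# Print the refresh command for a channel, refusing unknown names.
refresh_cmd() {
    if valid_channel "$1"; then
        echo "snap refresh lxd --$1"
    else
        echo "unknown channel: $1" >&2
        return 1
    fi
}

refresh_cmd edge   # prints "snap refresh lxd --edge"
```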

      Snaps automatically update, either on schedule (typically once a day) or through push notifications from the store. On top of that, you can force an update by running “snap refresh lxd”.

      Known limitations

      The snap still has some pretty major usability issues that will likely be showstoppers for a lot of people.
      We’re actively working with the Snappy team to get those issues addressed as soon as possible and will keep maintaining all our existing packages until such time as those are resolved.

      Extra information

      More information on snap packages can be found at: http://snapcraft.io
      Bug reports for the LXD snap: https://github.com/lxc/lxd-pkg-ubuntu/issues

      The main LXD website is at: https://linuxcontainers.org/lxd
      Development happens on Github at: https://github.com/lxc/lxd
      Mailing-list support happens on: https://lists.linuxcontainers.org
      IRC support happens in: #lxcontainers on irc.freenode.net
      Try LXD online: https://linuxcontainers.org/lxd/try-it

      PS: I have not forgotten about the remaining two posts in the LXD 2.0 series, the next post has been on hold for a while due to some issues with OpenStack/devstack.

      Read more
      Tom Macfarlane

      Ubuntu Core

      Recently the brand team designed new logos for Core and Ubuntu Core. Both of which will replace the existing Snappy logo and bring consistency across all Ubuntu Core branding, online and in print.

       

      db_core_logo-aw

       

      Guidelines for use

      Core

      Use the Core logo when the Ubuntu logo or the word Ubuntu appears within the same field of vision. For example: web pages, exhibition stands, brochure text.

      Ubuntu Core

      Use the Ubuntu Core logo in stand-alone circumstances where there is no existing or supporting Ubuntu branding or any mention of Ubuntu within text. For example: third-party websites or print collateral, social media sites, roll-up banners.

      The Ubuntu Core logo is also used for third-party branding.

      The design process

      Extensive design exploration was undertaken considering: logotype arrangement, font weight, roundel designs – exploring the ‘core’ idea, concentric circles and the letter ‘C’ – and how all the elements came together as a logo.

      Logotype

      Options for how the logotype/wordmark is presented:

      • Following the design style set when creating the Ubuntu brandmark
      • Core in a lighter weight, reduced space between Ubuntu and Core
      • Ubuntu in the lighter weight, emphasis on Core
      • Core on its own

       

      db_core_logotype

       

      Roundels

      Core, circles and the letter ‘C’

       


      Design exploration using concentric circles of varying line numbers, spacing and line weights. Some options incorporating the Circle of Friends as an underlying grid to determine specific angles.

      Circle of Friends

       

      Design exploration using the Circle of Friends – in its entirety and stripped down.

      Lock-up

       

      db_core_lock-up

      How the logotype and roundel design sit together.

      Artwork

      Full sets of Core and Ubuntu Core logo artwork are now available at design.ubuntu.com/downloads.

      Read more
      Inayaili de León Persson

      A week in Vancouver with the Landscape team

      Earlier this month Peter and I headed to Vancouver to participate in a week-long Landscape sprint.

      The main goals of the sprint were to review the work that had been done in the past 6 months, and plan for the following cycle.

      IRL

      Landscape is a totally distributed team, so having regular face-to-face time throughout the year is important in order to maintain team spirit and a sense of connection.

      It is also important for us, from the design team, to meet in person the people that we have to work with every day, and that ultimately will implement the designs we create.

      I thought it was interesting to hear the Landscape team discuss candidly how the previous cycle went, what went well and what could have been improved, and how every team member’s opinion was heard and taken into consideration for the following cycle.

       

      Landscape team discussing the previous cycle

       

      User interviews

      Peter and I took some time aside to interview some of the developers in 1-2-1 sessions, so they could talk us through what they thought could be improved in Landscape and what worked well. As we talked, I wrote down key ideas on Post-it notes and Peter took more thorough notes on his laptop. At the end of the interviews, we collated the findings into a Trello board to identify patterns and try to prioritise design improvements for the next cycle.

      The city

      But the week was not all work!

      Every day we went out for lunch (unlike most sprints which provide the usual hotel food). This allowed us to explore a little bit of the city and its great culinary offerings. It was a great way to get to know the Landscape team a little bit better outside of work.

       

      Lots of great food in Vancouver

       

      Vancouver also has really great coffee places, and, even though I’m more of a tea person, I made sure to go to a few of them during the week.

       

      Nice Vancouver coffee

       

      I took a few days off after the sprint, so had some time to explore Vancouver with my family. We even saw a TV show being filmed in one of our favourite coffee shops!

       

      Exploring Vancouver

       

      This was my first time in Canada, and I really enjoyed it: we had a great sprint and it was good to have some time to explore the city. Maybe I’ll be back some day!

      Read more