Canonical Voices

UbuntuTouch

Based on the project by developer 程路 (dawndiy): https://github.com/dawndiy/electronic-wechat-snap, I made a few small modifications and finally packaged electronic-wechat as a snap application.


1) Approach one


In this approach, we download an already-built stable release and package it directly. The source code of this project:

snapcraft.yaml


name: electronic-wechat
version: '1.4.0'
summary: A better WeChat on macOS and Linux. Built with Electron.
description: |
  Electronic WeChat is an unofficial WeChat client. A better WeChat on
  macOS and Linux. Built with Electron.
grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  electronic-wechat:
    command: desktop-launch $SNAP/wechat.wrapper
    plugs:
      - unity7
      - opengl
      - network
      - pulseaudio
      - home
      - browser-support
      - gsettings
      - x11

parts:
  electronic-wechat:
    plugin: dump
    source: https://github.com/geeeeeeeeek/electronic-wechat/releases/download/v1.4.0/linux-x64.tar.gz
    stage-packages:
      - libnss3
      - fontconfig-config
      - gnome-themes-standard
      - fonts-wqy-microhei
      - libasound2-data
      - fcitx-frontend-gtk2
      - overlay-scrollbar-gtk2
      - libatk-adaptor
      - libcanberra-gtk-module
    filesets:
      no-easy-install-files:
        - -usr/sbin/update-icon-caches
        - -README.md
    stage:
      - $no-easy-install-files
    prime:
      - $no-easy-install-files

  wechat-copy:
    plugin: dump
    source: .
    filesets:
      wechat.wrapper: wechat.wrapper
    after: 
      - electronic-wechat
      - desktop-gtk2

Here we download the prebuilt stable release directly from https://github.com/geeeeeeeeek/electronic-wechat/releases/download/v1.4.0/linux-x64.tar.gz and package it. We can use the dump plugin, which simply unpacks the source into the snap for us.
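A note on the filesets section in the snapcraft.yaml above: in snapcraft, a fileset entry prefixed with a minus sign is an exclusion, so the stage and prime steps here drop the listed files from the final snap:

```yaml
filesets:
  no-easy-install-files:
    # a leading '-' excludes the path from any step that uses this fileset
    - -usr/sbin/update-icon-caches
    - -README.md
stage:
  - $no-easy-install-files
prime:
  - $no-easy-install-files
```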

2) Approach two


We can also build from the latest source code and package that. The source of this project is:


snapcraft.yaml


name: electronic-wechat
version: '1.4.0'
summary: A better WeChat on macOS and Linux. Built with Electron.
description: |
  Electronic WeChat is an unofficial WeChat client. A better WeChat on
  macOS and Linux. Built with Electron.
grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  electronic-wechat:
    command: desktop-launch $SNAP/wechat.wrapper
    plugs:
      - unity7
      - opengl
      - network
      - pulseaudio
      - home
      - browser-support
      - gsettings
      - x11

parts:
  electronic-wechat:
    plugin: nodejs
    source-type: git
    source: https://github.com/geeeeeeeeek/electronic-wechat/
    source-branch: production
    npm-run:
      - build:linux
    install: cp -r dist $SNAPCRAFT_PART_INSTALL
    stage-packages:
      - libnss3
      - fontconfig-config
      - gnome-themes-standard
      - fonts-wqy-microhei
      - libasound2-data
      - fcitx-frontend-gtk2
      - overlay-scrollbar-gtk2
      - libatk-adaptor
      - libcanberra-gtk-module
    filesets:
      no-easy-install-files:
        - -usr/sbin/update-icon-caches
        - -README.md
    stage:
      - $no-easy-install-files
    prime:
      - $no-easy-install-files

  wechat-copy:
    plugin: dump
    source: .
    filesets:
      wechat.wrapper: wechat.wrapper
    after: 
      - electronic-wechat
      - desktop-gtk2

The latest code can be found at https://github.com/geeeeeeeeek/electronic-wechat/. We use the nodejs plugin to do the packaging, and we use snapcraft's scriptlets to override the install step of the nodejs plugin:

   install: cp -r dist $SNAPCRAFT_PART_INSTALL

We could even delete the npm-run entry above and implement it with a snapcraft scriptlet instead:

build: npm run build:linux
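With that change, the part might look roughly like this (a sketch assembled from the snippets above, untested):

```yaml
parts:
  electronic-wechat:
    plugin: nodejs
    source-type: git
    source: https://github.com/geeeeeeeeek/electronic-wechat/
    source-branch: production
    # scriptlets overriding the plugin's default build and install steps
    build: npm run build:linux
    install: cp -r dist $SNAPCRAFT_PART_INSTALL
```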

We type the following command in the project's root directory:

$ snapcraft

It will finally generate the .snap file we need, which we can then install on our system:

$ sudo snap install electronic-wechat_1.4.0_amd64.snap --dangerous

liuxg@liuxg:~$ snap list
Name                 Version  Rev  Developer  Notes
core                 16.04.1  888  canonical  -
electronic-wechat    1.4.0    x1              -
hello-world          6.3      27   canonical  -
hello-xiaoguo        1.0      x1              -
snappy-debug         0.28     26   canonical  -
ubuntu-app-platform  1        22   canonical  -

We can see that electronic-wechat has been successfully installed on my machine. Run the application:








Posted by UbuntuTouch on 2017/2/3 16:12:02

Read more
UbuntuTouch

In today's article we will look at how to package an HTML5 app as a snap. There are many HTML5 apps out there, but how do we package them as snaps? In particular, many HTML5 games were developed for the Ubuntu phone. With the method described today we can package a previous click-based HTML5 app directly as a snap and run it on an Ubuntu desktop. Of course, the method is not limited to HTML5 apps developed for the Ubuntu phone; it also applies to other HTML5 apps.




1) The HTML5 app


First, let's take a look at an HTML5 app I previously built for the Ubuntu phone. Its address is:


You can get the code as follows:

bzr branch lp:~liu-xiao-guo/debiantrial/wuziqi

In this app we only care about the contents of its www directory. All the files of the project are as follows:

$ tree
.
├── manifest.json
├── wuziqi.apparmor
├── wuziqi.desktop
├── wuziqi.png
├── wuziqi.ubuntuhtmlproject
└── www
    ├── css
    │   └── app.css
    ├── images
    │   ├── b.png
    │   └── w.png
    ├── index.html
    └── js
        └── app.js

We want the contents of www to end up packaged into our snap.

2) Packaging the HTML5 app as a snap


To package our HTML5 app as a snap, we type the following command in the project's root directory:

$ snapcraft init

The command above creates a new snap directory under the current directory, containing a file called snapcraft.yaml. This is really a template; we package our app by editing this snapcraft.yaml file. After running the command, the file structure looks like this:

$ tree
.
├── manifest.json
├── snap
│   └── snapcraft.yaml
├── wuziqi.apparmor
├── wuziqi.desktop
├── wuziqi.png
├── wuziqi.ubuntuhtmlproject
└── www
    ├── css
    │   └── app.css
    ├── images
    │   ├── b.png
    │   └── w.png
    ├── index.html
    └── js
        └── app.js
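
For reference, the generated snapcraft.yaml template looks roughly like this (the exact contents vary between snapcraft versions):

```yaml
name: my-snap-name # you probably want to 'snapcraft register <name>'
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: Single-line elevator pitch for your amazing snap
description: |
  This is my-snap's description. You have a paragraph or two to tell the
  most important story about your snap.

grade: devel # must be 'stable' to release into candidate/stable channels
confinement: devmode # use 'strict' once you have the right plugs and slots

parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil
```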

We edit the snapcraft.yaml file until it finally becomes:

snapcraft.yaml


name: wuziqi
version: '0.1'
summary: Wuziqi Game. It shows how to snap a html5 app into a snap
description: |
  This is a Wuziqi game. There are two kinds of pieces: white and black. Two players
  play in turn. The first to put five pieces of the same color in a line is the winner.

grade: stable
confinement: strict

apps:
  wuziqi:
    command: webapp-launcher www/index.html
    plugs:
      - browser-sandbox
      - camera
      - mir
      - network
      - network-bind
      - opengl
      - pulseaudio
      - screen-inhibit-control
      - unity7

plugs:
  browser-sandbox:
    interface: browser-support
    allow-sandbox: false
  platform:
    interface: content
    content: ubuntu-app-platform1
    target: ubuntu-app-platform
    default-provider: ubuntu-app-platform

parts:
  webapp:
    after: [ webapp-helper, desktop-ubuntu-app-platform ]
    plugin: dump
    source: .
    stage-packages:
      - ubuntu-html5-ui-toolkit
    organize:
      'usr/share/ubuntu-html5-ui-toolkit/': www/ubuntu-html5-ui-toolkit
    prime:
      - usr/*
      - www/*

The explanation is as follows:
  • Since this is an HTML5 app, we can launch it with webapp-helper. In our app we use the remote part called webapp-helper
  • On the Ubuntu phone the web runtime is implemented on top of Qt, so we must also ship Qt with our app. Since the Qt libraries are large, we can instead get them from the ubuntu-app-platform snap through the platform interface it provides. Developers can refer to https://developer.ubuntu.com/en/blog/2016/11/16/snapping-qt-apps/
  • Our index.html file contains many lines such as <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>. These clearly depend on ubuntu-html5-ui-toolkit, so we must also ship that package in our app. We do this by installing the ubuntu-html5-ui-toolkit package via stage-packages
  • We use organize to move the ubuntu-html5-ui-toolkit directory installed from that package into the www directory of our project, so that index.html can reference it
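The organize stanza in the snapcraft.yaml above is simply a source-to-destination mapping applied within the part's install directory:

```yaml
organize:
  # '<path produced by stage-packages>': '<path we want inside the snap>'
  'usr/share/ubuntu-html5-ui-toolkit/': www/ubuntu-html5-ui-toolkit
```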
Let's look at our original index.html file again:

index.html

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>An Ubuntu HTML5 application</title>
    <meta name="description" content="An Ubuntu HTML5 application">
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=0">

    <!-- Ubuntu UI Style imports - Ambiance theme -->
    <link href="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/css/appTemplate.css" rel="stylesheet" type="text/css" />

    <!-- Ubuntu UI javascript imports - Ambiance theme -->
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/fast-buttons.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/buttons.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/dialogs.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/page.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/pagestacks.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/tab.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/tabs.js"></script>

    <!-- Application script -->
    <script src="js/app.js"></script>
    <link href="css/app.css" rel="stylesheet" type="text/css" />

  </head>

  <body>
        <div class='test'>
          <div>
              <img src="images/w.png" alt="white" id="chess">
          </div>
          <div>
              <button id="start">Start</button>
          </div>
        </div>

        <div>
            <canvas width="640" height="640" id="canvas" onmousedown="play(event)">
                 Your Browser does not support HTML5 canvas
            </canvas>
        </div>
  </body>
</html>

As the code above shows, the files referenced in index.html are rooted at /usr/share. In a confined snap that path is not accessible (an app can only access files installed under its own root). We therefore have to change these /usr/share/ paths into paths relative to the www directory of our project:

    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/fast-buttons.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/buttons.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/dialogs.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/page.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/pagestacks.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/tab.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/tabs.js"></script>

This is why we saw the following in the snapcraft.yaml earlier:

parts:
  webapp:
    after: [ webapp-helper, desktop-ubuntu-app-platform ]
    plugin: dump
    source: .
    stage-packages:
      - ubuntu-html5-ui-toolkit
    organize:
      'usr/share/ubuntu-html5-ui-toolkit/': www/ubuntu-html5-ui-toolkit
    prime:
      - usr/*
      - www/*

Above, organize reorganizes the directory installed from ubuntu-html5-ui-toolkit and moves it into the project's www directory, so its files can be used directly by our project. After packaging, the file structure looks like this:

$ tree -L 3
.
├── bin
│   ├── desktop-launch
│   └── webapp-launcher
├── command-wuziqi.wrapper
├── etc
│   └── xdg
│       └── qtchooser
├── flavor-select
├── meta
│   ├── gui
│   │   ├── wuziqi.desktop
│   │   └── wuziqi.png
│   └── snap.yaml
├── snap
├── ubuntu-app-platform
├── usr
│   ├── bin
│   │   └── webapp-container
│   └── share
│       ├── doc
│       ├── ubuntu-html5-theme -> ubuntu-html5-ui-toolkit
│       └── webbrowser-app
└── www
    ├── css
    │   └── app.css
    ├── images
    │   ├── b.png
    │   └── w.png
    ├── index.html
    ├── js
    │   ├── app.js
    │   └── jquery.min.js
    └── ubuntu-html5-ui-toolkit
        └── 0.1

Above we can see that ubuntu-html5-ui-toolkit now sits under the www directory and can be used directly by our project.

We type the following command in the project's root directory:

$ snapcraft

If all goes well, we get a .snap file. We can install it with the following command:

$ sudo snap install wuziqi_0.1_amd64.snap --dangerous

After installation, since we use content sharing to access the Qt libraries, we must also install the following snap:

$ snap install ubuntu-app-platform 
$ snap connect wuziqi:platform ubuntu-app-platform:platform

After executing the commands above, we can see:

$ snap interfaces
Slot                          Plug
:account-control              -
:alsa                         -
:avahi-observe                -
:bluetooth-control            -
:browser-support              wuziqi:browser-sandbox
:camera                       -
:core-support                 -
:cups-control                 -
:dcdbas-control               -
:docker-support               -
:firewall-control             -
:fuse-support                 -
:gsettings                    -
:hardware-observe             -
:home                         -
:io-ports-control             -
:kernel-module-control        -
:libvirt                      -
:locale-control               -
:log-observe                  snappy-debug
:lxd-support                  -
:modem-manager                -
:mount-observe                -
:network                      downloader,wuziqi
:network-bind                 socketio,wuziqi
:network-control              -
:network-manager              -
:network-observe              -
:network-setup-observe        -
:ofono                        -
:opengl                       wuziqi
:openvswitch                  -
:openvswitch-support          -
:optical-drive                -
:physical-memory-control      -
:physical-memory-observe      -
:ppp                          -
:process-control              -
:pulseaudio                   wuziqi
:raw-usb                      -
:removable-media              -
:screen-inhibit-control       wuziqi
:shutdown                     -
:snapd-control                -
:system-observe               -
:system-trace                 -
:time-control                 -
:timeserver-control           -
:timezone-control             -
:tpm                          -
:uhid                         -
:unity7                       wuziqi
:upower-observe               -
:x11                          -
ubuntu-app-platform:platform  wuziqi
-                             wuziqi:camera
-                             wuziqi:mir

Of course, our app also declares some redundant plugs, such as camera and mir above. We can see how the wuziqi app is connected to the core and ubuntu-app-platform snaps. After making sure they are all connected, we can type the following command in a terminal:

$ wuziqi

This launches our app. Of course, we can also launch it from the desktop dash:






Posted by UbuntuTouch on 2017/2/13 10:16:55

Read more
UbuntuTouch

In the earlier article "How to package a qmake Ubuntu phone app as a snap" we saw how to package a qmake project as a snap. In today's tutorial we create a project with Qt Creator and finally package the app as a snap. Along the way we get a feel for scriptlets in snapcraft.


1) Create a Qt Hello World project


First, open Qt Creator:







We have now created the simplest possible hello world app.


2) Create the snapcraft.yaml file


We type the following command in the project's root directory:
$ snapcraft init
The command above generates a directory called snap in the current directory (as of snapcraft 2.26; earlier versions did not create this snap directory).

liuxg@liuxg:~/snappy/desktop/qtapp$ tree -L 3
.
├── main.cpp
├── mainwindow.cpp
├── mainwindow.h
├── mainwindow.ui
├── qtapp.pro
├── qtapp.pro.user
├── README.md
└── snap
    └── snapcraft.yaml

The complete file structure is shown above. We can now edit this snapcraft.yaml file:

snapcraft.yaml

name: qthello 
version: '0.1' 
summary: a demo for qt hello app
description: |
  This is a qt app demo

grade: stable 
confinement: strict 

apps:
  qthello:
    command: desktop-launch $SNAP/opt/myapp/qtapp
    plugs: [home, unity7, x11]

parts:
  project:
    plugin: qmake
    source: .
    qt-version: qt5
    project-files: [qtapp.pro]
    install: |
      install -d $SNAPCRAFT_PART_INSTALL/opt/myapp
      install qtapp $SNAPCRAFT_PART_INSTALL/opt/myapp/qtapp

  integration:
    plugin: nil
    stage-packages:
     - libc6
     - libstdc++6
     - libc-bin
    after: [desktop-qt5]

Here we must point out:

    install: |
      install -d $SNAPCRAFT_PART_INSTALL/opt/myapp
      install qtapp $SNAPCRAFT_PART_INSTALL/opt/myapp/qtapp

Since the original qtapp.pro file contains no rules describing how to install the qtapp binary, we use the install scriptlet above to install our app. According to the description in Scriptlets:

“install”

The install scriptlet is triggered after the build step of a plugin.

This script is executed automatically after the build step. It first creates a directory called myapp, then installs the qtapp binary from the build directory into it. This produces our final snap package.

We install the qthello app and run it:






In this snap we bundle all of the Qt libraries the app depends on, so the final snap is quite large. Developers who want to reduce the size of this Qt app can refer to the article "Using the platform interface provided by ubuntu-app-platform to reduce Qt app size".





Posted by UbuntuTouch on 2017/2/3 14:25:12

Read more
UbuntuTouch

[Original] Ubuntu Core configuration

The core snap provides a number of configuration options that allow us to customize how the system runs. Just like any other snap, the core snap's options are set with the snap set command:

$ snap set core option=value


The current value of an option can be retrieved with the snap get command:

$ snap get core option
value


Now let's show how to disable the system's ssh service:

Warning: disabling ssh removes the default way of accessing an Ubuntu Core system. If we do not provide another way to manage or log in to the system, the system will be a brick. It is recommended to set a user name and password on the system first, in case you cannot get back in; having some other way into the system is also fine. Once inside the Ubuntu Core system, we can create a password with the following command, so that we can log in with a keyboard and monitor.

$ sudo passwd <ubuntu-one id>
<password>

The option accepts the following values:

  • false (default): enable the ssh service. ssh will respond to connection requests directly
  • true: disable the ssh service. Existing ssh connections remain open, but any further connections are impossible
$ snap set core service.ssh.disable=true
After we execute the command above, any further access is refused:

$ ssh liu-xiao-guo@192.168.1.106
ssh: connect to host 192.168.1.106 port 22: Connection refused

$ snap set core service.ssh.disable=false
Executing the command above allows us to connect over ssh again.
We can get the current value with the following command:

$ snap get core service.ssh.disable
false

Further reading: https://docs.ubuntu.com/core/en/reference/core-configuration

Posted by UbuntuTouch on 2017/2/15 10:29:38

Read more
UbuntuTouch

[Repost] Qt on Ubuntu Core

Are you working on an IoT, point of sale or digital signage device? Are you looking for a secure, supported solution to build it on? Do you have needs for graphic performance and complex UI? Did you know you could build great solutions using Qt on Ubuntu and Ubuntu Core? 

To find out how, why not join this upcoming webinar? You will learn the following:

- Introduction to Ubuntu and Qt in IoT and digital signage
- Using Ubuntu and Ubuntu Core in your device
- Packaging your Qt app for easy application distribution 
- Dealing with hardware variants and GPUs


https://www.brighttalk.com/webcast/6793/246523?utm_source=China&utm_campaign=3)%20Device_FY17_IOT_Vertical_DS_Webinar_Qt&utm_medium=Social

Posted by UbuntuTouch on 2017/2/27 13:07:38

Read more
UbuntuTouch

In the earlier article "Using the platform interface provided by ubuntu-app-platform to reduce Qt app size" we saw how to use the platform interface to reduce the size of a Qt app. The underlying mechanism is content sharing. In today's tutorial we use a Python interpreter snap built by a developer to achieve the same thing. If a system wants to use the latest Python version, or wants many Python apps to share a single Python installation rather than bundling the Python runtime into every snap, the method shown today can be applied.

The full source of this Python interpreter snap is at:

https://github.com/jhenstridge/python-snap-pkg

We can get it as follows:

$ git clone https://github.com/jhenstridge/python-snap-pkg


The source of the whole project is as follows:

$ tree -L 3
.
├── examples
│   └── hello-world
│       ├── hello.py
│       ├── hello.sh
│       └── snap
├── README.md
├── snap
│   └── snapcraft.yaml
└── src
    └── sitecustomize.py

The snap directory above describes how to share Python with other developers through the content sharing interface. The developer has already uploaded the built snap to the store. We can install it as follows:

$ snap install --edge python36-jamesh

We can go to the examples/hello-world directory and type the following command directly:

$ snapcraft 
Preparing to pull hello-world 
Pulling hello-world 
Preparing to build hello-world 
Building hello-world 
Staging hello-world 
Priming hello-world 
Snapping 'hello-world' |                                                             
Snapped hello-world_0.1_all.snap

We can see the generated .snap file. We can use the following command:

$ sudo snap install --dangerous hello-world_0.1_all.snap

to install the app, and the following commands to connect it and run:

$ snap connect hello-world:python3 python36-jamesh:python3
$ hello-world
Hello world!

We can see that our hello-world app runs successfully.

hello.py

print("Hello world!")

We can check the size of the final hello-world_0.1_all.snap file:

$ ls -alh
total 28K
drwxrwxr-x 3 liuxg liuxg 4.0K 3月   1 09:45 .
drwxrwxr-x 3 liuxg liuxg 4.0K 3月   1 09:19 ..
-rw-rw-r-- 1 liuxg liuxg   28 3月   1 09:19 .gitignore
-rw-rw-r-- 1 liuxg liuxg   22 3月   1 09:19 hello.py
-rwxrwxr-x 1 liuxg liuxg   60 3月   1 09:19 hello.sh
-rw-r--r-- 1 liuxg liuxg 4.0K 3月   1 09:39 hello-world_0.1_all.snap
drwxrwxr-x 2 liuxg liuxg 4.0K 3月   1 09:23 snap

The whole .snap file is only 4K. Compared with the previous approach, content sharing clearly reduces the size of our Python app dramatically. The shared Python snap can of course also be used by other Python apps.

You can even install your favorite pip packages like this:

$ python36-jamesh.pip3 install --user django

The django package's contents are installed into the $SNAP_USER_COMMON directory of the python36-jamesh snap.





Posted by UbuntuTouch on 2017/3/1 9:48:19

Read more
UbuntuTouch

[Original] Xiami Radio snap app

This is a Xiami Radio app based on an open source project. Its source is at:

https://github.com/timxx/xmradio

The source of its snap version is:

https://github.com/liu-xiao-guo/xmradio

The app's interface looks like this:






You can install it on a 16.04 or later desktop with the following command:

$ sudo snap install xmradio --edge --devmode

After some improvements, you can now install it directly from the stable channel:

$ sudo snap install xmradio

Posted by UbuntuTouch on 2017/3/16 10:05:08

Read more
UbuntuTouch

[Original] Building your snap app in the cloud

Suppose your app has been developed on one architecture (x86, arm) and you want to build it for another, but you have no hardware for that platform. What do you do? Or perhaps you keep your source on GitHub and want it built automatically and released to the Ubuntu Store. In today's tutorial we show some ways to build in the cloud.


Building on Launchpad


We can put our code on https://launchpad.net/. For example, I have already created a project of my own:


Open that page. Near the bottom we can find a link called "Create snap package":



Through this link we can have our project built directly. We can choose the architectures we need and finally produce the .snap files we want.

Packaging via build.snapcraft.io


We can pick our own repo on the http://build.snapcraft.io/ website.



After configuring and selecting our repo, the site builds armhf and amd64 snaps for us. It can also publish them to the Ubuntu Store.






If you are interested, give it a try :)

Posted by UbuntuTouch on 2017/3/3 7:56:15

Read more
UbuntuTouch

[Original] GoldenDict dictionary snap app

GoldenDict is a very good application that runs on Linux. Its source is at:

https://github.com/goldendict/goldendict

The snap source is at:

https://github.com/liu-xiao-guo/goldendict






You can install it on 16.04 and later systems:

sudo snap install goldendictionary


Posted by UbuntuTouch on 2017/3/22 9:18:35

Read more
UbuntuTouch

If we look at the core snap installed on our Ubuntu system, in its installation directory we find an application called xdg-open:

/snap/core/current/usr/local/bin$ ls
apt  apt-cache  apt-get  no-apt  xdg-open

More about xdg-open can be found at https://linux.die.net/man/1/xdg-open. We can use it to open a file or URL. Now let's use it to launch something, for example a website. Our snapcraft.yaml file is as follows:


snapcraft.yaml


name: google
version: "1"
summary: this is a test program for launching a website using browser
description: |
     Launch google website using xdg-open
grade: stable
confinement: strict
architectures: [amd64]

apps:
   google:
     command: run.sh "http://www.google.com"
     plugs: [network, network-bind, x11, home, unity7, gsettings]

parts:
   files:
    plugin: dump
    source: scripts
    organize:
     run.sh: bin/run.sh

   integration:
    plugin: nil
    after: [desktop-gtk2]


scripts/run.sh

#!/bin/sh

PATH="$PATH:/usr/local/bin"

xdg-open "$1"

I install a debian package on my Ubuntu desktop:

$ sudo apt install snapd-xdg-open

After packaging and installing our app, run:

$ google


We can see that the Google website is launched successfully.


Posted by UbuntuTouch on 2017/3/6 13:36:38

Read more
UbuntuTouch

[Original] NetEase Cloud Music snap

For users who love music, a snap version of the player can now be installed on 16.04. I just tried it and the result is quite good. The source is at:

https://github.com/liu-xiao-guo/netease-music






You can install it on 16.04 and later systems with the following command:

sudo snap install netease-music --devmode --beta



Posted by UbuntuTouch on 2017/3/14 14:43:15

Read more
UbuntuTouch

[Original] simplescreenrecorder snap app

simplescreenrecorder is a utility that records your screen. The project's source is at:

https://github.com/MaartenBaert/ssr

The snap version is at:

https://github.com/liu-xiao-guo/simplescreenrecorder





You can install it with the following command:

$ sudo snap install simplescreenrecorder



Posted by UbuntuTouch on 2017/3/24 10:12:14

Read more
UbuntuTouch

[Original] Kuwo Music Box (kwplayer) snap app

Kuwo Music Box is a music player with a very rich catalog of music. Its source is at:


https://github.com/LiuLang/kwplayer


Although it is no longer maintained, it is still a nice app. Its snap project is at:


https://github.com/liu-xiao-guo/kwplayer








You can install this app on 16.04 and later systems:

$ sudo snap install kwplayer --beta --devmode

Posted by UbuntuTouch on 2017/3/17 8:39:26

Read more
UbuntuTouch

[Original] moonplayer snap video player

This is a video player based on an open source project. It can play videos from Youku and Tudou, and the quality is quite good. I have packaged it as a snap for your reference. The advantage of a snap package is that you do not need to install any other dependencies; installing the one package is enough. Many debian apps need lots of dependent packages to run correctly, and incompatibilities among those packages can make installation fail.


The app's source is at:

https://github.com/coslyk/moonplayer

The snap packaging code is at:

https://github.com/liu-xiao-guo/moonplayer






If you have a 16.04 or later machine, you can install it directly with the following command:

sudo snap install moonplayer

Posted by UbuntuTouch on 2017/3/15 10:27:43

Read more
UbuntuTouch

The Youdao dictionary is very useful to many people, and many like to look up words from the command line. Today we present the command-line snap app of the Youdao dictionary.

The app's source is at:

https://github.com/longcw/youdao

Its snap source is at:

https://github.com/liu-xiao-guo/youdao-cli




You can install it on a 16.04 or later desktop with the following command:

$ sudo snap install yd



Posted by UbuntuTouch on 2017/3/23 9:41:26

Read more
UbuntuTouch

[Original] gftp snap app

All the materials for this project can be found at:

https://www.gftp.org/

The snap version's materials are at:

https://github.com/liu-xiao-guo/gftp



Posted by UbuntuTouch on 2017/3/23 16:05:34

Read more
Leo Arias

Call for testing: MySQL

I promised that more interesting things were going to be available soon for testing in Ubuntu. There's plenty coming, but today here is one of the greatest:

$ sudo snap install mysql --channel=8.0/beta

screenshot of mysql snap running

Lars Tangvald and other people at MySQL have been working on this snap for some time, and now they are ready to give it to the community for crowd testing. If you have some minutes, please give them a hand.

We have a testing guide to help you get started.

Remember that this should run in trusty, xenial, yakkety, zesty and in all flavours of Ubuntu. It would be great to get a diverse pool of platforms and test it everywhere.

Here we are introducing a new concept: tracks. Notice that we are using --channel=8.0/beta, instead of only --beta as we used to do before. That's because mysql has two different major versions currently active. In order to try the other one:

$ sudo snap install mysql --channel=5.7/beta

Please report back your results. Any kind of feedback will be highly appreciated, and if you have doubts or need a hand to get started, I'm hanging around in Rocket Chat.

Read more
Stéphane Graber

This is the twelfth and last blog post in this series about LXD 2.0.

LXD logo

Introduction

This is finally it! The last blog post in this series of 12 that started almost a year ago.

If you followed the series from the beginning, you should have been using LXD for quite a bit of time now and be pretty familiar with its day to day operation and capabilities.

But what if something goes wrong? What can you do to track down the problem yourself? And if you can’t, what information should you record so that upstream can track down the problem?

And what if you want to fix issues yourself or help improve LXD by implementing the features you need? How do you build, test and contribute to the LXD code base?

Debugging LXD & filing bug reports

LXD log files

/var/log/lxd/lxd.log

This is the main LXD log file. To avoid filling up your disk very quickly, only log messages marked as INFO, WARNING or ERROR are recorded there by default. You can change that behavior by passing "--debug" to the LXD daemon.

/var/log/lxd/CONTAINER/lxc.conf

Whenever you start a container, this file is updated with the configuration that’s passed to LXC.
This shows exactly how the container will be configured, including all its devices, bind-mounts, …

/var/log/lxd/CONTAINER/forkexec.log

This file will contain errors coming from LXC when failing to execute a command.
It’s extremely rare for anything to end up in there as LXD usually handles errors much before that.

/var/log/lxd/CONTAINER/forkstart.log

This file will contain errors coming from LXC when starting the container.
It’s extremely rare for anything to end up in there as LXD usually handles errors much before that.

CRIU logs (for live migration)

If you are using CRIU for container live migration or live snapshotting there are additional log files recorded every time a CRIU dump is generated or a dump is restored.

Those logs can also be found in /var/log/lxd/CONTAINER/ and are timestamped so that you can find whichever matches your most recent attempt. They will contain a detailed record of everything that’s dumped and restored by CRIU and are far better for understanding a failure than the typical migration/snapshot error message.

LXD debug messages

As mentioned above, you can switch the daemon to doing debug logging with the --debug option.
An alternative to that is to connect to the daemon's event interface, which will show you all log entries regardless of the configured log level (this even works remotely).

An example for “lxc init ubuntu:16.04 xen” would be:
lxd.log:

INFO[02-24|18:14:09] Starting container action=start created=2017-02-24T23:11:45+0000 ephemeral=false name=xen stateful=false used=1970-01-01T00:00:00+0000
INFO[02-24|18:14:10] Started container action=start created=2017-02-24T23:11:45+0000 ephemeral=false name=xen stateful=false used=1970-01-01T00:00:00+0000

lxc monitor --type=logging:

metadata:
  context: {}
  level: dbug
  message: 'New events listener: 9b725741-ffe7-4bfc-8d3e-fe620fc6e00a'
timestamp: 2017-02-24T18:14:01.025989062-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.341283344-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StorageCoreInit
timestamp: 2017-02-24T18:14:09.341536477-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/containers/xen
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.347709394-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: PUT
    url: /1.0/containers/xen/state
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.357046302-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'New task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:09.358387853-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Started task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:09.358578599-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/operations/2e2cf904-c4c4-4693-881f-57897d602ad3/wait
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.366213106-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolInit
timestamp: 2017-02-24T18:14:09.369636451-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolCheck
timestamp: 2017-02-24T18:14:09.369771164-05:00
type: logging


metadata:
  context:
    container: xen
    driver: storage/zfs
  level: dbug
  message: ContainerMount
timestamp: 2017-02-24T18:14:09.424696767-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
    name: xen
  level: dbug
  message: ContainerUmount
timestamp: 2017-02-24T18:14:09.432723719-05:00
type: logging


metadata:
  context:
    container: xen
    driver: storage/zfs
  level: dbug
  message: ContainerMount
timestamp: 2017-02-24T18:14:09.721067917-05:00
type: logging


metadata:
  context:
    action: start
    created: 2017-02-24 23:11:45 +0000 UTC
    ephemeral: "false"
    name: xen
    stateful: "false"
    used: 1970-01-01 00:00:00 +0000 UTC
  level: info
  message: Starting container
timestamp: 2017-02-24T18:14:09.749808518-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.792551375-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StorageCoreInit
timestamp: 2017-02-24T18:14:09.792961032-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /internal/containers/23/onstart
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.800803501-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolInit
timestamp: 2017-02-24T18:14:09.803190248-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolCheck
timestamp: 2017-02-24T18:14:09.803251188-05:00
type: logging


metadata:
  context:
    container: xen
    driver: storage/zfs
  level: dbug
  message: ContainerMount
timestamp: 2017-02-24T18:14:09.803306055-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Scheduler: container xen started: re-balancing'
timestamp: 2017-02-24T18:14:09.965080432-05:00
type: logging


metadata:
  context:
    action: start
    created: 2017-02-24 23:11:45 +0000 UTC
    ephemeral: "false"
    name: xen
    stateful: "false"
    used: 1970-01-01 00:00:00 +0000 UTC
  level: info
  message: Started container
timestamp: 2017-02-24T18:14:10.162965059-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Success for task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:10.163072893-05:00
type: logging

The format from “lxc monitor” is a bit different from what you’d get in a log file, where each entry is condensed into a single line. More importantly, you see all of those “level: dbug” entries.

Where to report bugs

LXD bugs

The best place to report LXD bugs is upstream at https://github.com/lxc/lxd/issues.
Make sure to fill in everything in the bug reporting template as that information saves us a lot of back and forth to reproduce your environment.

Ubuntu bugs

If you find a problem with the Ubuntu package itself, such as a failure to install, upgrade or remove, or if you run into issues with the LXD init scripts, the best place to report such bugs is on Launchpad.

On an Ubuntu system, you can do so with: ubuntu-bug lxd
This will automatically include a number of log files and package information for us to look at.

CRIU bugs

Bugs that are related to CRIU which you can spot by the usually pretty visible CRIU error output should be reported on Launchpad with: ubuntu-bug criu

Do note that the use of CRIU through LXD is considered to be a beta feature and unless you are willing to pay for support through a support contract with Canonical, it may take a while before we get to look at your bug report.

Contributing to LXD

LXD is written in Go and hosted on GitHub.
We welcome external contributions of any size. There is no CLA or similar legal agreement to sign to contribute to LXD, just the usual Developer Certificate of Origin (Signed-off-by: line).

We have a number of potential features listed on our issue tracker that can make good starting points for new contributors. It’s usually best to first file an issue before starting to work on code, just so everyone knows that you’re doing that work and so we can give some early feedback.

Building LXD from source

Upstream maintains up to date instructions here: https://github.com/lxc/lxd#building-from-source

You’ll want to fork the upstream repository on Github and then push your changes to your branch. We recommend rebasing on upstream LXD daily as we do tend to merge changes pretty regularly.

Running the testsuite

LXD maintains two sets of tests. Unit tests and integration tests. You can run all of them with:

sudo -E make check

To run the unit tests only, use:

sudo -E go test ./...

To run the integration tests, use:

cd test
sudo -E ./main.sh

The latter supports quite a number of environment variables to test various storage backends, disable network tests, use a ramdisk or just tweak the log output. Some of those are:

  • LXD_BACKEND: One of “btrfs”, “dir”, “lvm” or “zfs” (defaults to “dir”)
    Lets you run the whole testsuite with any of the LXD storage drivers.
  • LXD_CONCURRENT: “true” or “false” (defaults to “false”)
    This enables a few extra concurrency tests.
  • LXD_DEBUG: “true” or “false” (defaults to “false”)
    This will log all shell commands and run all LXD commands in debug mode.
  • LXD_INSPECT: “true” or “false” (defaults to “false”)
    This will cause the testsuite to hang on failure so you can inspect the environment.
  • LXD_LOGS: A directory to dump all LXD log files into (defaults to “”)
    The “logs” directory of all spawned LXD daemons will be copied over to this path.
  • LXD_OFFLINE: “true” or “false” (defaults to “false”)
    Disables any test which relies on outside network connectivity.
  • LXD_TEST_IMAGE: path to a LXD image in the unified format (defaults to “”)
    Lets you use a custom test image rather than the default minimal busybox image.
  • LXD_TMPFS: “true” or “false” (defaults to “false”)
    Runs the whole testsuite within a “tmpfs” mount, this can use quite a bit of memory but makes the testsuite significantly faster.
  • LXD_VERBOSE: “true” or “false” (defaults to “false”)
    A less extreme version of LXD_DEBUG. Shell commands are still logged but “--debug” isn’t passed to the LXC commands and the LXD daemon only runs with “--verbose”.

The testsuite will alert you to any missing dependency before it actually runs. A test run on a reasonably fast machine can be done in under 10 minutes.
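The defaults listed above can be sketched with standard shell parameter expansion; this only illustrates the behaviour, not the testsuite’s actual code:

```shell
# Start from a clean slate so the defaults are visible (illustrative only;
# the real main.sh reads these from your environment).
unset LXD_BACKEND LXD_CONCURRENT LXD_TMPFS

# Unset variables fall back to the documented defaults.
LXD_BACKEND="${LXD_BACKEND:-dir}"
LXD_CONCURRENT="${LXD_CONCURRENT:-false}"
LXD_TMPFS="${LXD_TMPFS:-false}"

echo "backend=$LXD_BACKEND concurrent=$LXD_CONCURRENT tmpfs=$LXD_TMPFS"
# → backend=dir concurrent=false tmpfs=false
```

A run such as `LXD_BACKEND=zfs LXD_TMPFS=true sudo -E ./main.sh` would then exercise the ZFS driver with the whole testsuite inside a tmpfs mount.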

Sending your branch

Before sending a pull request, you’ll want to confirm that:

  • Your branch has been rebased on the upstream branch
  • All your commit messages include the “Signed-off-by: First Last <email>” line
  • You’ve removed any temporary debugging code you may have used
  • You’ve squashed related commits together to keep your branch easily reviewable
  • The unit and integration tests all pass

Once that’s all done, open a pull request on GitHub. Our Jenkins will validate that the commits are all signed off, test builds on macOS and Windows will automatically be performed, and if things look good, we’ll trigger a full Jenkins test run that will test your branch on all storage backends, 32-bit and 64-bit, and all the Go versions we care about.

This typically takes less than an hour to happen, assuming one of us is around to trigger Jenkins.

Once all the tests are done and we’re happy with the code itself, your branch will be merged into master and your code will be in the next LXD feature release. If the changes are suitable for the LXD stable-2.0 branch, we’ll backport them for you.

Conclusion

I hope this series of blog posts has been helpful in understanding what LXD is and what it can do!

This series’ scope was limited to the LTS version of LXD (2.0.x) but we also do monthly feature releases for those who want the latest features. You can find a few other blog posts covering such features listed in the original LXD 2.0 series post.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Dustin Kirkland

Introducing the Canonical Livepatch Service
Howdy!

Ubuntu 16.04 LTS’s 4.4 Linux kernel includes an important new security capability in Ubuntu -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect.  This is particularly amazing for Container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.



I’ve tried to answer below some questions that you might have. If you have others, you’re welcome to add them in the comments below or on Twitter with the hashtag #Livepatch.

Retrieve your token from ubuntu.com/livepatch

Q: How do I enable the Canonical Livepatch Service?

A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system.
  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token
  2. Install the canonical-livepatch snap
    $ sudo snap install canonical-livepatch
  3. Enable the service with your token
    $ sudo canonical-livepatch enable [TOKEN]
And you’re done! You can check the status at any time using:

$ canonical-livepatch status --verbose

Q: What are the system requirements?

A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS Servers and Desktops, on physical machines, virtual machines, and in the cloud. The safety, security, and stability firmly depend on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service (https://livepatch.canonical.com:443). You will also need to apt update/upgrade to the latest version of snapd (at least 2.15).

Q: What about other architectures?

A: The upstream Linux livepatch functionality is currently limited to the 64-bit x86 architecture. IBM is working on support for POWER8 and s390x (LinuxONE mainframe), and there’s also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 is not under upstream development at this time.

Q: What about other flavors?

A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

Q: What about other releases of Ubuntu?

A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS’s Linux 4.4 kernel. Older releases of Ubuntu will not work, because they’re missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime. We will consider providing livepatches for the HWE kernels in 2017.

Q: What about derivatives of Ubuntu?

A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

Q: How does Canonical test livepatches?

A: Every livepatch is rigorously tested in Canonical's in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines. Once a livepatch passes CI/CD and regression tests, it's rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows. And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service. Systemic failures are automatically detected and raised for inspection by Canonical engineers. Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

Q: What kinds of updates will be provided by the Canonical Livepatch Service?

A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We’ll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?

A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using ‘sudo apt update; sudo apt upgrade -y’, using the Desktop “Software Updates” application, or Landscape systems management. These standard (non-security) updates will still require a reboot, as they always have.

Q: Can I roll back a Canonical Livepatch?

A: Currently, rolling back/removing an already inserted livepatch module is disabled in Linux 4.4. This is because we need a way to determine if we are currently executing inside a patched function before safely removing it. We can, however, safely apply new livepatches on top of each other and even repatch functions over and over.

Q: What about low and medium severity CVEs?

A: We’re currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team. We'll livepatch other CVEs opportunistically.

Q: Why are Canonical Livepatches provided as a subscription service?

A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection, to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high quality testing wrapped around it.

Q: But I don’t want to buy UA support!

A: You don’t have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

Q: But I don’t have an Ubuntu SSO account!

A: An Ubuntu SSO account is free, and provides services similar to Google, Microsoft, and Apple for Android/Windows/Mac devices, respectively. You can create your Ubuntu SSO account here.

Q: But I don’t want to log in to ubuntu.com!

A: You don’t have to! Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously ‘sudo apt update; sudo apt upgrade; sudo reboot’ as often as you like, and receive all of the same updates, and simply reboot after kernel updates, as you always have with Ubuntu.

Q: But I don't have Internet access to livepatch.canonical.com:443!

      Q: Where’s the source code?

      A: The source code of livepatch modules can be found here.  The source code of the canonical-livepatch client is part of Canonical's Landscape system management product and is commercial software.

      Q: What about Ubuntu Core?

      A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures. Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

      Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?

      A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:

  • Oracle Ksplice uses its own technology which is not in upstream Linux.
  • RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
  • Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
  • Ksplice is free, but unsupported, for Ubuntu Desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2299/node/year).
  • It’s a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interests Group) through your TAM (Technical Account Manager), which requires a Red Hat Enterprise Linux Server Premium Subscription at $1299/node/year. (I'm happy to be corrected and update this post)
  • SUSE Live Patching is available as an add-on to the SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
  • Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

Q: What happens if I run into problems/bugs with Canonical Livepatches?

A: Ubuntu Advantage customers will file a support request at support.canonical.com, where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users will file a bug report on Launchpad and we'll service it on a best effort basis.

Q: Why does canonical-livepatch client/server have a proprietary license?

A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers. We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

Q: How do I build my own livepatches?

A: It’s certainly possible for you to build your own Linux kernel live patches, but it requires considerable skill, time, and computing power to produce, and even more effort to comprehensively test. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges blogged a howto for the curious a while back:

http://chrisarges.net/2015/09/21/livepatch-on-ubuntu.html

Q: How do I get notifications of which CVEs are livepatched and which are not?

A: You can, at any time, query the status of the canonical-livepatch daemon using: ‘canonical-livepatch status --verbose’. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.

Q: Isn't livepatching just a big ole rootkit?

A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. This requires the CAP_SYS_MODULE capability. This is required to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you’re an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply ‘echo 1 | sudo tee /proc/sys/kernel/modules_disabled’.
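That sysctl can be inspected like any other file under /proc before you decide to flip it; a quick sketch, assuming a Linux system:

```shell
# 0 means module loading (and thus livepatching) is allowed;
# 1 means it has been disabled. Note this is a one-way switch until reboot.
cat /proc/sys/kernel/modules_disabled
```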

Keep the uptime!
:-Dustin

Alan Griffiths

miral-workspaces

“Workspaces” have arrived on MirAL trunk (lp:miral).

We won’t be releasing 1.3 with this feature just yet (as we want some experience with this internally first). But if you build from source there’s an example to play with (bin/miral-app).

As always, bug reports and other suggestions are welcome.

Note that the miral-shell doesn’t have transitions and other effects like fully featured desktop environments.
