Canonical Voices

liam zheng

We had an interesting chat with Maarten, our head of IoT, about what happens when projectors meet app-enabled software-defined radio (limeSDR.org). Below are some of the cool things the combination makes possible, including communicating with drones through gestures, tapping into walkie-talkie systems, and creating coverage in areas where none actually exists.

  • Use a software-defined radio (SDR) to receive TV broadcasts, with the projector as the TV screen

  • Add a WiFi app without paying the electricity bill for an extra device

  • Add a 4G app, obtain spectrum from a telecoms provider through a REST API, and provide coverage in parts of a building with no signal

  • Detect whether someone entering a room is carrying a phone and switch the projector on, then switch it off when nobody is around. Lighting, heating and other equipment can be included too, as long as they can be controlled through an API

  • Create your own IoT radio protocol so you can connect sensors around the house and project a dashboard of real-time data onto the screen

  • Use a Bluetooth-enabled gesture sensor to detect hand movements, communicate with and control drones or toy cars through gestures, and project whatever the drone or car sees onto the screen

  • Use three projectors to triangulate everyone's position around the office from their Bluetooth or phone signals. Connect to their online calendars and automatically open Webex or Google Hangouts according to their schedules. Connect to the room's wireless security cameras and stream the footage to the other meeting invitees. Use the built-in microphone to capture everything said in the meeting

  • Track ships through a harbour's AIS transponders, or aircraft through Mode S, and project their routes plotted on Google Maps

  • Project real-time vehicle status data onto the screen via a direct tyre pressure monitoring system (TPMS)

  • Talk to all kinds of 2.4GHz devices such as walkie-talkies, and connect a baby monitor to the projector to see what the baby is up to

  • Build your own low-power WAN receiver solution (e.g. LoRa) and display what it receives on the screen

 

Read more
Grazina Borosko

April marks the release of Xerus 16.4 and with it we bring a new design of our iconic wallpaper. This post will take you through our design process and how we have integrated our Suru visual language.

Evolution

Our recent designs are founded on our Suru visual language, which encompasses our core brand values and brings consistency across the Ubuntu brand.

Our Suru language is influenced by the minimalist nature of Japanese culture. We have taken elements of its Zen tradition that give us a precise yet simple rhythm and used them in our designs. Working with paper metaphors, we have drawn inspiration from the art of origami, which provides us with a solid and tangible foundation to work from. Paper is also transferable, meaning it can be used across all areas of our brand in both two- and three-dimensional forms.

Design process

We started by looking at previously released Ubuntu wallpapers to see how each evolved from the last. After reviewing the previous designs we started to apply our new Suru patterns, which inspired us to move in a new direction.

Ubuntu 14.10 ‘Utopic Unicorn’


Ubuntu 15.04 ‘Vivid Vervet’


Ubuntu 15.10 ‘Wily Werewolf’


Step-by-step process

Step 1. Origami animal

Since every new Ubuntu release is named after an animal, the Design Team wanted to bring this idea closer to the wallpaper and the Suru language. The folds are part of the origami animal and become the base from which we start our design process.


Step 2. Searching for the shape

We started to look at different patterns by using various techniques with origami paper. We zoomed into particular folds of the paper, experimented with different light sources, photography, and used various effects to enhance the design.

The idea was to bring actual origami to the wallpaper as much as possible. We had to think about composition that would work across all screen sizes, especially desktop. As the wallpaper is a prominent feature in a desktop environment, we wanted to make sure that it was user friendly, allowing users to find documents and folders located on the computer screen easily. The main priority was to not let the design get in the way of everyday usage, but enhance it aesthetically and provide a great user experience.

After all the experiments with fold patterns and light sources, we started to look at colour. We wanted to integrate both the Ubuntu orange and Canonical aubergine to balance the brightness and played with gradient levels.

We balanced the contrast of the wallpaper's colour palette using a long, subtle gradient that kept a bright look and feel. This made the wallpaper brighter and more colourful.

Step 3. Final product

The result was successful. The new concept and use of the Suru language helped create a brighter wallpaper that fits our overall visual aesthetic. We created a three-dimensional look and feel that evokes the appearance of actual origami. The wallpaper is still recognisably Ubuntu, but at the same time looks fresh and different.

Ubuntu 16.04 Xenial Xerus


Ubuntu 16.04 Xenial Xerus ( light version)


What is next?

The Design Team is now looking at ways to bring the Suru language into animation and fold usage. The idea is to bring an overall seamless and consistent experience to the user, whilst reflecting our tone of voice and visual identity.

Read more
liam zheng

The conference is here! Samsung's big annual developer summit opens in San Francisco on April 27–28. San Francisco is a hugely attractive place, where developers and innovators gather to discuss the latest technology and future innovation.

This year the Internet of Things (IoT) will be a major focus, in particular the smart home and ARTIK, a chip designed for IoT. Our colleague Didier will be at the event giving a demo, and will also deliver a talk about ARTIK.

Our demo will be an app that makes it easy to deploy other applications on a home gateway. It gives you a first-hand feel for how a smartphone and a home robot communicate with each other to serve you. When you arrive home, the robot uses its camera to recognise that it's you, while its microphone enables voice interaction.

Beyond that, you can deploy other apps on the home gateway, including an access point (home WiFi), video services, local servers (so that data from the likes of Skype and Google Hangouts is stored locally), home automation devices and more.

Our earlier blog posts also cover this event, for example here.

Read more
liam zheng

Airborne mobile base stations come to the UK

EE, the UK's largest mobile operator (now part of BT), recently announced that it will work with Lime Micro, one of the leaders of the new wave of open-source mobile network technology, and Canonical (Ubuntu) to deliver better mobile network coverage across the UK.

EE is investing heavily in its network, aiming for 95% geographic 4G coverage of the UK by 2020. However, not every area can accommodate, or is suitable for, large new signal towers. EE wants to make full use of existing infrastructure such as lighthouses, tall buildings and hilltops. A further challenge is that covering remote areas with existing methods is not economically or technically feasible. That is why EE has chosen to work with technology innovators to develop solutions that are cheaper, smaller, more flexible and more effective.

Lime Micro is about to crowdfund the first app-enabled open-source software-defined radio, the LimeSDR. With a 4G app, the LimeSDR will form the basis of a fully functional base station. Mounted on a balloon or a drone, such a base station can cover areas that are hard to reach by other means. Embedding these base stations inside equipment serving other purposes, such as vending machines, banking facilities, smart lamp posts and digital signage, will also lower the cost of deploying connectivity.

EE wants remote communities to take part in building the network too. These communities can propose the features they need and even help maintain the network. This lets communities and operators collaborate in entirely new ways, and with local support it can even reduce the need for trained technicians to travel long distances to fix problems. For example, if a base station simply needs rebooting, sending an engineer 300 miles from Edinburgh to the isles would be uneconomical (or impractical).

EE will run challenges at universities across the UK to encourage innovative, open-source ideas on how to connect unconnected areas and reduce operating costs. Anyone with a good idea can take part. Ubuntu Core is open source, app-enabled and production-ready: buy a LimeSDR, download Ubuntu Core, and show the world how your app or device can lower the cost of running a network. Bringing coverage to unconnected areas promotes economic development, so your efforts will benefit society. We can't wait to see how you make the future of wireless networking even better.

LimeSDR Crowd Supply promotional video (Youku)

Read more
Dustin Kirkland

I'm delighted to share the slides from our joint IBM and Canonical webinar about Ubuntu on IBM POWER8 and LinuxOne servers.  You can download the PDF here, or tab through the slides embedded below.  The audio/video recording should be available tomorrow.



Cheers,
:-Dustin

Read more
Tim Penhey (thumper)

It has been too long

Well, it has certainly been a lot longer since I wrote a post than I thought.

My work at Canonical still has me on the Juju team. Juju has come a long way in the last few years, and we are on the final push for the 2.0 version. This was initially intended to come out with the Xenial release, but unfortunately was not ready. Xenial has 2.0-beta4 right now, soon to be beta 6. Hoping that real soon now we'll step through the release candidates to a final release. This will be SRU'ed into both Xenial and Trusty.

I plan to do some more detailed posts on some of the Go utility libraries that have come out of the Juju work. In particular, talking again about loggo which I moved under the "github.com/juju" banner, and the errors package.
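
As a taste of those libraries, here's a minimal sketch combining the two; the logger name, error text and config path are purely illustrative:

package main

import (
    "github.com/juju/errors"
    "github.com/juju/loggo"
)

// Module-level logger, namespaced under a hypothetical "demo" module.
var logger = loggo.GetLogger("demo")

func loadConfig(path string) error {
    // Annotate the underlying error with context while keeping the cause.
    return errors.Annotatef(errors.NotFoundf("config %q", path), "loading config")
}

func main() {
    loggo.ConfigureLoggers("demo=DEBUG")
    if err := loadConfig("/etc/demo.conf"); err != nil {
        // ErrorStack prints the full annotated error trace.
        logger.Errorf("startup failed: %s", errors.ErrorStack(err))
    }
}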

Recent work has had me look at the database agnostic model representations for migrating models from one controller to another, and also at gomaasapi - the Go library for talking with MAAS. Perhaps more on that later.

Read more
Jamie Young

Ubuntu orange update

Recently, you may have seen our new colour palette update in the SDK. One notable change is the new hex code we’ve assigned to Ubuntu Orange for screen use. The new colour is #E95420.

We have a post coming soon that will delve deeper into our new palette but for now we just wanted to make sure this change is reflected on our website while at the same time touching on it through our blog. Our Suru visual language has evolved to have a lighter feel and we’ve adjusted the hex value in order to fit in with the palette as a whole.

We’ve updated our brand colour guidelines to take into account this change as well. You can find the new hex as well as all the tints of this colour that we recommend using in your design work.

Read more
April Wang

The rapid development of the Ubuntu phone platform, and its broad extension to IoT and all kinds of devices, requires us to solve real problems of security and reliability. The same challenges exist on the desktop and server platforms, and are especially frustrating for users and developers who want to run newer software on a long-term support release. This is what drove us to develop the snap packaging format and tools.

In Ubuntu 16.04 LTS, users will be able to install both snap packages and traditional deb packages. The two packaging formats coexist harmoniously and let us keep our existing OS development and upgrade processes. This approach strengthens our relationship with the Debian community, while letting developers and the community publish either deb or snap applications to Ubuntu users as they see fit.

With snap packages, developers can bring newer versions of applications to Ubuntu 16.04 LTS. Newer versions of KDE, GNOME, browsers and other desktop applications are usually easy to build on an older LTS, but the complexity of packaging and delivering updates has kept us from offering them in the past.

The security mechanisms of snap packages let us open up the platform and iterate faster across branches, because snap applications are isolated from the rest of the system. Users can install a snap without worrying about the impact on the system or on other applications. Likewise, developers get to decide exactly which versions of libraries are bundled with their application, giving them better control over the update cycle. Transactional updates make the deployment of snap packages more stable and reliable.

By adding snap support to Ubuntu 16.04 LTS, we give Ubuntu developers a single unified experience, whether they are building software for PCs, servers, phones or IoT devices. Snapcraft, a tool introduced for snap development, helps developers easily package applications and their dependencies. Developers targeting snaps get a great environment in which to write and test their applications directly on the desktop, without needing a device or a virtual machine.
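
To give a sense of what Snapcraft asks of a developer, here is a minimal, hypothetical snapcraft.yaml; the snap name, part and source URL are placeholders, and the exact schema has varied between Snapcraft releases:

name: hello-demo            # hypothetical snap name
version: "1.0"
summary: Minimal example snap
description: A sketch that builds one part and exposes a single command.

apps:
  hello:
    command: bin/hello      # binary produced by the part below

parts:
  hello:
    plugin: autotools       # build an autotools-based project
    source: https://example.com/hello-1.0.tar.gz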

Developers of paid applications have often had to manage dependencies and compatibility across a range of libraries, which is especially painful on older versions of Ubuntu. For this reason, these applications will migrate from deb to snap in the autumn of 2016. Canonical will work with the developer community over the coming months to provide the tools, training and documentation for a smooth transition.

While this is a significant new capability for the Ubuntu community, it does not abandon our heritage. 16.04 and later releases will continue to support the tens of thousands of applications and packages in the .deb format, and in particular the deb archive will remain available for everyone to use and to distribute software with.

If you'd like to learn more about using snaps on classic Ubuntu, or have any questions, join our upcoming sessions on ubuntuonair.com:

Snaps on classic Ubuntu Q&A with Olli Ries, Thursday 14 April, 15:00 UTC

Snappy Clinic, Tuesday 26 April, 15:00 UTC

As usual, you can join the discussion on the snappy-app-devel@lists.ubuntu.com mailing list, or ask and answer snappy questions on Ask Ubuntu.

Original author: Olli Ries. First published 13 April 2016; original post here.

Read more
Stéphane Graber

This is the ninth blog post in this series about LXD 2.0.

LXD logo

Introduction

One of the very exciting features of LXD 2.0, albeit experimental, is support for container checkpoint and restore.

Simply put, checkpoint/restore means that the running container state can be serialized down to disk and then restored, either on the same host as a stateful snapshot of the container, or on another host, which equates to live migration.

Requirements

To have access to container live migration and stateful snapshots, you need the following:

  • A very recent Linux kernel, 4.4 or higher.
  • CRIU 2.0, possibly with some cherry-picked commits depending on your exact kernel configuration.
  • Run LXD directly on the host. It’s not possible to use those features with container nesting.
  • For migration, the target machine must at least implement the instruction set of the source, the target kernel must at least offer the same syscalls as the source and any kernel filesystem which was mounted on the source must also be mountable on the target.

All the needed dependencies are provided by Ubuntu 16.04 LTS, in which case, all you need to do is install CRIU itself:

apt install criu
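
Before relying on stateful operations, it may be worth double-checking the requirements listed above; a quick sanity check (output will vary with your system):

uname -r          # should report 4.4 or higher
criu --version    # should report 2.0 or higher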

Using the thing

Stateful snapshots

A normal container snapshot looks like:

stgraber@dakara:~$ lxc snapshot c1 first
stgraber@dakara:~$ lxc info c1 | grep first
 first (taken at 2016/04/25 19:35 UTC) (stateless)

A stateful snapshot instead looks like:

stgraber@dakara:~$ lxc snapshot c1 second --stateful
stgraber@dakara:~$ lxc info c1 | grep second
 second (taken at 2016/04/25 19:36 UTC) (stateful)

This means that all the container runtime state was serialized to disk and included as part of the snapshot. Restoring one such snapshot is done as you would a stateless one:

stgraber@dakara:~$ lxc restore c1 second
stgraber@dakara:~$

Stateful stop/start

Say you want to reboot your server for a kernel update or similar maintenance. Rather than have to wait for all the containers to start from scratch after reboot, you can do:

stgraber@dakara:~$ lxc stop c1 --stateful

The container state will be written to disk and then picked up the next time you start it.

You can even look at what the state looks like:

root@dakara:~# tree /var/lib/lxd/containers/c1/rootfs/state/
/var/lib/lxd/containers/c1/rootfs/state/
├── cgroup.img
├── core-101.img
├── core-102.img
├── core-107.img
├── core-108.img
├── core-109.img
├── core-113.img
├── core-114.img
├── core-122.img
├── core-125.img
├── core-126.img
├── core-127.img
├── core-183.img
├── core-1.img
├── core-245.img
├── core-246.img
├── core-50.img
├── core-52.img
├── core-95.img
├── core-96.img
├── core-97.img
├── core-98.img
├── dump.log
├── eventfd.img
├── eventpoll.img
├── fdinfo-10.img
├── fdinfo-11.img
├── fdinfo-12.img
├── fdinfo-13.img
├── fdinfo-14.img
├── fdinfo-2.img
├── fdinfo-3.img
├── fdinfo-4.img
├── fdinfo-5.img
├── fdinfo-6.img
├── fdinfo-7.img
├── fdinfo-8.img
├── fdinfo-9.img
├── fifo-data.img
├── fifo.img
├── filelocks.img
├── fs-101.img
├── fs-113.img
├── fs-122.img
├── fs-183.img
├── fs-1.img
├── fs-245.img
├── fs-246.img
├── fs-50.img
├── fs-52.img
├── fs-95.img
├── fs-96.img
├── fs-97.img
├── fs-98.img
├── ids-101.img
├── ids-113.img
├── ids-122.img
├── ids-183.img
├── ids-1.img
├── ids-245.img
├── ids-246.img
├── ids-50.img
├── ids-52.img
├── ids-95.img
├── ids-96.img
├── ids-97.img
├── ids-98.img
├── ifaddr-9.img
├── inetsk.img
├── inotify.img
├── inventory.img
├── ip6tables-9.img
├── ipcns-var-10.img
├── iptables-9.img
├── mm-101.img
├── mm-113.img
├── mm-122.img
├── mm-183.img
├── mm-1.img
├── mm-245.img
├── mm-246.img
├── mm-50.img
├── mm-52.img
├── mm-95.img
├── mm-96.img
├── mm-97.img
├── mm-98.img
├── mountpoints-12.img
├── netdev-9.img
├── netlinksk.img
├── netns-9.img
├── netns-ct-9.img
├── netns-exp-9.img
├── packetsk.img
├── pagemap-101.img
├── pagemap-113.img
├── pagemap-122.img
├── pagemap-183.img
├── pagemap-1.img
├── pagemap-245.img
├── pagemap-246.img
├── pagemap-50.img
├── pagemap-52.img
├── pagemap-95.img
├── pagemap-96.img
├── pagemap-97.img
├── pagemap-98.img
├── pages-10.img
├── pages-11.img
├── pages-12.img
├── pages-13.img
├── pages-1.img
├── pages-2.img
├── pages-3.img
├── pages-4.img
├── pages-5.img
├── pages-6.img
├── pages-7.img
├── pages-8.img
├── pages-9.img
├── pipes-data.img
├── pipes.img
├── pstree.img
├── reg-files.img
├── remap-fpath.img
├── route6-9.img
├── route-9.img
├── rule-9.img
├── seccomp.img
├── sigacts-101.img
├── sigacts-113.img
├── sigacts-122.img
├── sigacts-183.img
├── sigacts-1.img
├── sigacts-245.img
├── sigacts-246.img
├── sigacts-50.img
├── sigacts-52.img
├── sigacts-95.img
├── sigacts-96.img
├── sigacts-97.img
├── sigacts-98.img
├── signalfd.img
├── stats-dump
├── timerfd.img
├── tmpfs-dev-104.tar.gz.img
├── tmpfs-dev-109.tar.gz.img
├── tmpfs-dev-110.tar.gz.img
├── tmpfs-dev-112.tar.gz.img
├── tmpfs-dev-114.tar.gz.img
├── tty.info
├── unixsk.img
├── userns-13.img
└── utsns-11.img

0 directories, 154 files

Restoring the container can be done with a simple:

stgraber@dakara:~$ lxc start c1

Live migration

Live migration is basically the same as the stateful stop/start above, except that the container directory and configuration happen to be moved to another machine too.

stgraber@dakara:~$ lxc list c1
+------+---------+-----------------------+----------------------------------------------+------------+-----------+
| NAME |  STATE  |          IPV4         |                     IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+-----------------------+----------------------------------------------+------------+-----------+
| c1   | RUNNING | 10.178.150.197 (eth0) | 2001:470:b368:4242:216:3eff:fe19:27b0 (eth0) | PERSISTENT | 2         |
+------+---------+-----------------------+----------------------------------------------+------------+-----------+

stgraber@dakara:~$ lxc list s-tollana:
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

stgraber@dakara:~$ lxc move c1 s-tollana:

stgraber@dakara:~$ lxc list c1
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

stgraber@dakara:~$ lxc list s-tollana:
+------+---------+-----------------------+----------------------------------------------+------------+-----------+
| NAME |  STATE  |          IPV4         |                     IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+-----------------------+----------------------------------------------+------------+-----------+
| c1   | RUNNING | 10.178.150.197 (eth0) | 2001:470:b368:4242:216:3eff:fe19:27b0 (eth0) | PERSISTENT | 2         |
+------+---------+-----------------------+----------------------------------------------+------------+-----------+

Limitations

As I said before, checkpoint/restore of containers is still pretty new and we're still very much working on this feature, fixing issues as we are made aware of them. We do need more people trying this feature and sending us feedback; I would, however, not recommend using this in production just yet.

The current list of issues we’re tracking is available on Launchpad.

We expect a basic Ubuntu container with a few services to work properly with CRIU in Ubuntu 16.04. However, more complex containers using device passthrough, complex network services or special storage configurations are likely to fail.

Whenever possible, CRIU will fail at dump time, rather than at restore time. In such cases, the source container will keep running, the snapshot or migration will simply fail and a log file will be generated for debugging.

In rare cases, CRIU fails to restore the container, in which case the source container will still be around but will be stopped and will have to be manually restarted.

Sending bug reports

We’re tracking bugs related to checkpoint/restore against the CRIU Ubuntu package on Launchpad. Most of the work to fix those bugs will then happen upstream either on CRIU itself or the Linux kernel, but it’s easier for us to track things this way.

To file a new bug report, head here.

Please make sure to include:

  • The command you ran and the error message as displayed to you
  • Output of “lxc info” (*)
  • Output of “lxc info <container name>”
  • Output of “lxc config show --expanded <container name>”
  • Output of “dmesg” (*)
  • Output of “/proc/self/mountinfo” (*)
  • Output of “lxc exec <container name> -- cat /proc/self/mountinfo”
  • Output of “uname -a” (*)
  • The content of /var/log/lxd.log (*)
  • The content of /etc/default/lxd-bridge (*)
  • A tarball of /var/log/lxd/<container name>/ (*)

If reporting a migration bug as opposed to a stateful snapshot or stateful stop bug, please include the data for both the source and target for any of the above which has been marked with a (*).
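
To save some typing, a small shell script along these lines can gather most of the host-side items in one go; it's a hypothetical convenience, not an official tool (run it as root and adjust the container name as needed):

#!/bin/sh
# Collect LXD checkpoint/restore debug information for one container.
# Usage: ./collect-lxd-info.sh <container name>
name="$1"
out="lxd-debug-$name"
mkdir -p "$out"
lxc info > "$out/lxc-info.txt"
lxc info "$name" > "$out/container-info.txt"
lxc config show --expanded "$name" > "$out/container-config.txt"
dmesg > "$out/dmesg.txt"
cat /proc/self/mountinfo > "$out/host-mountinfo.txt"
lxc exec "$name" -- cat /proc/self/mountinfo > "$out/container-mountinfo.txt"
uname -a > "$out/uname.txt"
cp /var/log/lxd.log /etc/default/lxd-bridge "$out/"
tar -czf "$out/container-logs.tar.gz" "/var/log/lxd/$name/"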

Extra information

The CRIU website can be found at: https://criu.org

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
facundo

Some of us ask your forgiveness


If the water I drink rots, rots, I rot inside as well.
If the air I breathe rots, rots, my way of being rots too.

The mountain agonises, emptied of its mineral, of its heart.
The dam, such lovely energy: another river dies, life dies.

Pachamama, Mother Earth, mother of all the colours.
Pachamama, Mother Earth, mother of all the flavours.

There are forests that gave oxygen and shade and now can no longer be seen.
The Earth writhes inside and so many flowers no longer grow.

Pachamama, Mother Earth, mother of all the colours.
Pachamama, Mother Earth, mother of all the flavours.

Some of us ask your forgiveness.

(“Pachamama”, Arbolito)

Read more
niemeyer

As much anticipated, this week Ubuntu 16.04 LTS was released with integrated support for snaps on classic Ubuntu.

Snappy 2.0 is a modern software platform that includes the ability to define rich interfaces between snaps that control their security and confinement, comprehensive observation and control of system changes, completion and undoing of partial system changes across restarts/reboots/crashes, macaroon-based authentication for local and store access, a preliminary development mode, a polished filesystem layout and CLI experience, modern sequencing of revisions, and so forth.

The previous post in this series described the reassuring details behind how snappy does system changes. This post will now cover Snappy interfaces, the mechanism that controls the confinement and integration of snaps with other snaps and with the system itself.

A snap interface gives one snap the ability to use resources provided by another snap, including the operating system snap (ubuntu-core is itself a snap!). That’s quite vague, and intentionally so. Software interacts with other software for many reasons and in diverse ways, and Snappy is a platform that has to mediate all of that according to user needs.

In practice, though, the mechanism is straightforward and pleasant to deal with. Without any snaps in the system, there are no interfaces available:

% sudo snap interfaces
error: no interfaces found

If we install the ubuntu-core snap alone (done implicitly when the first snap is installed), we can already see some interface slots being provided by it, but no plugs connected to them:

% sudo snap install ubuntu-core
75.88 MB / 75.88 MB [=====================] 100.00 % 355.56 KB/s 

% snap interfaces
Slot                 Plug
:firewall-control    -
:home                -
:locale-control      -
(...)
:opengl              -
:timeserver-control  -
:timezone-control    -
:unity7              -
:x11                 -

The syntax is <snap>:<slot> and <snap>:<plug>. The lack of a snap name is a shorthand notation for slots and plugs on the operating system snap.

Now let’s install an application:

% sudo snap install ubuntu-calculator-app
120.01 MB / 120.01 MB [=====================] 100.00 % 328.88 KB/s 

% snap interfaces
Slot                 Plug
:firewall-control    -
:home                -
:locale-control      -
(...)
:opengl              ubuntu-calculator-app
:timeserver-control  -
:timezone-control    -
:unity7              ubuntu-calculator-app
:x11                 -

At this point the application should work fine. But let’s instead see what happens if we take away one of these interfaces:

% sudo snap disconnect \
             ubuntu-calculator-app:unity7 ubuntu-core:unity7 

% /snap/bin/ubuntu-calculator-app.calculator
QXcbConnection: Could not connect to display :0

The installed application depends on unity7, which is itself based on X11, to display properly. When we disconnected the interface that gave it permission to access those resources, the application was unable to touch them.
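
The connection can be re-established just as easily, mirroring the disconnect syntax above, after which the calculator displays again:

% sudo snap connect \
             ubuntu-calculator-app:unity7 ubuntu-core:unity7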

The security minded will observe that X11 is not in fact a secure protocol. A number of system abuses are possible when we hand an application this permission. Other interfaces such as home would give the snap access to every non-hidden file in the user’s $HOME directory (those that do not start with a dot), which means a malicious application might steal personal information and send it over the network (assuming it also defines a network plug).

Some might be surprised that this is the case, but this is a misunderstanding about the role of snaps and Snappy as a software platform. When you install software from the Ubuntu archive, that’s a statement of trust in the Ubuntu and Debian developers. When you install Google’s Chrome or MongoDB binaries from their respective archives, that’s a statement of trust in those developers (these have root on your system!). Snappy is not eliminating the need for that trust, as once you give a piece of software access to your personal files, web camera, microphone, etc, you need to believe that it won’t be using those allowances maliciously.

The point of Snappy’s confinement in that picture is to enable a software ecosystem that can control exactly what is allowed and to whom in a clear and observable way, in addition to the same procedural care that we’ve all learned to appreciate in the Linux world, not instead of it. Preventing people from using all relevant resources in the system would simply force them to use that same software over less secure mechanisms instead of fixing the problem.

And what we have today is just the beginning. These interfaces will soon become much richer and more fine grained, including resource selection (e.g. which serial port?), and some of them will disappear completely in favor of more secure choices (Unity 8, for instance).

These are exciting times for Ubuntu and the software world.

@gniemeyer

Read more
Daniel Holbach

Ubuntu 16.04 is out!

Ubuntu 16.04 – yet another LTS?

Ubuntu 16.04 LTS, aka the Xenial Xerus, has just been released. It’s incredible that it’s already the 24th Ubuntu release and the 6th LTS release. If you have been around for a while and need a blast from the past, check out this video:

Click here to view it on YouTube. It's also available in /usr/share/example-content on a default desktop install.

You would think that after such a long time releases would become somewhat inflationary and less important, and while I'd very likely say on every release day "yes, this one is the best of all so far", Ubuntu 16.04 is indeed very special to me.

Snappy snappy snappy

Among the many pieces of goodness coming your way is the snapd package. It's installed by default on many flavours of Ubuntu, including Ubuntu Desktop, and is your snappy connection to myApps.

Snappy Ubuntu Core 2.0 landed just in time for the 16.04 LTS release thanks to the great and very hard work of many teams and individuals. I also see it as the implementation of lots of feedback we have been getting from third-party app developers, ISVs and upstream projects over the years. What all of them wanted was, in a nutshell: a solid Ubuntu base, flexibility in handling their app and the relevant stack, independence from distro freezes, dead-simple packaging, bullet-proof upgrades and rollbacks, and an app store model established with the rise of the smartphones. Snappy Ubuntu Core is exactly that and more. What it also brings to Ubuntu is a clear isolation between apps and a universal trust model.

As most of you know, I've been trying to teach people how to do packaging for Ubuntu for years; it has continued to improve and get easier, but all in all, it is still hard to get right. snapcraft makes this so much easier. It's just beautiful. If you have done some packaging in the past, just take a look at some of the examples.

Landing a well-working and stable snapd with a clear-cut and stable set of APIs was the most important aspect, especially considering that almost everyone will be basing their work on 16.04 LTS, which is going to be supported for five years. This includes being able to use snapcraft on the LTS.

Today you can build a snap and upload it to the store using snapcraft upload; it is automatically reviewed and published by the store, and desktop users can install it on their systems. This puts you in a position where you can easily share your software with millions of users, without having to wait for somebody to upload it to the distro for you, and without your users having to add yet another PPA.
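
That whole cycle fits in a couple of commands; a rough sketch, where the .snap file name is illustrative and the exact invocations have varied between Snapcraft releases:

$ snapcraft                                # build the snap from snapcraft.yaml
$ snapcraft upload hello_1.0_amd64.snap    # send it to the store for review
$ sudo snap install hello                  # what your users then run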

So, what’s still missing? Quite a few things actually. Because you have to bundle your dependencies, packages are still quite big. This will change as soon as the specifics of OS and library snaps are figured out. Apart from that many new interfaces will need to be added to make Ubuntu Core really useful and versatile. There are also still a few bugs which need figuring out.

If you generally like what you're reading here, come and talk to us. Introduce yourselves, talk to us, and we'll figure out if anything you need is still missing.

If you’re curious you can also check out some blog posts written by people who worked on this relentlessly in the last weeks:

Thanks a lot everyone – I thoroughly enjoyed working with you on this and I’m looking forward to all the great things we are going to deliver together!

Bring your friends, bring your questions!

The Community team moved the weekly Ubuntu Community Q&A to tomorrow, Friday 2016-04-22 at 15:00 UTC, on https://ubuntuonair.com as usual. If you have questions, tune in and bring your friends as well!

Read more
Community Team

The squirrel has landed!

Today, we are proud to bring you our 6th Long Term Support release: Ubuntu 16.04 LTS. It is the sum of the work of thousands of people collaborating all over the world, working tirelessly for the last six months, and we'd like to share a few highlights.

Ubuntu 16.04 LTS is here!

Software distribution

You have probably already heard about this: we are bringing a new package format for you to start distributing your apps. It's called snap, and it allows you to deliver software to users without going through the traditional Ubuntu archive inclusion process. Whether you are making a game, a utility or the next Firefox, it will enable you to continuously bring the latest version to users. And it's easy! It will mostly involve adding a single declarative file to your source tree.

Ubuntu Core and Snapcraft

Snappy Ubuntu Core is the future of Ubuntu. It is built around the snap packaging format and is a brand new world if you are used to classic Ubuntu. Transactional updates, confined apps, smaller and very modular: it's the Ubuntu for all devices and form factors, whether that's your ARM board, your router, your drone or your laptop… your imagination is the limit.

Version 2.0 has just been released, and we’ve collected the highlights from the development team to get you started:

This is important for app developers in a multitude of ways. Snappy Ubuntu Core incorporates a lot of the feedback from third-party app developers, ISVs and upstream projects that we have been getting over the years. What all of them wanted, in a nutshell, was: a solid Ubuntu base, a lot of flexibility in handling their app and the relevant stack, being mostly independent from distro freezes, dead-simple packaging, bullet-proof upgrades and rollbacks, and an app store model established with the rise of the smartphones. Snappy Ubuntu Core is exactly that and more. What it also brings to Ubuntu is a clear isolation between apps and a universal trust model.

We have been working with the Engineering teams extensively and assisted them in testing the software, making sure that things worked, writing documentation, putting together examples and bootstrapping an initial community.

What we have today is just the start. There are still a number of details to be figured out, which will all land in Ubuntu through SRUs and Ubuntu Core updates.

Phone and Tablet

We have a tablet that converges into a desktop when a Bluetooth mouse is detected! It ships with some desktop apps by default, such as Firefox and LibreOffice, and of course Ubuntu SDK apps. This is an exciting moment for everyone involved as it's a milestone on the road to full device convergence: many form factors and architectures, one codebase.

A few days ago we released the latest and greatest phone over-the-air update, OTA 10, which, as usual, brings new features and bug fixes, such as:

  • Re-designed Out Of the Box Experience

  • VPN support

  • Easy switching to desktop mode

  • New colour palette

  • New default apps: uNav, Dekko, Calendar

For more, see the release notes.

Here is what to look forward to in OTA 11 and of course, the Ubuntu SDK roadmap for the next 6 months: speed and more convergence.

Community phone ports

Our porting community of volunteers, led by the indefatigable Marius Gripsgard, has been extending the range of devices on which Ubuntu can be installed.

Along with the OnePlus One port, a Fairphone 2 port, with great help and support from the Fairphone Engineering team, is on the way. The ubports site and the Porting Guide have all the information on status, how to get started and contribute to new or existing ports.

Developer portal

The Developer Portal is the place to get started with developing apps for Ubuntu, no matter if your primary interest is the phone, IoT devices or Ubuntu in general. Thus we have been supporting the various Engineering and product teams to bring together all app development resources and present them in a coherent and digestible way.

One important update was reflecting the changes in products and priorities. We wanted to make it clearer that the primary choice on the site is the one concerning products. An overview of the related changes (both implemented and planned) can be seen here.

A lot of work was put into importing already existing documentation, both guides written by Engineering teams and API docs. As usual in an organisation as diverse as Ubuntu, these come in various forms, and we had to adapt to bring them onto the site without confusing our users. From now on it will be easier to import more API docs from more packages and from various frameworks at the same time.

One of the great features of the developer site is that it will allow us to get the imported guides translated as well. This is useful for the docs imported from our Markdown importer, e.g. snappy and snapcraft. Here we almost exclusively rely on the great work of the Engineering teams and work in conjunction with them. The Marketing team has been contributing some more docs recently, which will land on the site very soon. On the snappy side of things, we also automatically import available gadget snaps from the store.

With the amount of information growing and growing, we are looking for ways to provide more clarity next cycle. We would like to make the versioning of documentation more obvious and improve the navigation. Luckily we are not alone in this quest, but are working on it together with the Design and Web teams. And lastly, we are looking to land a new blog engine soon, which is being tested on the Ubucon Site right now.

Community and planning

The next edition of the Ubuntu Online Summit is also coming in two weeks, from 3rd to 5th May. It is an excellent opportunity to meet other community members, plan the next cycle together, and learn about and provide feedback on the roadmaps of the Engineering teams. We hope to see you there.

We’d like to thank everyone who has helped put together yet again our best release so far: from documenters, to translators, to forum and Ask Ubuntu moderators, IRC operators, advocates, bug triagers, testers, app developers, packagers, artists and more. Here’s to you: happy Ubuntu 16.04!

Read more
Colin Ian King

I got bitten this week by the clone() system call returning -EINVAL on aarch64 for code that worked fine on x86.  After re-reading the manual several times and looking at my code, I resorted to shoving debug statements into the kernel to track down where the -EINVAL was occurring.

The answer to my issue is in arch/arm64/kernel/process.c, copy_thread():

        if (stack_start) {
                if (is_compat_thread(task_thread_info(p)))
                        childregs->compat_sp = stack_start;
                /* 16-byte aligned stack mandatory on AArch64 */
                else if (stack_start & 15)
                        return -EINVAL;
                else
                        childregs->sp = stack_start;
        }

Ahah! The stack being passed into clone() has to be 16-byte aligned.  With this simple fix to my code, clone() worked.  Pity this was not in the documentation.
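
For illustration, here's a minimal sketch of allocating a child stack that satisfies the alignment requirement; the stack size and clone flags are arbitrary choices:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

#define STACK_SIZE (64 * 1024)

static int child_fn(void *arg)
{
    (void)arg;
    printf("child running\n");
    return 0;
}

int main(void)
{
    /* aligned_alloc guarantees a 16-byte aligned base, and STACK_SIZE is
       a multiple of 16, so the stack top is aligned too.  The stack grows
       down on AArch64, so clone() gets the top of the allocation. */
    void *stack = aligned_alloc(16, STACK_SIZE);
    if (!stack)
        return 1;

    pid_t pid = clone(child_fn, (char *)stack + STACK_SIZE, SIGCHLD, NULL);
    if (pid < 0) {
        perror("clone");
        return 1;
    }
    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}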

Read more
Inayaili de León Persson

Redesigning ubuntu.com’s navigation

Last month, the web team had an offsite (outside of the office) week sprint in Holborn, central London. We don’t all work on exactly the same projects in our day-to-day, so it’s nice to have this type of sprint, where everyone can work together and show each other what we’ve been working on over the last few months.

 

The web team at our offsite in Holborn

 

One of the key items in the agenda for the week was to brainstorm possible solutions for the key areas we want to improve in our navigation.

The last time we did a major redesign of ubuntu.com’s navigation was back in 2013 — time flies! So we thought now would be a good time to explore ways to improve how users navigate Ubuntu’s main website and how they discover other sites in the Ubuntu ecosystem.

Collaborative sketching

On the second day of the sprint, Francesca walked us through the current version of the navigation and explained the four key issues we want to solve. For each of the issues, she also showed how other sites solve similar problems.

While the 2013 navigation redesign solved some of the problems we were facing at the time (poor exposure of our breadth of products was a big issue, for instance) other issues have emerged since then that we’ve been meaning to address for a while.

After Francesca’s intro, we broke into four groups, and each group sketched a solution for each of the issues, which was then presented to everyone so we could discuss.

 

Discussing the team’s sketches

 

By having everyone in the team, from UX and visual designers, to developers and project managers, contribute ideas and sketch together, we were able to generate lots of ideas in a very short amount of time. We could move the project forward much more quickly than if one UX or one designer had to come up with ideas by themselves over days or weeks.

Improving the global navigation

 

ubuntu.com’s global navigation bar

 

The main things to improve on the global navigation were:

  • Some users don’t see the light grey bar above the main orange navigation
  • Some users think the links in the global nav are part of the ubuntu.com site
  • Important sites sit under the “More” link and do not have enough visibility
  • On the new full width sections, the global nav, main nav and “breadcrumb” form a visual sandwich effect that could create confusion

It was interesting to see the different groups come up with similar ideas on how to improve the usability of the global navigation. Some of the suggestions were simpler, and others more involved.

The main suggestions for improvement that we discussed were:

  • Rename the “More” link as “More sites” to make it more clear
  • Increase the number of links shown by default by using the full width of the page
  • Explore using different colours for the background, such as a dark grey
  • Explore having a drawer that exposes all Ubuntu sites, instead of the links being on display all the time (like the Bloomberg website)

 


 

On Bloomberg.com, when you click the “Bloomberg the Company & Its Products” link at the top, a large drawer opens that exposes the Bloomberg universe of sites.

 

Improving the main navigation

 

ubuntu.com’s main navigation with a dropdown that lists second-level links

 

The main ubuntu.com nav is possibly the most crucial part of the navigation that we would like to improve. The key improvements we’d like to work on are:

  • Having the ability to highlight key sections or featured content
  • Rely less on the site’s IA to structure the navigation as it makes it hard to cross-link between different sections (some pages are relevant to more than one section)
  • Improve the visibility of content that lives in the third level (it is currently only accessible via the breadcrumb-style navigation)

Even though the proposed solutions were different from each other, there were some patterns that popped up more than once amongst the different groups:

  • Featured content within the navigation (eg. the original Boston Globe responsive redesign, and Amazon.co.uk)
  • Links that live in more than one place (eg. John Lewis, Marks & Spencer and other retailers)
  • The idea of using the mega menu pattern

 

In the first version of the Boston Globe’s redesigned website, the main navigation included simple featured content for each of the sections

 


On the John Lewis website, you can get to nursery furniture from both “Home & Garden > Room” and “Baby & Child > Baby & Nursery”

 

The main navigation on Amazon.co.uk includes featured ads

 

Solving the third level links issues

 

ubuntu.com’s breadcrumb-style navigation showing third-level links

 

The main issues with the current third level links are:

  • We are using a recognisable design pattern (the breadcrumb) in a not so typical way (as second and third level navigation)
  • When you click on a third level link, you lose the visibility of the second level links, making it harder to understand where you are in the site
  • The pattern isn’t flexible: some of our labels are long, and only a small number of links fit horizontally, even on a large screen

One thing that we are almost certain of is that we will not be able to remove third level links completely from our site, so we need to improve the way we show them.

The most interesting suggestions on how to handle this issue were:

  • Include a table of contents for the section at the top of pages (like GOV.uk)
  • Key third level links could be highlighted in the main navigation menu

 

On GOV.uk, sections which have sub-sections include an overview of all the sub-sections at the top of every page. This makes it easier to understand the type and breadth of content in that section

 

Improving the footer

 

ubuntu.com’s “fat footer” with three levels of links

 

Ubuntu.com’s footer is certainly the simplest of the problems we want to solve in terms of navigation. But, nonetheless, there are some issues with the current solution:

  • There are too many levels of links in the footer
  • The download section is hidden in the second footer level, despite being one of the main top level links

The most popular idea to come out of the sketching session was to make better use of the available space, so that sections can be listed in a masonry-type grid rather than one per column, as they currently are.

This means we’d need fewer columns to list all important content and expose the IA of the site. It also means we can ditch the middle level of the current footer design (which now holds the About, Support and Download sections).
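
As a rough illustration of the idea, CSS multi-column layout produces exactly this kind of masonry listing; the class names here are hypothetical, not ubuntu.com’s actual markup:

/* A .footer container holding one .footer-section block per section. */
.footer {
  column-count: 4;       /* flow the sections into four columns */
  column-gap: 2em;
}
.footer-section {
  break-inside: avoid;   /* keep each section's links together */
  margin-bottom: 1.5em;
}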

Next steps

The next step in the navigation redesign project is to build prototypes to test some of the ideas we had in our workshop, and refine the visual direction. We want to see if the assumptions we’ve made are true, and if the patterns are easy and intuitive to use across different screen sizes and devices.

We will then do some user testing and iterate based on the feedback we get.

We might also have to do some IA work alongside the redesign of the main navigation, if we do want to have links that are listed in different sections and if we want to present more curated links based on user tasks in each of the sections.

Stay tuned!

Read more
Dustin Kirkland


I'm thrilled to introduce Docker 1.10.3, available on every Ubuntu architecture, for Ubuntu 16.04 LTS, and announce the General Availability of Ubuntu Fan Networking!

That's Ubuntu Docker binaries and Ubuntu Docker images for:
  • armhf (rpi2, et al. IoT devices)
  • arm64 (Cavium, et al. servers)
  • i686 (does anyone seriously still run 32-bit intel servers?)
  • amd64 (most servers and clouds under the sun)
  • ppc64el (OpenPower and IBM POWER8 machine learning super servers)
  • s390x (IBM System Z LinuxOne super uptime mainframes)
That's Docker-Docker-Docker-Docker-Docker-Docker, from the smallest Raspberry Pis to the biggest IBM mainframes in the world today!  Never more than one 'sudo apt install docker.io' command away.

Moreover, we now have Docker running inside of LXD!  Containers all the way down.  Application containers (e.g. Docker), inside of Machine containers (e.g. LXD), inside of Virtual Machines (e.g. KVM), inside of a public or private cloud (e.g. Azure, OpenStack), running on bare metal (take your pick).

Let's have a look at launching a Docker application container inside of a LXD machine container:

kirkland@x250:~⟫ lxc launch ubuntu-daily:x -p default -p docker
Creating magical-damion
Starting magical-damion
kirkland@x250:~⟫ lxc list | grep RUNNING
| magical-damion | RUNNING | 10.16.4.52 (eth0) | | PERSISTENT | 0 |
kirkland@x250:~⟫ lxc exec magical-damion bash
root@magical-damion:~# apt update >/dev/null 2>&1 ; apt install -y docker.io >/dev/null 2>&1
root@magical-damion:~# docker run -it ubuntu bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
759d6771041e: Pull complete
8836b825667b: Pull complete
c2f5e51744e6: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:b4dbab2d8029edddfe494f42183de20b7e2e871a424ff16ffe7b15a31f102536
Status: Downloaded newer image for ubuntu:latest
root@0577bd7d5db1:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1296 (1.2 KB)  TX bytes:648 (648.0 B)


Oh, and let's talk about networking...  We're also pleased to announce the general availability of Ubuntu Fan networking -- specially designed to connect all of your Docker containers spread across your network.  Ubuntu's Fan networking feature is an easy way to make every Docker container on your local network easily addressable by every other Docker host and container on the same network.  It's high performance, super simple, utterly deterministic, and we've tested it on every major public cloud as well as OpenStack and our private networks.
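
A quick note on how those addresses relate, judging from the transcripts below (a simplified view of the default mapping, not a full specification): the host bits of each underlay address are folded into the 250.0.0.0/8 overlay, giving every Docker host its own /24 slice to hand out to its containers.

underlay host 10.0.0.45/16  ->  overlay slice 250.0.45.0/24
underlay host 10.0.0.8/16   ->  overlay slice 250.0.8.0/24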

Simply installing Ubuntu's Docker package will also install the ubuntu-fan package, which provides an interactive setup script, fanatic, should you choose to join the Fan.  Simply run 'sudo fanatic' and answer the questions.  You can trivially revert your Fan networking setup easily with 'sudo fanatic deconfigure'.

kirkland@x250:~$ sudo fanatic 
Welcome to the fanatic fan networking wizard. This will help you set
up an example fan network and optionally configure docker and/or LXD to
use this network. See fanatic(1) for more details.
Configure fan underlay (hit return to accept, or specify alternative) [10.0.0.0/16]:
Configure fan overlay (hit return to accept, or specify alternative) [250.0.0.0/8]:
Create LXD networking for underlay:10.0.0.0/16 overlay:250.0.0.0/8 [Yn]: n
Create docker networking for underlay:10.0.0.0/16 overlay:250.0.0.0/8 [Yn]: Y
Test docker networking for underlay:10.0.0.45/16 overlay:250.0.0.0/8
(NOTE: potentially triggers large image downloads) [Yn]: Y
local docker test: creating test container ...
34710d2c9a856f4cd7d8aa10011d4d2b3d893d1c3551a870bdb9258b8f583246
test master: ping test (250.0.45.0) ...
test slave: ping test (250.0.45.1) ...
test master: ping test ... PASS
test master: short data test (250.0.45.1 -> 250.0.45.0) ...
test slave: ping test ... PASS
test slave: short data test (250.0.45.0 -> 250.0.45.1) ...
test master: short data ... PASS
test slave: short data ... PASS
test slave: long data test (250.0.45.0 -> 250.0.45.1) ...
test master: long data test (250.0.45.1 -> 250.0.45.0) ...
test master: long data ... PASS
test slave: long data ... PASS
local docker test: destroying test container ...
fanatic-test
fanatic-test
local docker test: test complete PASS (master=0 slave=0)
This host IP address: 10.0.0.45

I've run 'sudo fanatic' here on a couple of machines on my network -- x250 (10.0.0.45) and masterbr (10.0.0.8), and now I'm going to launch a Docker container on each of those two machines, obtain each IP address on the Fan (250.x.y.z), install iperf, and test the connectivity and bandwidth between each of them (on my gigabit home network).  You'll see that we'll get 900mbps+ of throughput:

kirkland@x250:~⟫ sudo docker run -it ubuntu bash
root@c22cf0d8e1f7:/# apt update >/dev/null 2>&1 ; apt install -y iperf >/dev/null 2>&1
root@c22cf0d8e1f7:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:fa:00:2d:00
          inet addr:250.0.45.0  Bcast:0.0.0.0  Mask:255.0.0.0
          inet6 addr: fe80::42:faff:fe00:2d00/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:6423 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4120 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:22065202 (22.0 MB)  TX bytes:227225 (227.2 KB)

root@c22cf0d8e1f7:/# iperf -c 250.0.8.0
multicast ttl failed: Invalid argument
------------------------------------------------------------
Client connecting to 250.0.8.0, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[ 3] local 250.0.45.0 port 54274 connected with 250.0.8.0 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.05 GBytes 902 Mbits/sec

And the second machine:
kirkland@masterbr:~⟫ sudo docker run -it ubuntu bash
root@effc8fe2513d:/# apt update >/dev/null 2>&1 ; apt install -y iperf >/dev/null 2>&1
root@effc8fe2513d:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:fa:00:08:00
          inet addr:250.0.8.0  Bcast:0.0.0.0  Mask:255.0.0.0
          inet6 addr: fe80::42:faff:fe00:800/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:7659 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3433 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:22131852 (22.1 MB)  TX bytes:189875 (189.8 KB)

root@effc8fe2513d:/# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 250.0.8.0 port 5001 connected with 250.0.45.0 port 54274
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.05 GBytes 899 Mbits/sec


Finally, let's have another long hard look at the image from the top of this post.  Download it in full resolution to study very carefully what's happening here, because it's pretty [redacted] amazing!


Here, we have a Byobu session split into 6 panes (Shift-F2 five times, Shift-F8 six times).  In each pane, we have an SSH session to Ubuntu 16.04 LTS servers spread across 6 different architectures: armhf, arm64, i686, amd64, ppc64el, and s390x.  I used the Shift-F9 key to simultaneously run the same commands in each and every window.  Here are the commands I ran:

clear
lxc launch ubuntu-daily:x -p default -p docker
lxc list | grep RUNNING
uname -a
dpkg -l docker.io | grep docker.io
sudo docker images | grep -m1 ubuntu
sudo docker run -it ubuntu bash
apt update >/dev/null 2>&1 ; apt install -y net-tools >/dev/null 2>&1
ifconfig eth0
exit

That's right.  We just launched Ubuntu LXD containers, as well as Docker containers against every Ubuntu 16.04 LTS architecture.  How's that for Ubuntu everywhere!?!

Ubuntu 16.04 LTS will be one hell of a release!

:-Dustin

Read more
niemeyer

As announced last Saturday, Snappy Ubuntu Core 2.0 has just been tagged and made its way into the archives of Ubuntu 16.04, which is due for final release in the coming days. So this is a nice time to start covering interesting aspects of what is being made available in this release.

A good choice for the first post in this series is talking about how snappy performs changes in the system, as that knowledge will be useful in observing and understanding what is going on in your snappy platform. Let’s start with the first operation you will likely do when first interacting with the snappy platform — install:

% sudo snap install ubuntu-calculator-app
120.01 MB / 120.01 MB [================================================================] 100.00 % 1.45 MB/s

This operation is traditionally done on analogous systems in an ephemeral way. That is, the software has either a local or a remote database of options to install, and once the change is requested the platform of choice will start acting on it, with all state for the modification kept in memory. If something doesn’t go so well, such as a reboot or even a crash, the modification is lost... in the best case. Besides being completely lost, it might also be partially applied to the system, with some files spread through the filesystem and perhaps some of the involved hooks run. After the restart, the partial state remains until some manual action is taken.

Snappy instead has an engine that tracks and controls such changes in a persistent manner. All the recent changes, pending or not, may be observed via the API and the command line:

% snap changes
ID   Status  ...  Summary
1    Done    ...  Install "ubuntu-calculator-app" snap

(the spawn and ready date/time columns have been hidden for space)

The output gives an overview of what happened recently in the system, whether pending or not. If one of these changes is unexpectedly interrupted for whatever reason, the daemon will attempt to continue the requested change at the next opportunity.
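
Since the same data is exposed over the REST API mentioned above, you can also query it directly; a sketch, assuming snapd's control socket lives at /run/snapd.socket and a curl recent enough (7.40+) to support --unix-socket:

% sudo curl -s --unix-socket /run/snapd.socket http://localhost/v2/changes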

Continuing is not always possible, though, because there are external factors that such a change will generally depend upon (the snap being available, the system state remaining similar, etc). In those cases, the change will fail, and any relevant modifications performed on the system while attempting to accomplish the defined goal will be undone.

Because such partial states are possible and need to be handled properly by the system, changes are in fact broken down into finer-grained tasks which are also tracked and observable while in progress or after completion. Using the change ID obtained in the former command, we can get a better picture of what that change involved:

% snap changes 1
Status ...  Summary
Done   ...  Download snap "ubuntu-core" from channel "stable"
Done   ...  Mount snap "ubuntu-core"
Done   ...  Copy snap "ubuntu-core" data
Done   ...  Setup snap "ubuntu-core" security profiles
Done   ...  Make snap "ubuntu-core" available
Done   ...  Download snap "ubuntu-calculator-app"
Done   ...  Mount snap "ubuntu-calculator-app"
Done   ...  Copy snap "ubuntu-calculator-app" data
Done   ...  Setup snap "ubuntu-calculator-app" security profiles
Done   ...  Make snap "ubuntu-calculator-app" available

(the spawn and ready date/time columns have been hidden for space)

Here we can observe an interesting implementation detail of the snappy integration into Ubuntu: the ubuntu-core snap is at the moment ~80MB, and contains the software bundled with the snappy platform itself. Instead of having it pre-installed, it’s only pulled in when the first snap is installed.

Another interesting implementation detail that surfaces here is that snaps are in fact mounted, rather than copied into the system as traditional packaging systems do, and they’re mounted read-only. That means the operation of making the content of a snap available in the filesystem is instantaneous and atomic, and so is removing it. There are no partial states for that specific aspect, and the content cannot be modified.

Coming back to the task list, we can see above that all the tasks the change involved are ready and did succeed, as expected from the earlier output we had seen for the change itself. Being more of an introspection view, though, this task view will often also show logs and error messages for the individual tasks, whether in progress or not.

The following view presents a similar change but with an error due to an intentionally corrupted system state that snappy could not recover from (path got a busy mountpoint hacked in):

% sudo snap install xkcd-webserver
[\] Make snap "xkcd-webserver" available to the system
error: cannot perform the following tasks:
- Make snap "xkcd-webserver" available to the system (symlink 13 /snap/xkcd-webserver/current: file exists)

% sudo snap changes 2
Status  ...  Summary
Undone  ...  Download snap "xkcd-webserver" from channel "stable"
Undone  ...  Mount snap "xkcd-webserver"
Undone  ...  Copy snap "xkcd-webserver" data
Undone  ...  Setup snap "xkcd-webserver" security profiles
Error   ...  Make snap "xkcd-webserver" available to the system

.................................................................
Make snap "xkcd-webserver" available to the system

2016-04-20T14:14:30-03:00 ERROR symlink 13 /snap/xkcd-webserver/current: file exists

Note how reassuring that report looks. It says exactly what went wrong, at which stage of the process, and it also points out that all the prior tasks that previously succeeded had their modifications undone. The security profiles were removed, the mount point was taken down, and so on.

This sort of behavior is to be expected of modern operating systems, and is fundamental when considering systems that should work unattended. Whether in a single execution or across restarts and reboots, changes either succeed or they don’t, and the system remains consistent, reliable, observable, and secure.

In the next blog post we’ll see details about the interfaces feature in snappy, which controls aspects of confinement and integration between snaps.

@gniemeyer

Read more
Dustin Kirkland


I happen to have a full mirror of the entire Ubuntu Xenial archive here on a local SSD, and I took the opportunity to run a few numbers...
  • 6: This is our 6th Ubuntu LTS
    • 6.06, 8.04, 10.04, 12.04, 14.04, 16.04
  • 7: With Ubuntu 16.04 LTS, we're supporting 7 CPU architectures
    • armhf, arm64, i386, amd64, powerpc, ppc64el, s390x
  • 25,671: Ubuntu 16.04 LTS comprises 25,671 source packages
    • main, universe, restricted, multiverse
  • 150,562+: Over 150,562 (and counting!) cloud instances of Xenial have launched to date
    • and we haven't even officially released yet!
  • 216,475: A complete archive of all binary .deb packages in Ubuntu 16.04 LTS consists of 216,475 debs.
    • 24,803 arch independent
    • 27,159 armhf
    • 26,845 arm64
    • 28,730 i386
    • 28,902 amd64
    • 27,061 powerpc
    • 26,837 ppc64el
    • 26,138 s390x
  • 1,426,792,926: A total line count of all source packages in Ubuntu 16.04 LTS using cloc yields 1,426,792,926 total lines of source code
  • 250,478,341,568: A complete archive of all debs, across all architectures, in Ubuntu 16.04 LTS requires 250GB of disk space
Yes, that's 1.4 billion lines of source code comprising the entire Ubuntu 16.04 LTS archive.  What an amazing achievement of open source development!

Perhaps my fellow nerds here might be interested in a breakdown of all 1.4 billion lines across 25K source packages, spanning 176 different programming languages, as measured by Al Danial's cloc utility.  Interesting data!
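If you'd like to attempt a similar tally yourself, here's a rough sketch of the approach (assuming a local mirror of .dsc files under ./pool, with dpkg-source and cloc installed, and a lot of patience):

mkdir -p reports
find pool -name '*.dsc' | while read -r dsc; do
  tmp=$(mktemp -d)
  # Unpack each source package, write a per-package cloc report, clean up
  dpkg-source --no-check -x "$dsc" "$tmp/src" >/dev/null 2>&1
  cloc --quiet --report-file="reports/$(basename "$dsc" .dsc).txt" "$tmp/src"
  rm -rf "$tmp"
done
# Merge all per-package reports into a single per-language total
cloc --sum-reports reports/*.txt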


You can see the full list here.  What further insight can you glean?

:-Dustin

Read more
Dustin Kirkland


On July 7, 2010, I received the above email.  In hindsight, this note effectively changed the landscape of cloud computing forever.  I was one of 3 Canonical employees in attendance (the others being Nick Barcet and Neil Levine), among a number of former colleagues (Thierry Carrez, Soren Hansen, Rick Clark), at the first OpenStack Design Summit at the Omni hotel in Austin, Texas, in July of 2010.

These are the only pictures I snapped with my phone (metadata says it was an HTC Hero) of the event, which, almost unbelievably, fit entirely within a single conference room :-)


The "fishbowl" round table discussion format was modeled after Ubuntu Developer Summits.


It was so much fun to see so many unfamiliar, non-Ubuntu people using the fishbowl discussion format.


Also borrowed from Ubuntu Developer Summits was the collaborative, community-sourced note taking in Etherpad-Lite.



Breakfast, in the beautiful Omni lobby.


Lots of natural light, but thankfully, air conditioned.  By the way, does anyone have pictures from the 120°F Whole Foods rooftop event?


My, my, my, how far we've come in 6 short years!

This month's OpenStack Summit returns to Austin, Texas, filling the entire Austin Convention Center and overflowing into at least two nearby hotels, with 5,000+ OpenStack developers, users, and enthusiasts!


In fact, if you're reading this post on insights.ubuntu.com, you're being served by Wordpress and MySQL hosted on a production Ubuntu OpenStack at Canonical.

Welcome back home, OpenStack!

:-Dustin

Read more
Stéphane Graber

The next post in the LXD series is currently blocked on a pending kernel fix, so I figured I'd do an out-of-series post on how to use the LXD API directly.

LXD logo

Setting up the LXD daemon

The LXD REST API can be accessed over either a local Unix socket or over HTTPS. The protocol in both cases is identical, the only difference being that the Unix socket is plain text, relying on filesystem permissions for authentication.

To enable remote connections to your LXD daemon, run:

lxc config set core.https_address "[::]:8443"

This will have it bind all addresses on port 8443.

To set up a trust relationship with a new client, a password is required; you can set one with:

lxc config set core.trust_password <some random password>
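If you'd rather not invent a password, a random one works just as well; for example (assuming openssl is available):

PW=$(openssl rand -base64 24)
lxc config set core.trust_password "$PW"
echo "Trust password: $PW"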

Local or remote

curl over unix socket

As mentioned above, the Unix socket doesn't need authentication, so with a recent version of curl, you can just do the following (the hostname part of the URL is ignored when --unix-socket is used, hence the arbitrary "s/"):

stgraber@castiana:~$ curl --unix-socket /var/lib/lxd/unix.socket s/
{"type":"sync","status":"Success","status_code":200,"metadata":["/1.0"]}

Not the most readable output. You can pretty-print it with jq:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket s/ | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": [
  "/1.0"
 ]
}

curl over the network (and client authentication)

The REST API is authenticated by the use of client certificates. LXD generates one when you first use the command line client, so we’ll be using that one, but you could generate your own with openssl if you wanted to.
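For reference, generating your own pair is a single openssl invocation; a minimal sketch (the file names, key size, and validity below are arbitrary choices):

openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
    -keyout my-client.key -out my-client.crt -subj "/CN=my-lxd-client"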

First, let's confirm that this particular certificate isn't trusted:

curl -s -k --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/1.0 | jq .metadata.auth
"untrusted"

Now, let's tell the server to add it by giving it the password that we set earlier:

stgraber@castiana:~$ curl -s -k --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/1.0/certificates -X POST -d '{"type": "client", "password": "some-password"}' | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {}
}

And now confirm that we are properly authenticated:

stgraber@castiana:~$ curl -s -k --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/1.0 | jq .metadata.auth
"trusted"

And confirm that things look the same as over the Unix socket:

stgraber@castiana:~$ curl -s -k --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/ | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": [
  "/1.0"
 ]
}

Walking through the API

To keep the commands short, all my examples will be using the local Unix socket; you can add the arguments shown above to get this to work over the HTTPS connection.

Note that in an untrusted environment (so anything but localhost), you should also pass curl the expected server certificate so that you can confirm that you're talking to the right machine and aren't the target of a man-in-the-middle attack.
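For example, you can copy the server's certificate over a trusted channel (on the server it lives at /var/lib/lxd/server.crt) and point curl at it instead of using -k; note that depending on how the server certificate was generated, you may need to connect using a name or address actually listed in it:

curl -s --cacert server.crt \
    --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key \
    https://127.0.0.1:8443/1.0 | jq .metadata.auth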

Server information

You can get server runtime information with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0 | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {
  "api_extensions": [],
  "api_status": "stable",
  "api_version": "1.0",
  "auth": "trusted",
  "config": {
   "core.https_address": "[::]:8443",
   "core.trust_password": true,
   "storage.zfs_pool_name": "encrypted/lxd"
  },
  "environment": {
   "addresses": [
    "192.168.54.140:8443",
    "10.212.54.1:8443",
    "[2001:470:b368:4242::1]:8443"
   ],
   "architectures": [
    "x86_64",
    "i686"
   ],
   "certificate": "BIG PEM BLOB",
   "driver": "lxc",
   "driver_version": "2.0.0",
   "kernel": "Linux",
   "kernel_architecture": "x86_64",
   "kernel_version": "4.4.0-18-generic",
   "server": "lxd",
   "server_pid": 26227,
   "server_version": "2.0.0",
   "storage": "zfs",
   "storage_version": "5"
  },
  "public": false
 }
}

Everything except the config section is read-only, and so doesn't need to be sent back when updating. Since a PUT replaces the entire config, unsetting that trust password and having LXD stop listening over HTTPS is just a matter of sending back only the keys we want to keep:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X PUT -d '{"config": {"storage.zfs_pool_name": "encrypted/lxd"}}' a/1.0 | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {}
}

Operations

For anything that could take more than a second, LXD will use a background operation. That’s to make it easier for the client to do multiple requests in parallel and to limit the number of connections to the server.

You can list all current operations with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {
  "running": [
   "/1.0/operations/008bc02e-21a0-4070-a28c-633b79a46517"
  ]
 }
}

And get more details on it with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations/008bc02e-21a0-4070-a28c-633b79a46517 | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {
  "id": "008bc02e-21a0-4070-a28c-633b79a46517",
  "class": "task",
  "created_at": "2016-04-18T22:24:54.469437937+01:00",
  "updated_at": "2016-04-18T22:25:22.42813972+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/blah"
   ]
  },
  "metadata": {
   "download_progress": "48%"
  },
  "may_cancel": false,
  "err": ""
 }
}

In this case, it was me creating a new container called "blah", and the operation is tracking the image download it needs, in this case the Ubuntu 14.04 image.

You can subscribe to all operation notifications by using the /1.0/events websocket, or if your client isn’t that smart, you can just block on the operation with:

curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations/b1f57056-c79b-4d3c-94bf-50b5c47a85ad/wait | jq .

Which will print a copy of the operation status (same as above) once the operation reaches a terminal state (success, failure or canceled).
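The wait endpoint also accepts an optional timeout parameter (in seconds), so a client that doesn't want to block indefinitely can put an upper bound on the wait (quote the URL, since ? is special to the shell):

curl -s --unix-socket /var/lib/lxd/unix.socket "a/1.0/operations/b1f57056-c79b-4d3c-94bf-50b5c47a85ad/wait?timeout=30" | jq .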

The other endpoints

The REST API currently has the following endpoints:

  • /
    • /1.0
      • /1.0/certificates
        • /1.0/certificates/<fingerprint>
      • /1.0/containers
        • /1.0/containers/<name>
          • /1.0/containers/<name>/exec
        • /1.0/containers/<name>/files
        • /1.0/containers/<name>/snapshots
        • /1.0/containers/<name>/snapshots/<name>
        • /1.0/containers/<name>/state
        • /1.0/containers/<name>/logs
        • /1.0/containers/<name>/logs/<logfile>
      • /1.0/events
      • /1.0/images
        • /1.0/images/<fingerprint>
          • /1.0/images/<fingerprint>/export
      • /1.0/images/aliases
        • /1.0/images/aliases/<name>
      • /1.0/networks
        • /1.0/networks/<name>
      • /1.0/operations
        • /1.0/operations/<uuid>
          • /1.0/operations/<uuid>/wait
        • /1.0/operations/<uuid>/websocket
      • /1.0/profiles
        • /1.0/profiles/<name>

Detailed documentation on the various actions for each of them can be found here.

Basic container life-cycle

Going through absolutely everything above would make this blog post enormous, so let's just focus on the most basic things: creating a container, starting it, dealing with files a bit, creating a snapshot, and deleting the whole thing.

Create

To create a container named “xenial” from an Ubuntu 16.04 image coming from https://cloud-images.ubuntu.com/daily (also known as ubuntu-daily:16.04 in the client), you need to run:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X POST -d '{"name": "xenial", "source": {"type": "image", "protocol": "simplestreams", "server": "https://cloud-images.ubuntu.com/daily", "alias": "16.04"}}' a/1.0/containers | jq .
{
 "type": "async",
 "status": "Operation created",
 "status_code": 100,
 "metadata": {
  "id": "e2714931-470e-452a-807c-c1be19cdff0d",
  "class": "task",
  "created_at": "2016-04-18T22:36:20.935649438+01:00",
  "updated_at": "2016-04-18T22:36:20.935649438+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": null,
  "may_cancel": false,
  "err": ""
 },
 "operation": "/1.0/operations/e2714931-470e-452a-807c-c1be19cdff0d"
}

This confirms that the container creation request was received. We can check for progress with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations/e2714931-470e-452a-807c-c1be19cdff0d | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {
  "id": "e2714931-470e-452a-807c-c1be19cdff0d",
  "class": "task",
  "created_at": "2016-04-18T22:36:20.935649438+01:00",
  "updated_at": "2016-04-18T22:36:31.135038483+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": {
  "download_progress": "19%"
 },
 "may_cancel": false,
 "err": ""
 }
}

And finally wait until it’s done with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations/e2714931-470e-452a-807c-c1be19cdff0d/wait | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {
  "id": "e2714931-470e-452a-807c-c1be19cdff0d",
  "class": "task",
  "created_at": "2016-04-18T22:36:20.935649438+01:00",
  "updated_at": "2016-04-18T22:38:01.385511623+01:00",
  "status": "Success",
  "status_code": 200,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": {
   "download_progress": "100%"
  },
  "may_cancel": false,
  "err": ""
 }
}

Start

Starting the container is done by modifying its running state:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X PUT -d '{"action": "start"}' a/1.0/containers/xenial/state | jq .
{
 "type": "async",
 "status": "Operation created",
 "status_code": 100,
 "metadata": {
  "id": "614ac9f0-f0fc-4351-9e6f-14710fd93542",
  "class": "task",
  "created_at": "2016-04-18T22:39:42.766830946+01:00",
  "updated_at": "2016-04-18T22:39:42.766830946+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": null,
  "may_cancel": false,
  "err": ""
 },
 "operation": "/1.0/operations/614ac9f0-f0fc-4351-9e6f-14710fd93542"
}

If you’re doing this by hand as I am right now, there’s no way you can actually access that operation and wait for it to finish as it’s very very quick and data about past operations disappears 5 seconds after they’re done.

You can, however, check the container state:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/state | jq .metadata.status
"Running"

Or even get its IP address(es) with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/state | jq .metadata.network.eth0.addresses
[
 {
  "family": "inet",
  "address": "10.212.54.43",
  "netmask": "24",
  "scope": "global"
 },
 {
  "family": "inet6",
  "address": "2001:470:b368:4242:216:3eff:fe17:331c",
  "netmask": "64",
  "scope": "global"
 },
 {
  "family": "inet6",
  "address": "fe80::216:3eff:fe17:331c",
  "netmask": "64",
  "scope": "link"
 }
]

Read a file

Reading a file from the container is ridiculously easy:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/files?path=/etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Push a file

Pushing a file is only slightly more involved, because you need to set the Content-Type to octet-stream:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X POST -H "Content-Type: application/octet-stream" -d 'abc' a/1.0/containers/xenial/files?path=/tmp/a
{"type":"sync","status":"Success","status_code":200,"metadata":{}}

We can then confirm it worked with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/files?path=/tmp/a
abc
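To push an existing file rather than an inline string, use --data-binary so that curl doesn't alter the payload (plain -d would strip newlines when reading from a file):

curl -s --unix-socket /var/lib/lxd/unix.socket -X POST \
    -H "Content-Type: application/octet-stream" \
    --data-binary @/etc/hostname \
    a/1.0/containers/xenial/files?path=/tmp/hostname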

Snapshot

To make a snapshot, just run:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X POST -d '{"name": "my-snapshot"}' a/1.0/containers/xenial/snapshots | jq .
{
 "type": "async",
 "status": "Operation created",
 "status_code": 100,
 "metadata": {
  "id": "d68141de-0c13-419c-a21c-13e30de29154",
  "class": "task",
  "created_at": "2016-04-18T22:54:04.148986484+01:00",
  "updated_at": "2016-04-18T22:54:04.148986484+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": null,
  "may_cancel": false,
  "err": ""
 },
 "operation": "/1.0/operations/d68141de-0c13-419c-a21c-13e30de29154"
}

And you can then get all the details about it:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/snapshots/my-snapshot | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {
  "architecture": "x86_64",
  "config": {
   "volatile.base_image": "0b06c2858e2efde5464906c93eb9593a29bf46d069cf8d007ada81e5ab80613c",
   "volatile.eth0.hwaddr": "00:16:3e:17:33:1c",
   "volatile.last_state.idmap": "[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]"
  },
  "created_at": "2016-04-18T21:54:04Z",
  "devices": {
   "root": {
    "path": "/",
    "type": "disk"
   }
  },
  "ephemeral": false,
  "expanded_config": {
   "volatile.base_image": "0b06c2858e2efde5464906c93eb9593a29bf46d069cf8d007ada81e5ab80613c",
   "volatile.eth0.hwaddr": "00:16:3e:17:33:1c",
   "volatile.last_state.idmap": "[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]"
  },
  "expanded_devices": {
   "eth0": {
    "name": "eth0",
    "nictype": "bridged",
    "parent": "lxdbr0",
    "type": "nic"
   },
   "root": {
    "path": "/",
    "type": "disk"
   }
  },
  "name": "xenial/my-snapshot",
  "profiles": [
   "default"
  ],
  "stateful": false
 }
}

Delete

You can’t delete a running container, so first you must stop it with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X PUT -d '{"action": "stop", "force": true}' a/1.0/containers/xenial/state | jq .
{
 "type": "async",
 "status": "Operation created",
 "status_code": 100,
 "metadata": {
  "id": "97945ec9-f9b0-4fa8-aaba-06e41a9bc2a9",
  "class": "task",
  "created_at": "2016-04-18T22:56:18.28952729+01:00",
  "updated_at": "2016-04-18T22:56:18.28952729+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": null,
  "may_cancel": false,
  "err": ""
 },
 "operation": "/1.0/operations/97945ec9-f9b0-4fa8-aaba-06e41a9bc2a9"
}

Then you can delete it with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X DELETE a/1.0/containers/xenial | jq .
{
 "type": "async",
 "status": "Operation created",
 "status_code": 100,
 "metadata": {
  "id": "439bf4a1-e056-4b76-86ad-bff06169fce1",
  "class": "task",
  "created_at": "2016-04-18T22:56:22.590239576+01:00",
  "updated_at": "2016-04-18T22:56:22.590239576+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": null,
  "may_cancel": false,
  "err": ""
 },
 "operation": "/1.0/operations/439bf4a1-e056-4b76-86ad-bff06169fce1"
}

And confirm it’s gone:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/containers/xenial | jq .
{
 "error": "not found",
 "error_code": 404,
 "type": "error"
}

Conclusion

The LXD API has been designed to be simple yet powerful. It can easily be used through even the most simple client, but it also supports advanced features to allow more complex clients to be very efficient.

Our REST API is stable, which means that any change we make to it will be fully backward compatible with the API as it was in LXD 2.0. We will only be making additions to it; no removals or changes of behavior for the existing endpoints.

Support for new features can be detected by the client by looking at the “api_extensions” list from GET /1.0. We currently do not advertise any but will no doubt make use of this very soon.
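Checking for a given extension is then just a matter of inspecting that list:

curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0 | jq .metadata.api_extensions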

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more