Canonical Voices

Posts tagged with 'ubuntu-server'

Dustin Kirkland

I'm delighted to share the slides from our joint IBM and Canonical webinar about Ubuntu on IBM POWER8 and LinuxOne servers.  You can download the PDF here, or tab through the slides embedded below.  The audio/video recording should be available tomorrow.



Cheers,
:-Dustin

Dustin Kirkland


I'm thrilled to introduce Docker 1.10.3, available on every Ubuntu architecture, for Ubuntu 16.04 LTS, and announce the General Availability of Ubuntu Fan Networking!

That's Ubuntu Docker binaries and Ubuntu Docker images for:
  • armhf (rpi2, et al. IoT devices)
  • arm64 (Cavium, et al. servers)
  • i686 (does anyone seriously still run 32-bit Intel servers?)
  • amd64 (most servers and clouds under the sun)
  • ppc64el (OpenPower and IBM POWER8 machine learning super servers)
  • s390x (IBM System Z LinuxOne super uptime mainframes)
That's Docker-Docker-Docker-Docker-Docker-Docker, from the smallest Raspberry Pis to the biggest IBM mainframes in the world today!  Never more than one 'sudo apt install docker.io' command away.

Moreover, we now have Docker running inside of LXD!  Containers all the way down.  Application containers (e.g. Docker), inside of Machine containers (e.g. LXD), inside of Virtual Machines (e.g. KVM), inside of a public or private cloud (e.g. Azure, OpenStack), running on bare metal (take your pick).

Let's have a look at launching a Docker application container inside of a LXD machine container:

kirkland@x250:~⟫ lxc launch ubuntu-daily:x -p default -p docker
Creating magical-damion
Starting magical-damion
kirkland@x250:~⟫ lxc list | grep RUNNING
| magical-damion | RUNNING | 10.16.4.52 (eth0) | | PERSISTENT | 0 |
kirkland@x250:~⟫ lxc exec magical-damion bash
root@magical-damion:~# apt update >/dev/null 2>&1 ; apt install -y docker.io >/dev/null 2>&1
root@magical-damion:~# docker run -it ubuntu bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
759d6771041e: Pull complete
8836b825667b: Pull complete
c2f5e51744e6: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:b4dbab2d8029edddfe494f42183de20b7e2e871a424ff16ffe7b15a31f102536
Status: Downloaded newer image for ubuntu:latest
root@0577bd7d5db1:/# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:ac:11:00:02
inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1296 (1.2 KB) TX bytes:648 (648.0 B)


Oh, and let's talk about networking...  We're also pleased to announce the general availability of Ubuntu Fan networking -- specially designed to connect all of your Docker containers spread across your network.  Ubuntu's Fan networking feature is an easy way to make every Docker container on your local network addressable by every other Docker host and container on the same network.  It's high performance, super simple, utterly deterministic, and we've tested it on every major public cloud as well as OpenStack and our private networks.

Installing Ubuntu's Docker package will also install the ubuntu-fan package, which provides an interactive setup script, fanatic, should you choose to join the Fan.  Simply run 'sudo fanatic' and answer the questions.  You can revert your Fan networking setup at any time with 'sudo fanatic deconfigure'.

kirkland@x250:~$ sudo fanatic 
Welcome to the fanatic fan networking wizard. This will help you set
up an example fan network and optionally configure docker and/or LXD to
use this network. See fanatic(1) for more details.
Configure fan underlay (hit return to accept, or specify alternative) [10.0.0.0/16]:
Configure fan overlay (hit return to accept, or specify alternative) [250.0.0.0/8]:
Create LXD networking for underlay:10.0.0.0/16 overlay:250.0.0.0/8 [Yn]: n
Create docker networking for underlay:10.0.0.0/16 overlay:250.0.0.0/8 [Yn]: Y
Test docker networking for underlay:10.0.0.45/16 overlay:250.0.0.0/8
(NOTE: potentially triggers large image downloads) [Yn]: Y
local docker test: creating test container ...
34710d2c9a856f4cd7d8aa10011d4d2b3d893d1c3551a870bdb9258b8f583246
test master: ping test (250.0.45.0) ...
test slave: ping test (250.0.45.1) ...
test master: ping test ... PASS
test master: short data test (250.0.45.1 -> 250.0.45.0) ...
test slave: ping test ... PASS
test slave: short data test (250.0.45.0 -> 250.0.45.1) ...
test master: short data ... PASS
test slave: short data ... PASS
test slave: long data test (250.0.45.0 -> 250.0.45.1) ...
test master: long data test (250.0.45.1 -> 250.0.45.0) ...
test master: long data ... PASS
test slave: long data ... PASS
local docker test: destroying test container ...
fanatic-test
fanatic-test
local docker test: test complete PASS (master=0 slave=0)
This host IP address: 10.0.0.45

I've run 'sudo fanatic' here on a couple of machines on my network -- x250 (10.0.0.45) and masterbr (10.0.0.8) -- and now I'm going to launch a Docker container on each of those two machines, obtain each container's IP address on the Fan (250.x.y.z), install iperf, and test the connectivity and bandwidth between them (on my gigabit home network).  You'll see that we get 900 Mbps+ of throughput:

kirkland@x250:~⟫ sudo docker run -it ubuntu bash
root@c22cf0d8e1f7:/# apt update >/dev/null 2>&1 ; apt install -y iperf >/dev/null 2>&1
root@c22cf0d8e1f7:/# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:fa:00:2d:00
inet addr:250.0.45.0 Bcast:0.0.0.0 Mask:255.0.0.0
inet6 addr: fe80::42:faff:fe00:2d00/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:6423 errors:0 dropped:0 overruns:0 frame:0
TX packets:4120 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:22065202 (22.0 MB) TX bytes:227225 (227.2 KB)

root@c22cf0d8e1f7:/# iperf -c 250.0.8.0
multicast ttl failed: Invalid argument
------------------------------------------------------------
Client connecting to 250.0.8.0, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[ 3] local 250.0.45.0 port 54274 connected with 250.0.8.0 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.05 GBytes 902 Mbits/sec

And the second machine:
kirkland@masterbr:~⟫ sudo docker run -it ubuntu bash
root@effc8fe2513d:/# apt update >/dev/null 2>&1 ; apt install -y iperf >/dev/null 2>&1
root@effc8fe2513d:/# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:fa:00:08:00
inet addr:250.0.8.0 Bcast:0.0.0.0 Mask:255.0.0.0
inet6 addr: fe80::42:faff:fe00:800/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:7659 errors:0 dropped:0 overruns:0 frame:0
TX packets:3433 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:22131852 (22.1 MB) TX bytes:189875 (189.8 KB)

root@effc8fe2513d:/# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 250.0.8.0 port 5001 connected with 250.0.45.0 port 54274
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.05 GBytes 899 Mbits/sec


Finally, let's have another long hard look at the image from the top of this post.  Download it in full resolution to study very carefully what's happening here, because it's pretty [redacted] amazing!


Here, we have a Byobu session, split into 6 panes (Shift-F2 5x times, Shift-F8 6x times).  In each pane, we have an SSH session to an Ubuntu 16.04 LTS server, spread across 6 different architectures -- armhf, arm64, i686, amd64, ppc64el, and s390x.  I used the Shift-F9 key to simultaneously run the same commands in each and every window.  Here are the commands I ran:

clear
lxc launch ubuntu-daily:x -p default -p docker
lxc list | grep RUNNING
uname -a
dpkg -l docker.io | grep docker.io
sudo docker images | grep -m1 ubuntu
sudo docker run -it ubuntu bash
apt update >/dev/null 2>&1 ; apt install -y net-tools >/dev/null 2>&1
ifconfig eth0
exit

That's right.  We just launched Ubuntu LXD containers, as well as Docker containers, on every Ubuntu 16.04 LTS architecture.  How's that for Ubuntu everywhere!?!

Ubuntu 16.04 LTS will be one hell of a release!

:-Dustin

Dustin Kirkland


I happen to have a full mirror of the entire Ubuntu Xenial archive here on a local SSD, and I took the opportunity to run a few numbers...
  • 6: This is our 6th Ubuntu LTS
    • 6.06, 8.04, 10.04, 12.04, 14.04, 16.04
  • 7: With Ubuntu 16.04 LTS, we're supporting 7 CPU architectures
    • armhf, arm64, i386, amd64, powerpc, ppc64el, s390x
  • 25,671: Ubuntu 16.04 LTS is comprised of 25,671 source packages
    • main, universe, restricted, multiverse
  • 150,562+: Over 150,562 (and counting!) cloud instances of Xenial have launched to date
    • and we haven't even officially released yet!
  • 216,475: A complete archive of all binary .deb packages in Ubuntu 16.04 LTS consists of 216,475 debs.
    • 24,803 arch independent
    • 27,159 armhf
    • 26,845 arm64
    • 28,730 i386
    • 28,902 amd64
    • 27,061 powerpc
    • 26,837 ppc64el
    • 26,138 s390x
  • 1,426,792,926: A total line count of all source packages in Ubuntu 16.04 LTS using cloc yields 1,426,792,926 total lines of source code
  • 250,478,341,568: A complete archive of all debs, for all architectures, in Ubuntu 16.04 LTS requires 250 GB of disk space
Yes, that's 1.4 billion lines of source code comprising the entire Ubuntu 16.04 LTS archive.  What an amazing achievement of open source development!

Perhaps my fellow nerds here might be interested in a breakdown of all 1.4 billion lines across 25K source packages, and throughout 176 different programming languages, as measured by Al Danial's cloc utility.  Interesting data!


You can see the full list here.  What further insight can you glean?

:-Dustin

Dustin Kirkland


On July 7, 2010, I received the above email.  In hindsight, this note effectively changed the landscape of cloud computing forever.  I was one of 3 Canonical employees in attendance (along with Nick Barcet and Neil Levine) and among a number of former colleagues (Thierry Carrez, Soren Hansen, Rick Clark) at the first OpenStack Design Summit at the Omni hotel in Austin, Texas, in July of 2010.

These are the only pictures I snapped with my phone (metadata says it was an HTC Hero) of the event, which, almost unbelievably, fit entirely within a single conference room :-)


The "fishbowl" round table discussion format was modeled after Ubuntu Developer Summits.


It was so much fun to see so many unfamiliar, non-Ubuntu people using the fishbowl discussion format.


Also borrowed from Ubuntu Developer Summits was the collaborative, community-sourced note taking in Etherpad-Lite.



Breakfast, in the beautiful Omni lobby.


Lots of natural light, but thankfully, air conditioned.  By the way, does anyone have pictures from the 120°F Whole Foods rooftop event?


My, my, my, how far we've come in 6 short years!

This month's OpenStack Summit returns to Austin, Texas, filling the entire Austin Convention Center and overflowing into at least two nearby hotels, with 5,000+ OpenStack developers, users, and enthusiasts!


In fact, if you're reading this post on insights.ubuntu.com, you're being served by WordPress and MySQL hosted on a production Ubuntu OpenStack at Canonical.

Welcome back home, OpenStack!

:-Dustin

Dustin Kirkland

As announced last week, Microsoft and Canonical have worked together to bring Ubuntu's userspace natively into Windows 10.

As of today, Windows 10 Insiders can now take Ubuntu on Windows for a test drive!  Here's how...

1) You need to have a system running today's 64-bit build of Windows 10 (Build 14316).


2) To do so, you may need to enroll into the Windows Insider program here, insider.windows.com.


3) You need to notify your Windows desktop that you're a Windows Insider, under "System Settings --> Advanced Windows Update options"


4) You need to set your update ambition to the far right, also known as "the fast ring".


5) You need to enable "developer mode", as this new feature is very pointedly directed specifically at developers.


6) You need to check for updates, apply all updates, and restart.


7) You need to turn on the new Windows feature, "Windows Subsystem for Linux (Beta)".  Note (again) that you need a 64-bit version of Windows!  Without that, you won't see the new option.


8) You need to reboot again.  (Windows sure has a fetish for rebooting!)


9) You press the start button and type "bash".


10) The first time you run "bash.exe", you'll accept the terms of service, download Ubuntu, and then you're off and running!



If you screw something up and want to start over, simply open a Windows command shell and run 'lxrun /uninstall /full', then just run bash again.

For bonus points, you might also like to enable the Ubuntu monospace font in your console.  Here's how!

a) Download the Ubuntu monospace font, from font.ubuntu.com.


b) Install the Ubuntu monospace font, by opening the zip file you downloaded, finding UbuntuMono-R.ttf, double clicking on it, and then clicking Install.


c) Enable the Ubuntu monospace font for the command console in the Windows registry.  Open regedit and find this key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Console\TrueTypeFont and add a new string value named "000" with value data "Ubuntu Mono".
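
If you prefer the command line, the same value can be set with reg.exe from an administrative Windows command prompt; this is simply a sketch of the equivalent of the regedit steps above:

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Console\TrueTypeFont" /v 000 /t REG_SZ /d "Ubuntu Mono"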




d) Edit your command console preferences to enable the Ubuntu monospace font.

Cheers!
Dustin

Dustin Kirkland

Update: Here's how to get started using Ubuntu on Windows

See also Scott Hanselman's blog here
I'm in San Francisco this week, attending Microsoft's Build developer conference, as a sponsored guest of Microsoft.



That's perhaps a bit odd for me, as I hadn't used Windows in nearly 16 years.  But that changed a few months ago, as I embarked on a super secret (and totally mind boggling!) project between Microsoft and Canonical, as unveiled today in a demo during Kevin Gallo's opening keynote of the Build conference....



An Ubuntu user space and bash shell, running natively in a Windows 10 cmd.exe console!


Did you get that?!?  Don't worry, it took me a few laps around that track, before I fully comprehended it when I first heard such crazy talk a few months ago :-)

Here, let's break it down slowly...
  1. Windows 10 users
  2. Can open the Windows Start menu
  3. And type "bash" [enter]
  4. Which opens a cmd.exe console
  5. Running Ubuntu's /bin/bash
  6. With full access to all of Ubuntu user space
  7. Yes, that means apt, ssh, rsync, find, grep, awk, sed, sort, xargs, md5sum, gpg, curl, wget, apache, mysql, python, perl, ruby, php, gcc, tar, vim, emacs, diff, patch...
  8. And most of the tens of thousands of binary packages available in the Ubuntu archives!
"Right, so just Ubuntu running in a virtual machine?"  Nope!  This isn't a virtual machine at all.  There's no Linux kernel booting in a VM under a hypervisor.  It's just the Ubuntu user space.

"Ah, okay, so this is Ubuntu in a container then?"  Nope!  This isn't a container either.  It's native Ubuntu binaries running directly in Windows.

"Hum, well it's like cygwin perhaps?"  Nope!  Cygwin includes open source utilities are recompiled from source to run natively in Windows.  Here, we're talking about bit-for-bit, checksum-for-checksum Ubuntu ELF binaries running directly in Windows.

[long pause]

"So maybe something like a Linux emulator?"  Now you're getting warmer!  A team of sharp developers at Microsoft has been hard at work adapting some Microsoft research technology to basically perform real time translation of Linux syscalls into Windows OS syscalls.  Linux geeks can think of it sort of the inverse of "wine" -- Ubuntu binaries running natively in Windows.  Microsoft calls it their "Windows Subsystem for Linux".  (No, it's not open source at this time.)

Oh, and it's totally shit hot!  The sysbench utility is showing nearly equivalent cpu, memory, and io performance.

So as part of the engineering work, I needed to wrap the stock Ubuntu root filesystem into a Windows application package (.appx) file for suitable upload to the Windows Store.  That required me to use Microsoft Visual Studio to clone a sample application, edit a few dozen XML files, create a bunch of icon .png's of various sizes, and so on.

Not being a Windows developer, I struggled and fought with Visual Studio on this Windows desktop for a few hours, until I was about ready to smash my coffee mug through the damn screen!

Instead, I pressed the Windows key, typed "bash", hit enter.  Then I found the sample application directory in /mnt/c/Users/Kirkland/Downloads, and copied it using "cp -a".  I used find | xargs | rename to update a bunch of filenames.  And a quick grep | xargs | sed to comprehensively search and replace s/SampleApp/UbuntuOnWindows/. And Ubuntu's convert utility quickly resized a bunch of icons.   Then I let Visual Studio do its thing, compiling the package and uploading to the Windows Store.  Voila!
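
For the curious, here's a rough sketch of the sort of commands I mean; the paths and file names below are illustrative placeholders, not the exact ones I used:

cd /mnt/c/Users/Kirkland/Downloads
cp -a SampleApp UbuntuOnWindows                                            # copy the sample application
cd UbuntuOnWindows
find . -name 'SampleApp*' | xargs rename 's/SampleApp/UbuntuOnWindows/'    # rename a bunch of files
grep -rl SampleApp . | xargs sed -i 's/SampleApp/UbuntuOnWindows/g'        # comprehensive search and replace
convert Logo.png -resize 44x44 Logo-44.png                                 # resize an icon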

Did you catch that bit about /mnt/c...  That's pretty cool...  All of your Windows drives, like C:, are mounted read/write directly under /mnt.  And, vice versa, you can see all of your Ubuntu filesystem from Windows Explorer in C:\Users\Kirkland\AppData\Local\Lxss\rootfs\


Meanwhile, I also needed to ssh over to some of my other Ubuntu systems to get some work done.  No need for Putty!  Just ssh directly from within the Ubuntu shell.



Of course, apt install and upgrade work as expected.



Is everything working exactly as expected?  No, not quite.  Not yet, at least.  The vast majority of the LTP passes and works well.  But there are some imperfections still, especially around ttys and the vt100.  My beloved byobu, screen, and tmux don't quite work yet, but they're getting close!

And while the current image is Ubuntu 14.04 LTS, we're expecting to see Ubuntu 16.04 LTS replacing Ubuntu 14.04 in the Windows Store very, very soon.

Finally, I imagine some of you -- long time Windows and Ubuntu users alike -- are still wondering, perhaps, "Why?!?"  Having dedicated most of the past two decades of my career to free and open source software, this is an almost surreal endorsement by Microsoft on the importance of open source to developers.  Indeed, what a fantastic opportunity to bridge the world of free and open source technology directly into any Windows 10 desktop on the planet.  And what a wonderful vector into learning and using more Ubuntu and Linux in public clouds like Azure.  From Microsoft's perspective, a variety of surveys and user studies have pointed to the demand for bash and Linux tools -- very specifically, Ubuntu -- to be available in Windows, without resource-heavy full virtualization.

So if you're a Windows Insider and have access to the early beta of this technology, we certainly hope you'll try it out!  Let us know what you think!

If you want to hear more, hopefully you'll tune into the Channel 9 Panel discussion at 16:30 PDT on March 30, 2016.

Cheers,
Dustin

Dustin Kirkland

Still have questions about Ubuntu on Windows?
Watch this Channel 9 session, recorded live at Build this week, hosted by Scott Hanselman, with questions answered by Windows kernel developers Russ Alexander, Ben Hillis, and myself representing Canonical and Ubuntu!

For fun, watch the crowd grow in the background over the 30-minute session!

And here's another recorded session with a demo by Rich Turner and Russ Alexander.  The real light bulb goes off at about 8:01.


Cheers,
:-Dustin

Dustin Kirkland


We at Canonical have conducted a legal review, including discussion with the industry's leading software freedom legal counsel, of the licenses that apply to the Linux kernel and to ZFS.

And in doing so, we have concluded that we are acting within the rights granted and in compliance with the terms of both of those licenses.  Others have independently reached the same conclusion.  Differing opinions exist, but please bear in mind that these are opinions.

While the CDDL and GPLv2 are both "copyleft" licenses, they have different scope.  The CDDL applies to all files under the CDDL, while the GPLv2 applies to derivative works.

The CDDL cannot apply to the Linux kernel because zfs.ko is a self-contained file system module -- the kernel itself is quite obviously not a derivative work of this new file system.

And zfs.ko, as a self-contained file system module, is clearly not a derivative work of the Linux kernel but rather quite obviously a derivative work of OpenZFS and OpenSolaris.  Equivalent exceptions have existed for many years, for various other stand alone, self-contained, non-GPL kernel modules.

Our conclusion is good for Ubuntu users, good for Linux, and good for all of free and open source software.

As we have already reached the conclusion, we are not interested in debating license compatibility, but of course welcome the opportunity to discuss the technology.

Cheers,
Dustin

EDIT: This post was updated to link to the supportive position paper from Eben Moglen of the SFLC, an amicus brief from James Bottomley, as well as the contrarian position from Bradley Kuhn and the SFC.

Dustin Kirkland



I had the opportunity to speak at Container World 2016 in Santa Clara yesterday.  Thanks in part to the Netflix guys who preceded me, the room was absolutely packed!

You can download a PDF of my slides here, or flip through them embedded below.

I'd really encourage you to try the demo instructions of LXD toward the end!


:-Dustin

Dustin Kirkland


Ubuntu 16.04 LTS (Xenial) is only a few short weeks away, and with it comes one of the most exciting new features Linux has seen in a very long time...

ZFS -- baked directly into Ubuntu -- supported by Canonical.

What is ZFS?

ZFS is a combination of a volume manager (like LVM) and a filesystem (like ext4, xfs, or btrfs).

ZFS is one of the most beloved features of Solaris, universally coveted by every Linux sysadmin with a Solaris background.  To our delight, we're happy to make OpenZFS available on every Ubuntu system.  Ubuntu's reference guide for ZFS can be found here, and these are a few of the killer features:
  • snapshots
  • copy-on-write cloning
  • continuous integrity checking against data corruption
  • automatic repair
  • efficient data compression.
These features truly make ZFS the perfect filesystem for containers.
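
To make those features concrete, here's a minimal sketch at the command line, assuming a pool named tank (substitute your own pool and dataset names):

sudo zfs create -o compression=lz4 tank/data        # dataset with transparent compression
sudo zfs snapshot tank/data@before                  # instant, space-efficient snapshot
sudo zfs clone tank/data@before tank/data-clone     # writable, copy-on-write clone
sudo zfs rollback tank/data@before                  # roll back if something goes wrong
sudo zpool scrub tank                               # integrity checking and automatic repair in action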

What does "support" mean?

  • You'll find zfs.ko automatically built and installed on your Ubuntu systems.  No more DKMS-built modules!
$ locate zfs.ko
/lib/modules/4.4.0-4-generic/kernel/zfs/zfs/zfs.ko
  • You'll see the module loaded automatically if you use it.

$ lsmod | grep zfs
zfs 2801664 11
zunicode 331776 1 zfs
zcommon 57344 1 zfs
znvpair 90112 2 zfs,zcommon
spl 102400 3 zfs,zcommon,znvpair
zavl 16384 1 zfs

  • The user space zfsutils-linux package will be included in Ubuntu Main, with security updates provided by Canonical (as soon as this MIR is completed).
  • As always, industry leading, enterprise class technical support is available from Canonical with Ubuntu Advantage services.

How do I get started?

It's really quite simple!  Here are a few commands to get you up and running with ZFS and LXD in 60 seconds or less.

First, make sure you're running Ubuntu 16.04 (Xenial).

$ head -n1 /etc/issue
Ubuntu Xenial Xerus (development branch) \n \l

Now, let's install lxd and zfsutils-linux, if you haven't already:

$ sudo apt install lxd zfsutils-linux

Next, let's use the interactive lxd init command to set up LXD and ZFS.  In the example below, I'm simply using a sparse, loopback file for the ZFS pool.  For best results (and what I use on my laptop and production servers), use a raw SSD partition or device.

$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 2
Would you like LXD to be available over the network (yes/no)? no
LXD has been successfully configured.

We can check our ZFS pool now:

$ sudo zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
lxd 1.98G 450K 1.98G - 0% 0% 1.00x ONLINE -

$ sudo zpool status
pool: lxd
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
lxd ONLINE 0 0 0
/var/lib/lxd/zfs.img ONLINE 0 0 0
errors: No known data errors

$ lxc config get storage.zfs_pool_name
storage.zfs_pool_name: lxd

Finally, let's import the Ubuntu LXD image, and launch a few containers.  Note how fast containers launch, which is enabled by the ZFS cloning and copy-on-write features:

$ newgrp lxd
$ lxd-images import ubuntu --alias ubuntu
Downloading the GPG key for http://cloud-images.ubuntu.com
Progress: 48 %
Validating the GPG signature of /tmp/tmpa71cw5wl/download.json.asc
Downloading the image.
Image manifest: http://cloud-images.ubuntu.com/server/releases/trusty/release-20160201/ubuntu-14.04-server-cloudimg-amd64.manifest
Image imported as: 54c8caac1f61901ed86c68f24af5f5d3672bdc62c71d04f06df3a59e95684473
Setup alias: ubuntu

$ for i in $(seq 1 5); do lxc launch ubuntu; done
...
$ lxc list
+-------------------------+---------+-------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | EPHEMERAL | SNAPSHOTS |
+-------------------------+---------+-------------------+------+-----------+-----------+
| discordant-loria | RUNNING | 10.0.3.130 (eth0) | | NO | 0 |
+-------------------------+---------+-------------------+------+-----------+-----------+
| fictive-noble | RUNNING | 10.0.3.91 (eth0) | | NO | 0 |
+-------------------------+---------+-------------------+------+-----------+-----------+
| interprotoplasmic-essie | RUNNING | 10.0.3.242 (eth0) | | NO | 0 |
+-------------------------+---------+-------------------+------+-----------+-----------+
| nondamaging-cain | RUNNING | 10.0.3.9 (eth0) | | NO | 0 |
+-------------------------+---------+-------------------+------+-----------+-----------+
| untreasurable-efrain | RUNNING | 10.0.3.89 (eth0) | | NO | 0 |
+-------------------------+---------+-------------------+------+-----------+-----------+

Super easy, right?

Cheers,
:-Dustin

Dustin Kirkland


There's no shortage of excitement, controversy, and readership, any time you can work "Docker" into a headline these days.  Perhaps a bit like "Donald Trump", but for CIO tech blogs and IT news -- a real hot button.  Hey, look, I even did it myself in the title of this post!

Sometimes an article even starts out about CoreOS, but gets diverted into a discussion about Docker, like this one, where shykes (Docker's founder and CTO) announced that Docker's default image would be moving away from Ubuntu to Alpine Linux.


I have personally been Canonical's business and technical point of contact with Docker Inc since September of 2013, when I co-presented at an OpenStack Meetup in Austin, Texas, with Ben Golub and Nick Stinemates of Docker.  I can tell you that this casual declaration, in an unrelated Hacker News thread, came as a surprise to nearly all of us -- along with most of the rest of the Docker community!

Docker's default container image is certainly Docker's decision to make.  But it would be prudent to examine a few facts:

(1) Check DockerHub and you may notice that while Busybox (Alpine Linux) has surpassed Ubuntu in the number of downloads (66M to 40M), Ubuntu is still by far the most "popular" by number of "stars" -- likes, favorites, +1's, whatever (3.2K to 499).

(2) Ubuntu's compressed, minimal root tarball is 59 MB, which is what is downloaded over the Internet.  That's different from the 188 MB uncompressed root filesystem, which has been quoted a number of times in the press.

(3) The real magic of Docker is such that you only ever download that base image, one time!  And you only store one copy of the uncompressed root filesystem on your disk! Just once, sudo docker pull ubuntu, on your laptop at home or work, and then launch thousands of images at a coffee shop or airport lounge with its spotty wifi.  Build derivative images, FROM ubuntu, etc. and you only ever store the incremental differences.
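
Here's a trivial sketch of what that layering means in practice (the package chosen is just an example):

# Every image built FROM ubuntu shares the same base layers; only the
# changes introduced by this Dockerfile are stored and transferred.
FROM ubuntu
RUN apt-get update && apt-get install -y curl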

Actually, I encourage you to test that out yourself...  I just launched a t2.micro -- Amazon's cheapest instance type with the lowest networking bandwidth.  It took 15.938s to sudo apt install docker.io.  And it took 9.230s to sudo docker pull ubuntu.  It takes less time to download Ubuntu than to install Docker!

ubuntu@ip-172-30-0-129:~⟫ time sudo apt install docker.io -y
...
real 0m15.938s
user 0m2.146s
sys 0m0.913s

As compared to...

ubuntu@ip-172-30-0-129:~⟫ time sudo docker pull ubuntu
latest: Pulling from ubuntu
f15ce52fc004: Pull complete
c4fae638e7ce: Pull complete
a4c5be5b6e59: Pull complete
8693db7e8a00: Pull complete
ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:457b05828bdb5dcc044d93d042863fba3f2158ae249a6db5ae3934307c757c54
Status: Downloaded newer image for ubuntu:latest
real 0m9.230s
user 0m0.021s
sys 0m0.016s

Now, sure, it takes even less than that to download Alpine Linux (0.747s by my test), but again, you only ever do that once!  After you have your initial image, launching Docker containers takes exactly the same amount of time (0.233s), and the incremental storage behavior is identical.  See:

ubuntu@ip-172-30-0-129:/tmp/docker⟫ time sudo docker run alpine /bin/true
real 0m0.233s
user 0m0.014s
sys 0m0.001s
ubuntu@ip-172-30-0-129:/tmp/docker⟫ time sudo docker run ubuntu /bin/true
real 0m0.234s
user 0m0.012s
sys 0m0.002s

(4) I regularly communicate sincere, warm congratulations to our friends at Docker Inc on their continued growth.  shykes publicly mentioned the hiring of the maintainer of Alpine Linux in that Hacker News post.  As a long time Linux distro developer myself, I have tons of respect for everyone involved in building a high quality Linux distribution.  In fact, Canonical employs over 700 people, in 44 countries, working around the clock, all calendar year, to make Ubuntu the world's most popular Linux OS.  Importantly, that includes a dedicated security team that has an outstanding track record over the last 12 years, keeping Ubuntu servers, clouds, desktops, laptops, tablets, and phones up-to-date and protected against the latest security vulnerabilities.  I don't know Natanael personally, but I'm intimately aware of what a spectacular amount of work it is to maintain and secure an OS distribution, as it makes its way into enterprise and production deployments.  Good luck!

(5) There are currently 5,854 packages available via apk in Alpine Linux (sudo docker run alpine apk search -v).  There are 8,862 packages in Ubuntu Main (officially supported by Canonical), and 53,150 binary packages across all of Ubuntu Main, Universe, Restricted, and Multiverse, supported by the greater Ubuntu community.  Nearly all 50,000+ packages are updated every 6 months, on time, every time, and we release an LTS version of Ubuntu and the best of open source software in the world every 2 years.  Like clockwork.  Choice.  Velocity.  Stability.  That's what Ubuntu brings.
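
If you'd like to reproduce a rough count on your own Ubuntu system (with all four components enabled), something along these lines will do; exact numbers vary by release and architecture:

apt-cache pkgnames | wc -l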

Docker holds a special place in the Ubuntu ecosystem, and Ubuntu has been instrumental in Docker's growth over the last 3 years.  Where we go from here, is largely up to the cross-section of our two vibrant communities.

And so I ask you honestly...what do you want to see?  How would you like to see Docker and Ubuntu operate together?

I'm Canonical's Product Manager for Ubuntu Server, I'm responsible for Canonical's relationship with Docker Inc, and I will read absolutely every comment posted below.

Cheers,
:-Dustin

p.s. I'm speaking at Container Summit in New York City today, and wrote this post from the top of the (inspiring!) One World Observatory at the World Trade Center this morning.  Please come up and talk to me, if you want to share your thoughts (at Container Summit, not the One World Observatory)!


Dustin Kirkland


As always, I enjoyed speaking at the SCALE14x event, especially at the new location in Pasadena, California!

What if you could adapt a package from a newer version of Ubuntu, onto your stable LTS desktop/server?

Or, as a developer, what if you could provide your latest releases to your users running an older LTS version of Ubuntu?

Introducing adapt!

adapt is a lot like apt...  It’s a simple command that installs packages.

But it “adapts” a requested version to run on your current system.

It's a simple command that installs any package from any release of Ubuntu into any version of Ubuntu.

How does adapt work?

Simple… Containers!

More specifically, LXD system containers.

Why containers?

Containers can run anywhere, physical, virtual, desktops, servers, and any CPU architecture.

And containers are light and fast!  Near-zero latency and no virtualization overhead.

Most importantly, system containers are perfect copies of the released distribution, the operating system itself.

And all of that continuous integration testing we perform on every single Ubuntu release?

We leverage that!
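
Under the hood, the idea looks roughly like this; note this is a hand-wavy sketch of the mechanism using plain LXD commands, not adapt's actual interface, and "some-newer-package" is just a placeholder:

lxc launch ubuntu-daily:x newer-release                    # a system container running the newer Ubuntu release
lxc exec newer-release -- apt update                       # work inside that container...
lxc exec newer-release -- apt install -y some-newer-package
lxc exec newer-release -- some-newer-package --version     # ...and run the newer package on your LTS host
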
You can download a PDF of the slides for my talk here, or flip through them here:



I hope you enjoy some of the magic that LXD is making possible ;-)

Cheers!
Dustin

Dustin Kirkland

tl;dr

  • Put /tmp on tmpfs and you'll improve your Linux system's I/O, reduce your carbon footprint and electricity usage, stretch the battery life of your laptop, extend the longevity of your SSDs, and provide stronger security.
  • In fact, we should do that by default on Ubuntu servers and cloud images.
  • Having tested 502 physical and virtual servers in production at Canonical, 96.6% of them could immediately fit all of /tmp in half of the free memory available and 99.2% could fit all of /tmp in (free memory + free swap).

Try /tmp on tmpfs Yourself

$ echo "tmpfs /tmp tmpfs rw,nosuid,nodev" | sudo tee -a /etc/fstab
$ sudo reboot
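
After the reboot, a quick sanity check confirms the new mount (by default, a tmpfs is sized to half of your RAM unless you pass a size option):

$ findmnt /tmp
$ df -h /tmp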

Background

In April 2009, I proposed putting /tmp on tmpfs (an in memory filesystem) on Ubuntu servers by default -- under certain conditions, like, well, having enough memory. The proposal was "approved", but got hung up for various reasons.  Now, again in 2016, I proposed the same improvement to Ubuntu here in a bug, and there's a lively discussion on the ubuntu-cloud and ubuntu-devel mailing lists.

The benefits of /tmp on tmpfs are:
  • Performance: reads, writes, and seeks are insanely fast in a tmpfs; as fast as accessing RAM
  • Security: data leaks to disk are prevented (especially when swap is disabled), and since /tmp is its own mount point, we should add the nosuid and nodev options (and motivated sysadmins could add noexec, if they desire).
  • Energy efficiency: disk wake-ups are avoided
  • Reliability: fewer NAND writes to SSD disks
In the interest of transparency, I'll summarize the downsides:
  • There's sometimes less space available in memory than in the root filesystem where /tmp traditionally resides
  • Writing to tmpfs could evict other information from memory to make space
You can learn more about Linux tmpfs here.

Not Exactly Uncharted Territory...

Fedora proposed and implemented this in Fedora 18 a few years ago, citing that Solaris has been doing this since 1994. I just installed Fedora 23 into a VM and confirmed that /tmp is a tmpfs in the default installation, and ArchLinux does the same. Debian debated doing so, in this thread, which starts with all the reasons not to put /tmp on a tmpfs; do make sure you read the whole thread, though, and digest both the pros and cons, as both are represented throughout the thread.

Full Data Treatment

In the current thread on ubuntu-cloud and ubuntu-devel, I was asked for some "real data"...

In fact, across the many debates for and against this feature in Ubuntu, Debian, Fedora, ArchLinux, and others, there is plenty of supposition, conjecture, guesswork, and presumption.  But seeing as we're talking about data, let's look at some real data!

Here's an analysis of a (non-exhaustive) set of 502 of Canonical's production servers that run Ubuntu.com, Launchpad.net, and hundreds of related services, including OpenStack, dozens of websites, code hosting, databases, and more. The servers sampled are slightly biased toward physical machines over virtual machines, but both are present in the survey, and a wide range of uptimes is represented, from less than a day to 1306 days (with live-patched kernels, of course).  Note that this is not an exhaustive survey of all servers at Canonical.

I humbly invite further study and analysis of the raw, tab-separated data, which you can find at:
The column headers are:
  • Column 1: The host names have been anonymized to sequential index numbers
  • Column 2: `du -s /tmp` disk usage of /tmp as of 2016-01-17 (ie, this is one snapshot in time)
  • Columns 3-8: The output of the `free` command, memory in KB, for each server
  • Columns 9-11: The output of the `free` command, swap in KB, for each server
  • Column 12: The number of inodes in /tmp
I have imported it into a Google Spreadsheet to do some data treatment. You're welcome to do the same, or use the spreadsheet of your choice.

For the numbers below, 1 MB = 1000 KB, and 1 GB = 1000 MB, per Wikipedia. (Let's argue MB and MiB elsewhere, shall we?)  The mean is the arithmetic average.  The median is the middle value in a sorted list of numbers.  The mode is the number that occurs most often.  If you're confused, this article might help.  All calculations are accurate to at least 2 significant digits.

Statistical summary of /tmp usage:

  • Max: 101 GB
  • Min: 4.0 KB
  • Mean: 453 MB
  • Median: 16 KB
  • Mode: 4.0 KB
Looking at all 502 servers, there are two extreme outliers in terms of /tmp usage. One server has 101 GB of data in /tmp, and the other has 42 GB. The latter is a very noisy django.log. There are 4 more servers using between 10 GB and 12 GB of /tmp. The remaining 496 servers surveyed (98.8%) are using less than 4.8 GB of /tmp. In fact, 483 of the servers surveyed (96.2%) use less than 1 GB of /tmp. 454 of the servers surveyed (90.4%) use less than 100 MB of /tmp. 414 of the servers surveyed (82.5%) use less than 10 MB of /tmp. And actually, 370 of the servers surveyed (73.7%) -- the overwhelming majority -- use less than 1 MB of /tmp.

Statistical summary of total memory available:

  • Max: 255 GB
  • Min: 1.0 GB
  • Mean: 24 GB
  • Median: 10.2 GB
  • Mode: 4.1 GB
All of the machines surveyed (100%) have at least 1 GB of RAM.  495 of the machines surveyed (98.6%) have at least 2 GB of RAM.  437 of the machines surveyed (87%) have at least 4 GB of RAM.  255 of the machines surveyed (50.8%) have at least 10 GB of RAM.  157 of the machines surveyed (31.3%) have more than 24 GB of RAM.  74 of the machines surveyed (14.7%) have at least 64 GB of RAM.

Statistical summary of total swap available:

  • Max: 201 GB
  • Min: 0.0 KB
  • Mean: 13 GB
  • Median: 6.3 GB
  • Mode: 2.96 GB
485 of the machines surveyed (96.6%) have at least some swap enabled, while 17 of the machines surveyed (3.4%) have zero swap configured. One of these swap-less machines is using 415 MB of /tmp; that machine happens to have 32 GB of RAM. All of the rest of the swap-less machines are using between 4 KB and 52 KB of /tmp (inconsequential amounts), and have between 2 GB and 28 GB of RAM.  5 machines (1.0%) have over 100 GB of swap space.

Statistical summary of swap usage:

  • Max: 19 GB
  • Min: 0.0 KB
  • Mean: 657 MB
  • Median: 18 MB
  • Mode: 0.0 KB
476 of the machines surveyed (94.8%) are using less than 4 GB of swap. 463 of the machines surveyed (92.2%) are using less than 1 GB of swap. And 366 of the machines surveyed (72.9%) are using less than 100 MB of swap.  There are 18 "swappy" machines (3.6%), using 10 GB or more swap.

Modeling /tmp on tmpfs usage

Next, I took the total memory (RAM) in each machine, divided it in half (which is the default allocation for /tmp on tmpfs), and subtracted the total /tmp usage on each system, to determine "if" all of that system's /tmp could actually fit into its tmpfs using free memory alone (i.e., without swap and without evicting anything from memory).

485 of the machines surveyed (96.6%) could store all of their /tmp in a tmpfs, in free memory alone -- i.e. without evicting anything from cache.

Now, if we take each machine, and sum each system's "Free memory" and "Free swap", and check its /tmp usage, we'll see that 498 of the systems surveyed (99.2%) could store the entire contents of /tmp in tmpfs free memory + swap available. The remaining 4 are our extreme outliers identified earlier, with /tmp usages of [101 GB, 42 GB, 13 GB, 10 GB].
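
If you'd like to repeat that arithmetic against the raw data, a rough sketch along these lines works; the file name is a placeholder and the column positions are assumptions based on the description above, so check them against the actual file:

$ awk -F'\t' '$2 <= $3 / 2 { fits++ } END { printf "%d of %d servers (%.1f%%) fit /tmp in half of RAM\n", fits, NR, 100 * fits / NR }' tmp-survey.tsv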

Performance of tmpfs versus ext4-on-SSD

Finally, let's look at some raw (albeit rough) read and write performance numbers, using a simple dd model.

My /tmp is on a tmpfs:
kirkland@x250:/tmp⟫ df -h .
Filesystem Size Used Avail Use% Mounted on
tmpfs 7.7G 2.6M 7.7G 1% /tmp

Let's write 2 GB of data:
kirkland@x250:/tmp⟫ dd if=/dev/zero of=/tmp/zero bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 1.56469 s, 1.4 GB/s

And let's write it completely synchronously:
kirkland@x250:/tmp⟫ dd if=/dev/zero of=./zero bs=2G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 2.47235 s, 869 MB/s

Let's try the same thing to my Intel SSD:
kirkland@x250:/local⟫ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/dm-0 217G 106G 100G 52% /

And write 2 GB of data:
kirkland@x250:/local⟫ dd if=/dev/zero of=./zero bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 7.52918 s, 285 MB/s

And let's redo it completely synchronously:
kirkland@x250:/local⟫ dd if=/dev/zero of=./zero bs=2G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 11.9599 s, 180 MB/s

Let's go back and read the tmpfs data:
kirkland@x250:~⟫ dd if=/tmp/zero of=/dev/null bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 1.94799 s, 1.1 GB/s

And let's read the SSD data:
kirkland@x250:~⟫ dd if=/local/zero of=/dev/null bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 2.55302 s, 841 MB/s

Now, let's create 10,000 small files (1 KB) in tmpfs:
kirkland@x250:/tmp/foo⟫ time for i in $(seq 1 10000); do dd if=/dev/zero of=$i bs=1K count=1 oflag=dsync ; done
real 0m15.518s
user 0m1.592s
sys 0m7.596s

And let's do the same on the SSD:
kirkland@x250:/local/foo⟫ time for i in $(seq 1 10000); do dd if=/dev/zero of=$i bs=1K count=1 oflag=dsync ; done
real 0m26.713s
user 0m2.928s
sys 0m7.540s

For better or worse, I don't have any spinning disks, so I couldn't repeat the tests there.

So on these rudimentary read/write tests via dd, I got 869 MB/s - 1.4 GB/s write to tmpfs and 1.1 GB/s read from tmpfs, and 180 MB/s - 285 MB/s write to SSD and 841 MB/s read from SSD.

Surely there are more scientific ways of measuring I/O to tmpfs and physical storage, but I'm confident that, by any measure, you'll find tmpfs extremely fast when tested against even the fastest disks and filesystems.

Summary

  • /tmp usage
    • 98.8% of the servers surveyed use less than 4.8 GB of /tmp
    • 96.2% use less than 1.0 GB of /tmp
    • 73.7% use less than 1.0 MB of /tmp
    • The mean/median/mode are [453 MB / 16 KB / 4 KB]
  • Total memory available
    • 98.6% of the servers surveyed have at least 2.0 GB of RAM
    • 88.0% have at least 4.0 GB of RAM
    • 57.4% have at least 8.0 GB of RAM
    • The mean/median/mode are [24 GB / 10 GB / 4 GB]
  • Swap available
    • 96.6% of the servers surveyed have some swap space available
    • The mean/median/mode are [13 GB / 6.3 GB / 3 GB]
  • Swap used
    • 94.8% of the servers surveyed are using less than 4 GB of swap
    • 92.2% are using less than 1 GB of swap
    • 72.9% are using less than 100 MB of swap
    • The mean/median/mode are [657 MB / 18 MB / 0 KB]
  • Modeling /tmp on tmpfs
    • 96.6% of the machines surveyed could store all of the data they currently have stored in /tmp, in free memory alone, without evicting anything from cache
    • 99.2% of the machines surveyed could store all of the data they currently have stored in /tmp in free memory + free swap
    • 4 of the 502 machines surveyed (0.8%) would need special handling, reconfiguration, or more swap

Conclusion


  • Can /tmp be mounted as a tmpfs always, everywhere?
    • No, we did identify a few systems (4 out of 502 surveyed, 0.8% of total) consuming inordinately large amounts of data in /tmp (101 GB, 42 GB), and with insufficient available memory and/or swap.
    • But those were very much the exception, not the rule.  In fact, 96.6% of the systems surveyed could fit all of /tmp in half of the freely available memory in the system.
  • Is this the first time anyone has suggested or tried this as a Linux/UNIX system default?
    • Not even remotely.  Solaris has used tmpfs for /tmp for 22 years, and Fedora and ArchLinux for at least the last 4 years.
  • Is tmpfs really that much faster, more efficient, more secure?
    • Damn skippy.  Try it yourself!
:-Dustin

Dustin Kirkland


Picture yourself containers on a server
With systemd trees and spawned tty's
Somebody calls you, you answer quite quickly
A world with the density so high

    - Sgt. Graber's LXD Smarts Club Band

Last week, we proudly released Ubuntu 15.10 (Wily) -- the final developer snapshot of the Ubuntu Server before we focus the majority of our attention on quality, testing, performance, documentation, and stability for the Ubuntu 16.04 LTS cycle in the next 6 months.

Notably, LXD has been promoted to the Ubuntu Main archive, now commercially supported by Canonical.  That has enabled us to install LXD by default on all Ubuntu Servers, from 15.10 forward.
Join us for an interactive, live webinar on November 12th at 5pm BST/12pm EST led by James Page, where he will demonstrate LXD as the fastest hypervisor in OpenStack!
That means that every Ubuntu server -- Intel, AMD, ARM, POWER, and even Virtual Machines in the cloud -- is now a full machine container hypervisor, capable of hosting hundreds of machine containers, right out of the box!

LXD in the Sky with Diamonds!  Well, LXD is in the Cloud with Diamond level support from Canonical, anyway.  You can even test it in your web browser here.

The development tree of Xenial (Ubuntu 16.04 LTS) has already inherited this behavior, and we will celebrate this feature broadly through our use of LXD containers in Juju, MAAS, and the reference platform of Ubuntu OpenStack, as well as the new nova-lxd hypervisor in the OpenStack Autopilot within Landscape.

While the young and the restless are already running Wily Ubuntu 15.10, the bold and the beautiful are still bound to their Trusty Ubuntu 14.04 LTS servers.

At Canonical, we understand both motivations, and this is why we have backported LXD to the Trusty archives, for safe, simple consumption and testing of this new generation of machine containers there, on your stable LTS.

Installing LXD on Trusty simply requires enabling the trusty-backports pocket, and installing the lxd package from there, with these 3 little commands:

sudo sed -i -e "/trusty-backports/ s/^# //" /etc/apt/sources.list
sudo apt-get update; sudo apt-get dist-upgrade -y
sudo apt-get -t trusty-backports install lxd

In minutes, you can launch your first LXD containers.  First, inherit your new group permissions, so you can execute the lxc command as your non-root user.  Then, import some images, and launch a new container named lovely-rita.  Shell into that container, and examine the process tree, install some packages, check the disk and memory and cpu available.  Finally, exit when you're done, and optionally delete the container.

newgrp lxd                                 # pick up your new lxd group membership
lxd-images import ubuntu --alias ubuntu    # import the Ubuntu container image
lxc launch ubuntu lovely-rita              # launch a new container named lovely-rita
lxc list
lxc exec lovely-rita bash                  # shell into the container
ps -ef                                     # examine the process tree
apt-get update                             # refresh the package lists; install whatever you like
df -h                                      # check the disk available
free                                       # ...and the memory
cat /proc/cpuinfo                          # ...and the cpu
exit                                       # leave the container
lxc delete lovely-rita                     # optionally, delete the container

I was able to run over 600 containers simultaneously on my Thinkpad (x250, 16GB of RAM), and over 60 containers on an m1.small in Amazon (1.6GB of RAM).

We're very interested in your feedback, as LXD is one of the most important features of the Ubuntu 16.04 LTS.  You can learn more about LXD, view the source code, file bugs, discuss on the mailing list, and peruse the Linux Containers upstream projects.

With a little help from my friends!
:-Dustin

Dustin Kirkland


I delivered a presentation and an exciting live demo in San Francisco this week at the Container Summit (organized by Joyent).

It was professionally recorded by the A/V crew at the conference.  The live demo begins at the 25:21 mark.


You can also find the slide deck embedded below and download the PDFs from here.


Cheers,
:-Dustin

Dustin Kirkland


Canonical is delighted to sponsor ContainerCon 2015, a Linux Foundation event in Seattle next week, August 17-19, 2015. It's quite exciting to see the A-list of sponsors, many of them newcomers to this particular technology, teaming with energy around containers. 

From chroots to BSD Jails and Solaris Zones, the concepts behind containers were established decades ago, and in fact traverse the spectrum of server operating systems. At Canonical, we've been working on containers in Ubuntu for more than half a decade, providing a home and resources for stewardship and maintenance of the upstream Linux Containers (LXC) project since 2010.

Last year, we publicly shared our designs for LXD -- a new stratum on top of LXC that brings the advantages of a traditional hypervisor into the faster, more efficient world of containers.

Those designs are now reality, with the open source Golang code readily available on Github, and Ubuntu packages available in a PPA for all supported releases of Ubuntu, and already in the Ubuntu 15.10 beta development tree. With ease, you can launch your first LXD containers in seconds, following this simple guide.

LXD is a persistent daemon that provides a clean RESTful interface to manage (start, stop, clone, migrate, etc.) any of the containers on a given host.
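
To give a feel for what that looks like in practice, here's a small sketch; the lxc commands are simply clients of that same REST API, and the socket path shown below assumes the stock Ubuntu packaging of LXD:

lxc launch ubuntu demo       # start a container
lxc list                     # enumerate containers
lxc stop demo                # stop it again
curl --unix-socket /var/lib/lxd/unix.socket lxd/1.0/containers    # or hit the REST API directly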

Hosts running LXD are handily federated into clusters of container hypervisors, and can work as Nova Compute nodes in OpenStack, for example, delivering Infrastructure-as-a-Service cloud technology at lower costs and greater speeds.

Here, LXD and Docker are quite complementary technologies. LXD furnishes a dynamic platform for "system containers" -- containers that behave like physical or virtual machines, supplying all of the functionality of a full operating system (minus the kernel, which is shared with the host). Such "machine containers" are the core of IaaS clouds, where users focus on instances with compute, storage, and networking that behave like traditional datacenter hardware.

LXD runs perfectly well along with Docker, which supplies a framework for "application containers" -- containers that enclose individual processes that often relate to one another as pools of micro services and deliver complex web applications.

Moreover, the Zen of LXD is the fact that the underlying container implementation is actually decoupled from the RESTful API that drives LXD functionality. We are most excited to discuss next week at ContainerCon our work with Microsoft around the LXD RESTful API, as a cross-platform container management layer.

Ben Armstrong, a Principal Program Manager Lead at Microsoft on the core virtualization and container technologies, has this to say:
“As Microsoft is working to bring Windows Server Containers to the world – we are excited to see all the innovation happening across the industry, and have been collaborating with many projects to encourage and foster this environment. Canonical’s LXD project is providing a new way for people to look at and interact with container technologies. Utilizing ‘system containers’ to bring the advantages of container technology to the core of your cloud infrastructure is a great concept. We are looking forward to seeing the results of our engagement with Canonical in this space.”
Finally, if you're in Seattle next week, we hope you'll join us for the technical sessions we're leading at ContainerCon 2015, including: "Putting the D in LXD: Migration of Linux Containers", "Container Security - Past, Present, and Future", and "Large Scale Container Management with LXD and OpenStack". Details are below.
Date: Monday, August 17 • 2:20pm - 3:10pm
Title: Large Scale Container Management with LXD and OpenStack
Speaker: Stéphane Graber
Abstract: http://sched.co/3YK6
Location: Grand Ballroom B
Schedule: http://sched.co/3YK6
Date: Wednesday, August 19 • 10:25am - 11:15am
Title: Putting the D in LXD: Migration of Linux Containers
Speaker: Tycho Andersen
Abstract: http://sched.co/3YTz
Location: Willow A
Schedule: http://sched.co/3YTz
Date: Wednesday, August 19 • 3:00pm - 3:50pm
Title: Container Security - Past, Present and Future
Speaker: Serge Hallyn
Abstract: http://sched.co/3YTl
Location: Ravenna
Schedule: http://sched.co/3YTl
Cheers,
Dustin

Dustin Kirkland

tl;dr:  Your Ubuntu-based container is not a copyright violation.  Nothing to see here.  Carry on.
I am speaking for my employer, Canonical, when I say you are not violating our policies if you use Ubuntu with Docker in sensible, secure ways.  Some have claimed otherwise, but that’s simply sensationalist and untrue.

Canonical publishes Ubuntu images for Docker specifically so that they will be useful to people. You are encouraged to use them! We see no conflict between our policies and the common sense use of Docker.

Going further, we distribute Ubuntu in many different signed formats -- ISOs, root tarballs, VMDKs, AMIs, IMGs, Docker images, among others.  We take great pride in this work, and provide them to the world at large, on ubuntu.com, in public clouds like AWS, GCE, and Azure, as well as in OpenStack and on DockerHub.  These images, and their signatures, are mirrored by hundreds of organizations all around the world. We would not publish Ubuntu in the DockerHub if we didn’t hope it would be useful to people using the DockerHub. We’re delighted for you to use them in your public clouds, private clouds, and bare metal deployments.
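
If you'd like to check one of those signatures yourself, here's a quick sketch, assuming you've downloaded an image along with the SHA256SUMS and SHA256SUMS.gpg files published next to it, and have the Ubuntu image signing key in your GPG keyring:

gpg --verify SHA256SUMS.gpg SHA256SUMS          # verify the checksum file's signature
sha256sum -c SHA256SUMS 2>/dev/null | grep OK   # verify the downloaded image against the checksums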

Any Docker user will recognize these, as the majority of all Dockerfiles start with these two words....

FROM ubuntu

In fact, we gave away hundreds of these t-shirts at DockerCon.


We explicitly encourage distribution and redistribution of Ubuntu images and packages! We also embrace a very wide range of community remixes and modifications. We go further than any other commercially supported Linux vendor to support developers and community members scratching their itches. There are dozens of such derivatives and many more commercial initiatives based on Ubuntu - we are definitely not trying to create friction for people who want to get stuff done with Ubuntu.

Our policy exists to ensure that when you receive something that claims to be Ubuntu, you can trust that it will work to the same standard, regardless of where you got it from. And people everywhere tell us they appreciate that - when they get Ubuntu on a cloud or as a VM, it works, and they can trust it.  That concept is actually hundreds of years old, and we’ll talk more about that in a minute....


So, what do I mean by “sensible use” of Docker? In short - secure use of Docker. If you are using a Docker container then you are effectively giving the producer of that container ‘root’ on your host. We can safely assume that people sharing an Ubuntu docker based container know and trust one another, and their use of Ubuntu is explicitly covered as personal use in our policy. If you trust someone to give you a Docker container and have root on your system, then you can handle the risk that they inadvertently or deliberately compromise the integrity or reliability of your system.

Our policy distinguishes between personal use, which we can generalise to any group of collaborators who share root passwords, and third party redistribution, which is what people do when they exchange OS images with strangers.

Third party redistribution is more complicated because, when things go wrong, there’s a real question as to who is responsible for it. Here’s a real example: a school district buys laptops for all their students with free software. A local supplier takes their preferred Linux distribution and modifies parts of it (like the kernel) to work on their hardware, and sells them all the PCs. A month later, a distro kernel update breaks all the school laptops. In this case, the Linux distro who was not involved gets all the bad headlines, and the free software advocates who promoted the whole idea end up with egg on their faces.

We’ve seen such cases on real hardware, and in public clouds and other, similar environments.  Digital Ocean very famously published some modified and very broken Ubuntu images, outside of Canonical's policies.  That's inherently wrong, and easily avoidable.

So we simply say, if you’re going to redistribute Ubuntu to third parties who are trusting both you and Ubuntu to get it right, come and talk to Canonical and we’ll work out how to ensure everybody gets what they want and need.

Here’s a real exercise I hope you’ll try...

  1. Head over to your local purveyor of fine wines and liquors.
  2. Pick up a nice bottle of Champagne, Single Malt Scotch Whisky, Kentucky Straight Bourbon Whiskey, or my favorite -- a rare bottle of Lambic Oude Gueze.
  3. Carefully check the label, looking for a seal of Appellation d'origine contrôlée.
  4. That seal should earn your confidence that the bottle was produced according to strict quality, format, and geographic standards.
  5. Before you pop the cork, check the seal, to ensure it hasn’t been opened or tampered with.  Now, drink it however you like.
  6. Pour that Champagne over orange juice (if you must).  Toss a couple ice cubes in your Scotch (if that’s really how you like it).  Pour that Bourbon over a Coke (if that’s what you want).
  7. Enjoy however you like -- straight up or mixed to taste -- with your own guests in the privacy of your home.  Just please don’t pour those concoctions back into the bottle, shove a cork in, put them back on the shelf at your local liquor store and try to pass them off as Champagne/Scotch/Bourbon.


Rather, if that’s really what you want to do -- distribute a modified version of Ubuntu -- simply contact us and ask us first (thanks for sharing that link, mjg59).  We have some amazing tools that can help you avoid that situation entirely, or at the very least let us help you do it well; either way, everyone benefits.

Believe it or not, we’re really quite reasonable people!  Canonical has a lengthy, public track record, donating infrastructure and resources to many derivative Ubuntu distributions.  Moreover, we’ve successfully contracted mutually beneficial distribution agreements with numerous organizations and enterprises. The result is happy users and happy companies.

FROM ubuntu,
Dustin

The one and only Champagne region of France

Read more
Dustin Kirkland


As you probably remember from grade school math class, primes are numbers that are only divisible by 1 and themselves.  2, 3, 5, 7, and 11 are the first 5 prime numbers, for example.
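
If you want to poke at this from a shell, the factor utility from GNU coreutils prints a number's prime factorization; a prime simply factors to itself:

$ factor 11
11: 11
$ factor 12
12: 2 2 3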

Many computer operations, such as public-key cryptography, depend entirely on prime numbers.  In fact, RSA encryption, invented in 1978, uses a modulus that is the product of two very large primes for encryption and decryption.  The security of asymmetric encryption is tightly coupled with the computational difficulty of factoring large numbers.  I actually use prime numbers as the status update intervals in Byobu, in order to improve performance and spread out the update spikes.
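
If you're curious, you can see those two large primes in any RSA key you generate yourself; here's a quick sketch using the stock openssl tool (the key file name is just an example):

$ openssl genrsa -out example.pem 2048
$ openssl rsa -in example.pem -noout -text | grep -A2 prime1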

Euclid proved that there are infinitely many prime numbers around 300 BC.  But the Prime Number Theorem (proven in the 19th century) says that the probability that a randomly chosen number is prime is inversely proportional to its number of digits.  That means larger primes are increasingly rare, and finding them only gets harder as they get bigger!
What's the largest known prime number in the world?

Well, it has 17,425,170 decimal digits!  If you wanted to print it out in a size 11 font, it would take 6,543 pages -- or 14 reams of paper!

That number is actually one less than a very large power of 2: 2^57,885,161 - 1.  It was discovered by Curtis Cooper on January 25, 2013, on an Intel Core2 Duo.

Actually, each of the last 14 record largest prime numbers discovered (between 1996 and today) has been of that form, 2^P-1.  Numbers of that form are called Mersenne Prime Numbers, named after Friar Marin Mersenne, a French priest who studied them in the 1600s.
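
You can play with the small ones right from a shell (using bc and factor); note that a prime exponent P is necessary but not sufficient (2^11-1, for example, is not prime):

$ echo '2^7 - 1' | bc
127
$ factor 127
127: 127
$ echo '2^11 - 1' | bc
2047
$ factor 2047
2047: 23 89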


Friar Mersenne's work continues today in the form of the Great Internet Mersenne Prime Search, and the mprime program, which has been used to find those 14 huge prime numbers since 1996.

mprime is a massively parallel, CPU-scavenging utility, much like SETI@home or the Folding@home protein folding project.  It runs in the background, consuming idle resources, working on its own little piece of the problem.  mprime is open source code, and also distributed as a statically compiled binary.  And it will make a fine example of how to package a service into a Docker container, a Juju charm, and a Snappy snap.


Docker Container

First, let's build the Docker container, which will serve as our fundamental building block.  You'll need to download the mprime tarball from here.  Extract it, and the directory structure should look a little like this (or you can browse it here):

├── license.txt
├── local.txt
├── mprime
├── prime.log
├── prime.txt
├── readme.txt
├── results.txt
├── stress.txt
├── undoc.txt
├── whatsnew.txt
└── worktodo.txt

And then, create a Dockerfile that copies the files we need into the image.  Here's our example.

FROM ubuntu
MAINTAINER Dustin Kirkland email@example.com
COPY ./mprime /opt/mprime/
COPY ./license.txt /opt/mprime/
COPY ./prime.txt /opt/mprime/
COPY ./readme.txt /opt/mprime/
COPY ./stress.txt /opt/mprime/
COPY ./undoc.txt /opt/mprime/
COPY ./whatsnew.txt /opt/mprime/
CMD ["/opt/mprime/mprime", "-w/opt/mprime/"]

Now, build your Docker image with:

$ sudo docker build .
Sending build context to Docker daemon 36.02 MB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
...
Successfully built de2e817b195f
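
Since I built without a tag above, the image needs a name before it can be pushed; something like this (using the image ID from the build output) should do the trick:

$ sudo docker tag de2e817b195f kirkland/mprime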

Then publish the image to Dockerhub.

$ sudo docker push kirkland/mprime

You can see that image, which I've publicly shared here: https://registry.hub.docker.com/u/kirkland/mprime/



Now you can run this image anywhere you can run Docker.

$ sudo docker run -d kirkland/mprime

And verify that it's running:

$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c9233f626c85 kirkland/mprime:latest "/opt/mprime/mprime 24 seconds ago Up 23 seconds furious_pike

Juju Charm

So now, let's create a Juju Charm that uses this Docker container.  Actually, we're going to create a subordinate charm.  Subordinate services in Juju are often monitoring and logging services, things that run alongside primary services.  Something like mprime is a good example of something that could be a subordinate service, attached to one or many other services in a Juju model.

Our directory structure for the charm looks like this (or you can browse it here):

└── trusty
└── mprime
├── config.yaml
├── copyright
├── hooks
│   ├── config-changed
│   ├── install
│   ├── juju-info-relation-changed
│   ├── juju-info-relation-departed
│   ├── juju-info-relation-joined
│   ├── start
│   ├── stop
│   └── upgrade-charm
├── icon.png
├── icon.svg
├── metadata.yaml
├── README.md
└── revision
3 directories, 15 files

The three key files we should look at here are metadata.yaml, hooks/install and hooks/start:

$ cat metadata.yaml
name: mprime
summary: Search for Mersenne Prime numbers
maintainer: Dustin Kirkland
description: |
  A Mersenne prime is a prime of the form 2^P-1.
  The first Mersenne primes are 3, 7, 31, 127
  (corresponding to P = 2, 3, 5, 7).
  There are only 48 known Mersenne primes, and
  the 13 largest known prime numbers in the world
  are all Mersenne primes.
  This charm uses a Docker image that includes the
  statically built, 64-bit Linux binary mprime
  which will consume considerable CPU and Memory,
  searching for the next Mersenne prime number.
  See http://www.mersenne.org/ for more details!
tags:
  - misc
subordinate: true
requires:
  juju-info:
    interface: juju-info
    scope: container

And:

$ cat hooks/install
#!/bin/bash
apt-get install -y docker.io
docker pull kirkland/mprime

And:

$ cat hooks/start
#!/bin/bash
service docker restart
docker run -d kirkland/mprime
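
The charm also ships a hooks/stop (you can see it in the tree above); a minimal sketch of what it might contain, assuming the container was launched from the kirkland/mprime image as in hooks/start, is:

$ cat hooks/stop
#!/bin/bash
# Find any containers running the kirkland/mprime image and remove them
docker ps | grep kirkland/mprime | awk '{print $1}' | xargs -r docker rm -f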

Now, we can add the mprime service to any other running Juju service.  As an example here, I'll bootstrap an environment, deploy the Apache2 charm, and attach mprime to it.

$ juju bootstrap
$ juju deploy apache2
$ juju deploy cs:~kirkland/mprime
$ juju add-relation apache2 mprime

Looking at our services, we can see everything deployed and running here:

$ juju status
services:
  apache2:
    charm: cs:trusty/apache2-14
    exposed: false
    service-status:
      current: unknown
      since: 20 Jul 2015 11:55:59-05:00
    relations:
      juju-info:
      - mprime
    units:
      apache2/0:
        workload-status:
          current: unknown
          since: 20 Jul 2015 11:55:59-05:00
        agent-status:
          current: idle
          since: 20 Jul 2015 11:56:03-05:00
          version: 1.24.2
        agent-state: started
        agent-version: 1.24.2
        machine: "1"
        public-address: 23.20.147.158
        subordinates:
          mprime/0:
            workload-status:
              current: unknown
              since: 20 Jul 2015 11:58:52-05:00
            agent-status:
              current: idle
              since: 20 Jul 2015 11:58:56-05:00
              version: 1.24.2
            agent-state: started
            agent-version: 1.24.2
            upgrading-from: local:trusty/mprime-1
            public-address: 23.20.147.158
  mprime:
    charm: local:trusty/mprime-1
    exposed: false
    service-status: {}
    relations:
      juju-info:
      - apache2
    subordinate-to:
    - apache2


Snappy Ubuntu Core Snap

Finally, let's build a Snap.  Snaps are applications that run in Ubuntu's transactional, atomic OS, Snappy Ubuntu Core.

We need the simple directory structure below (or you can browse it here):

├── meta
│   ├── icon.png
│   ├── icon.svg
│   ├── package.yaml
│   └── readme.md
└── start.sh
1 directory, 5 files

The package.yaml describes what we're actually building, and what capabilities the service needs.  It looks like this:

name: mprime
vendor: Dustin Kirkland
architecture: [amd64]
icon: meta/icon.png
version: 28.5-11
frameworks:
  - docker
services:
  - name: mprime
    description: "Search for Mersenne Prime Numbers"
    start: start.sh
    caps:
      - docker_client
      - networking

And the start.sh launches the service via Docker.

#!/bin/sh
# The docker client binary is provided by the docker framework snap
PATH=$PATH:/apps/docker/current/bin/
# Clean up any container left over from a previous run, then launch a new one
docker rm -v -f mprime
docker run --name mprime -d kirkland/mprime
# Block on the container, so the snappy service stays in the foreground and is tracked properly
docker wait mprime

Now, we can build the snap like so:

$ snappy build .
Generated 'mprime_28.5-11_amd64.snap' snap
$ ls -halF *snap
-rw-rw-r-- 1 kirkland kirkland 9.6K Jul 20 12:38 mprime_28.5-11_amd64.snap

First, let's install the Docker framework, upon which we depend:

$ snappy-remote --url ssh://snappy-nuc install docker
=======================================================
Installing docker from the store
Installing docker
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================

And now, we can install our locally built Snap.
$ snappy-remote --url ssh://snappy-nuc install mprime_28.5-11_amd64.snap
=======================================================
Installing mprime_28.5-11_amd64.snap from local environment
Installing /tmp/mprime_28.5-11_amd64.snap
2015/07/20 17:44:26 Signature check failed, but installing anyway as requested
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
mprime 2015-07-20 28.5-11 sideload
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================

Alternatively, you can install the snap directly from the Ubuntu Snappy store, where I've already uploaded the mprime snap:

$ snappy-remote --url ssh://snappy-nuc install mprime.kirkland
=======================================================
Installing mprime.kirkland from the store
Installing mprime.kirkland
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
mprime 2015-07-20 28.5-11 kirkland
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================

Conclusion

How long until this Docker image, Juju charm, or Ubuntu Snap finds a Mersenne Prime?  Almost certainly never :-)  I want to be clear: that was never the point of this exercise!

Rather, I hope you learned how easy it is to run a Docker image inside either a Juju charm or an Ubuntu snap.  And maybe you learned something about prime numbers along the way ;-)

Join us in #docker, #juju, and #snappy on irc.freenode.net.

Cheers,
Dustin

Read more
Dustin Kirkland

652 Linux containers running on a Laptop?  Are you kidding me???

A couple of weeks ago, at the OpenStack Summit in Vancouver, Canonical released the results of some scalability testing of Linux containers (LXC) managed by LXD.

Ryan Harper and James Page presented their results -- some 536 Linux containers on a very modest little Intel server (16GB of RAM), versus 37 KVM virtual machines.

Ryan has published the code he used for the benchmarking, and I've used it to reproduce the test on my dev laptop (Thinkpad x230, 16GB of RAM, Intel i7-3520M).

I managed to pack a whopping 652 Ubuntu 14.04 LTS (Trusty) containers on my Ubuntu 15.04 (Vivid) laptop!


The system load peaked at 1056 (!!!), but I was using merely 56% of 15.4GB of system memory.  Amazingly, my Unity desktop and Byobu command line were still perfectly responsive, as were the containers that I ssh'd into.  (Aside: makes me wonder whether the Linux system load average is accounting for container processes correctly...)


Check out the process tree for a few hundred system containers here!

As for KVM, I managed to launch 31 virtual machines without KSM enabled, and 65 virtual machines with KSM enabled and working hard.  So that puts the density at somewhere between 10x and 21x as many containers as virtual machines on the same laptop.
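
For reference, KSM is controlled through sysfs; if you want to check or toggle it on your own host before repeating the KVM half of the test, the paths below assume a standard Ubuntu kernel:

## Check whether KSM is currently enabled (1) or disabled (0)
$ cat /sys/kernel/mm/ksm/run
## Enable KSM
$ echo 1 | sudo tee /sys/kernel/mm/ksm/run
## See how much sharing KSM is actually achieving
$ grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing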

You can now repeat these tests, if you like.  Please share your results with #LXD on Google+ or Twitter!

I'd love to see someone try this in AWS, anywhere from an m3.small to an r3.8xlarge, and share your results ;-)

Density test instructions

## Install lxd
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-git-master
$ sudo apt-get update
$ sudo apt-get install -y lxd bzr
$ cd /tmp
## At this point, it's a good idea to logout/login or reboot
## for your new group permissions to get applied
## Grab the tests, disable the tools download
$ bzr branch lp:~raharper/+junk/density-check
$ cd density-check
$ mkdir lxd_tools
## Periodically squeeze your cache
$ sudo bash -x -c 'while true; do sleep 30; \
echo 3 | sudo tee /proc/sys/vm/drop_caches; \
free; done' &
## Run the LXD test
$ ./density-check-lxd --limit=mem:512m --load=idle release=trusty arch=amd64
## Run the KVM test
$ ./density-check-kvm --limit=mem:512m --load=idle release=trusty arch=amd64

As for the speed-of-launch test, I'll cover that in a follow-up post!

Can you contain your excitement?

Cheers!
Dustin

Read more
jdstrand

Most of this has been discussed on mailing lists, blog entries, etc, while developing Ubuntu Touch, but I wanted to write up something that ties together these conversations for Snappy. This will provide background for the conversations surrounding hardware access for snaps that will be happening soon on the snappy-devel mailing list.

Background

Ubuntu Touch has several goals that all apply to Snappy:

  • we want system-image upgrades
  • we want to replace the distro archive model with an app store model for Snappy systems
  • we want developers to be able to get their apps to users quickly
  • we want a dependable application lifecycle
  • we want the system to be easy to understand and to develop on
  • we want the system to be secure
  • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Snappy adds a few things to the above (that pertain to this conversation):

  • we want the system to be bulletproof (transactional updates with rollbacks)
  • we want the system to be easy to use for system builders
  • we want the system to be easy to use and understand for admins

Let’s look at what all these mean more closely.

system-image upgrades

  • we want system-image upgrades
  • we want the system to be bulletproof (transactional updates with rollbacks)

We want system-image upgrades so updates are fast, reliable and so people (users, admins, snappy developers, system builders, etc) always know what they have and can depend on it being there. In addition, if an upgrade goes bad, we want a mechanism to be able to rollback the system to a known good state. In order to achieve this, apps need to work within the system and live in their own area and not modify the system in unpredictable ways. The Snappy FHS is designed for this and the security policy enforces that apps follow it. This protects us from malware, sure, but at least as importantly, it protects us from programming errors and well-intentioned clever people who might accidentally break the Snappy promise.

app store

  • we want to replace the distro archive model with an app store model
  • we want developers to be able to get their apps to users quickly

Ubuntu is a fantastic distribution and we have a wonderfully rich archive of software that is refreshed on a cadence. However, the traditional distro model has a number of drawbacks, and arguably the most important one is that software developers have an extremely high barrier to overcome to get their software into users’ hands on their own time-frame. The app store model greatly helps developers and users desiring new software because it gives developers the freedom and ability to get their software out there quickly and easily, which is why Ubuntu Touch is doing this now.

In order to enable developers in the Ubuntu app store, we’ve developed a system where a developer can upload software and have it available to users in seconds with no human review, intervention or snags. We also want users to be able to trust what’s in Ubuntu’s store, so we’ve created store policies that understand the Ubuntu snappy system such that apps do not require any manual review so long as the developer follows the rules. However, the Ubuntu Core system itself is completely flexible– people can install apps that are tightly confined, loosely confined, unconfined, whatever (more on this, below). In this manner, people can develop snaps for their own needs and distribute them however they want.

It is the Ubuntu store policy that dictates what is in the store. The existing store policy is in place to improve the situation; it is based on our experiences with the traditional distro model and with earlier attempts to build app store-like experiences on top of it (eg, MyApps).

application lifecycle

  • dependable application lifecycle

This has not been discussed as much with Snappy for Ubuntu Core, but Touch needs to have a good application lifecycle model such that apps cannot run unconstrained and unpredictably in the background. In other words, we want to avoid problems with battery drain and slow systems on Touch. I think we’ve done a good job so far on Touch, and this story is continuing to evolve.

(I mention application lifecycle in this conversation for completeness and because application lifecycle and security work together via the app’s application id)

security

  • we want the system to be secure
  • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Everyone wants a system that they trust and that is secure, and security is one of the core tenets of Snappy systems. For Ubuntu Touch, we’ve created a system that is secure, that is easy to use and understand by users, and that still honors relevant, meaningful Linux traditions. For Snappy, we’ll be adding several additional security features (eg, seccomp, controlled abstract socket communication, firewalling, etc).

Our security story and app store policies give us something that is between Apple and Google. We have a strong security story that has a number of similarities to Apple, but a lightweight store policy akin to Google Play. In addition to that, our trust model is that apps not needing manual review are untrusted by the OS and have limited access to the system. On Touch we use tasteful, contextual prompting so the user may trust the apps to do things beyond what the OS allows on its own (simple example, app needs access to location, user is prompted at the time of use if the app can access it, user answers and the decision is remembered next time).

Snappy for Ubuntu Core is different not only because the UI supports a CLI, but also because we’ve defined a Snappy for Ubuntu Core user that is able to run the ‘snappy’ command as someone who is an admin, a system builder, a developer and/or someone otherwise knowledgeable enough to make a more informed trust decision. (This will come up again later, below)

easy to use

  • we want the system to be easy to understand and to develop on
  • we want the system to be easy to use for system builders
  • we want the system to be easy to use and understand for admins

We want a system that is easy to use and understand. It is key that developers are able to develop on it, that system builders are able to get their work done, and that admins can install and use the apps from the store.

For Ubuntu Touch, we’ve made a system that is easy to understand and to develop on with a simple declarative permissions model. We’ll refine that for Snappy and make it easy to develop on too. Remember, the security policy is there not just so we can be ‘super secure’ but because it is what gives us the assurances needed for system upgrades, a safe app store and an altogether bulletproof system.

As mentioned, the system we have designed is super flexible. Specifically, the underlying system supports:

  1. apps working wholly within the security policy (aka, ‘common’ security policy groups and templates)
  2. apps declaring specific exceptions to the security policy
  3. apps declaring to use restricted security policy
  4. apps declaring to run (effectively) unconfined
  5. apps shipping hand-crafted policy (that can be strict or lenient)

(Keep in mind the Ubuntu App Store policy will auto-accept apps falling under ‘1’ and trigger manual review for the others)

The above all works today (though it isn’t always friendly– we’re working on that) and the developer is in control. As such, Snappy developers have a plethora of options and can create snaps with security policy for their needs. When the developer wants to ship the app and make it available to all Snappy users via the Ubuntu App Store, then the developer may choose to work within the system to have automated reviews or choose not to and manage the process via manual reviews/commercial relationship with Canonical.
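
As a purely hypothetical illustration of that spectrum, a service stanza in a snap's package.yaml might take one of these routes; the snap name is made up, and the key names (caps, security-template) reflect my reading of the 15.04-era packaging format rather than a definitive reference:

services:
  - name: sensor-reader          # hypothetical snap service
    start: bin/sensor-reader
    # (1) work wholly within the common security policy groups:
    caps:
      - networking
    # (3)/(4) or request a more permissive, manually-reviewed template instead:
    # security-template: unconfined
    # (5) or ship hand-crafted policy files inside the snap and point at them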

Moving forward

The above works really well for Ubuntu Touch, but today there is too much friction with regard to hardware access. We will make this experience better without compromising on any of our goals. How do we put this all together, today, so people can get stuff done with snappy without sacrificing our goals, making it harder on ourselves in the future, or otherwise opening Pandora’s box? We don’t want to relax our security policy, because then we couldn’t make the bulletproof assurances we are striving for and it would be hard to tighten the security later. We could also add some temporary security policy that adds only certain accesses (eg, serial devices) but, while useful, this is too inflexible. We also don’t want to have apps declare the accesses themselves to automatically add the necessary security policy, because this (potentially) privileged access is then hidden from the Snappy for Ubuntu Core user.

The answer is simple when we remember that the Snappy for Ubuntu Core user (ie, the one who is able to run the snappy command) is knowledgeable enough to make the trust decision for giving an app access to hardware. In other words, let the admin/developer/system builder be in control.

immediate term

The first thing we are going to do is unblock people and adjust snappy to give the snappy core user the ability to add specific device access to snap-specific security policy. In essence, you’ll install a snap, then run a command to give the snap access to a particular device, and then you’re done. This simple feature will unblock developers and snappy users immediately, while still fully supporting our trust model and goals. Plus it will be worth implementing since we will likely always want to support this for maximum flexibility and portability (people can keep using traditional Linux APIs).
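
As a purely hypothetical sketch, not a committed interface, that workflow could look something like this (the snap name and the hw-assign command here are illustrative assumptions):

# install the snap as usual
$ sudo snappy install example-sensor-app
# then explicitly grant it access to one specific device (hypothetical command)
$ sudo snappy hw-assign example-sensor-app /dev/ttyUSB0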

The user experience for this will be discussed and refined on the mailing list in the coming days.

short term

After that, we’ll build on this and explore ways to make the developer and user experience better through integration with the OEM part and ways of interacting with the underlying system so that the user doesn’t have to necessarily know the device name to add, but can instead be given smart choices (this can have tie-ins to the web interface for snappy too). We’ll want to be thinking about hotpluggable devices as well.

Since this all builds on the concept of the immediate term solution, it also supports our trust-model and goals fully and is relatively easy to implement.

future

Once we have the above in place, we should have a reasonable experience for snaps needing traditional device access. This will give us time to evaluate how people are accessing hardware and see if we can make things even better by using frameworks and/or a hardware abstraction layer. In this manner, snaps can program to an easy to use API and the system can mediate access to the underlying hardware via that API.


Filed under: canonical, security, ubuntu, ubuntu-server, uncategorized

Read more