Canonical Voices

Posts tagged with 'canonical'

Dustin Kirkland


As you probably remember from grade school math class, primes are numbers that are only divisible by 1 and themselves.  2, 3, 5, 7, and 11 are the first 5 prime numbers, for example.

Many computer operations, such as public-key cryptography, depend entirely on prime numbers.  In fact, RSA encryption, invented in 1978, uses a modulus that is the product of two very large primes for encryption and decryption.  The security of asymmetric encryption is tightly coupled with the computational difficulty of factoring large numbers.  I actually use prime numbers as the status update intervals in Byobu, in order to improve performance and distribute the update spikes.
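To make that concrete, here's a toy illustration of RSA's arithmetic using bc, with the classic textbook parameters (p=61, q=53, e=17, d=2753); real keys use primes hundreds of digits long, so treat this purely as a sketch of the math:

$ p=61; q=53; n=$((p * q))        # public modulus n = 3233
$ e=17; d=2753                    # e*d = 1 mod (p-1)*(q-1) = 3120
$ echo "(65 ^ $e) % $n" | bc      # encrypt the message m=65
2790
$ echo "(2790 ^ $d) % $n" | bc    # decrypt the ciphertext c=2790
65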

Euclid proved around 300 BC that there are infinitely many prime numbers.  But the Prime Number Theorem (proven in the 19th century) says that the probability that a randomly chosen number is prime is inversely proportional to its number of digits.  That means that larger prime numbers are notoriously harder to find, and it only gets harder as they get bigger!
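You can watch that thinning-out yourself with coreutils: factor prints a number followed by its prime factors, so lines with exactly two fields are primes.

$ seq 2 1000 | factor | awk 'NF == 2' | wc -l     # primes below 1,000
168
$ seq 2 10000 | factor | awk 'NF == 2' | wc -l    # primes below 10,000
1229

That's 16.8% of the first thousand numbers, but only 12.3% of the first ten thousand.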
What's the largest known prime number in the world?

Well, it has 17,425,170 decimal digits!  If you wanted to print it out, size 11 font, it would take 6,543 pages -- or 14 reams of paper!

That number is actually one less than a very large power of 2: 2^57,885,161-1.  It was discovered by Curtis Cooper on January 25, 2013, on an Intel Core2 Duo.

Actually, each of the last 14 record largest prime numbers discovered (between 1996 and today) has been of that form, 2^P-1.  Numbers of that form are called Mersenne Prime Numbers, named after Friar Marin Mersenne, a French priest who studied them in the 1600s.
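Not every number of that form is prime, though.  A quick check with coreutils' factor shows that 2^11-1 already fails:

$ for p in 2 3 5 7 11 13; do factor $(( (1 << p) - 1 )); done
3: 3
7: 7
31: 31
127: 127
2047: 23 89
8191: 8191

2047 = 23 x 89, so a prime exponent P is necessary but not sufficient.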


Friar Mersenne's work continues today in the form of the Great Internet Mersenne Prime Search, and the mprime program, which has been used to find those 14 huge prime numbers since 1996.

mprime is a massively parallel, CPU-scavenging utility, much like SETI@home or Folding@home.  It runs in the background, consuming spare resources, working on its little piece of the problem.  mprime is open source code, and also distributed as a statically compiled binary.  And it will make a fine example of how to package a service into a Docker container, a Juju charm, and a Snappy snap.


Docker Container

First, let's build the Docker container, which will serve as our fundamental building block.  You'll first need to download the mprime tarball from here.  Extract it, and the directory structure should look a little like this (or you can browse it here):

├── license.txt
├── local.txt
├── mprime
├── prime.log
├── prime.txt
├── readme.txt
├── results.txt
├── stress.txt
├── undoc.txt
├── whatsnew.txt
└── worktodo.txt

And then, create a Dockerfile that copies the files we need into the image.  Here's our example.

FROM ubuntu
MAINTAINER Dustin Kirkland email@example.com
COPY ./mprime /opt/mprime/
COPY ./license.txt /opt/mprime/
COPY ./prime.txt /opt/mprime/
COPY ./readme.txt /opt/mprime/
COPY ./stress.txt /opt/mprime/
COPY ./undoc.txt /opt/mprime/
COPY ./whatsnew.txt /opt/mprime/
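# run mprime in the foreground; -w points it at its working directory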
CMD ["/opt/mprime/mprime", "-w/opt/mprime/"]

Now, build and tag your Docker image (the tag is what lets us push it later) with:

$ sudo docker build -t kirkland/mprime .
Sending build context to Docker daemon 36.02 MB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
...
Successfully built de2e817b195f

Then publish the image to Dockerhub.

$ sudo docker push kirkland/mprime

You can see that image, which I've publicly shared here: https://registry.hub.docker.com/u/kirkland/mprime/



Now you can run this image anywhere you can run Docker.

$ sudo docker run -d kirkland/mprime

And verify that it's running:

$ sudo docker ps
CONTAINER ID        IMAGE                    COMMAND                CREATED             STATUS              PORTS   NAMES
c9233f626c85        kirkland/mprime:latest   "/opt/mprime/mprime    24 seconds ago      Up 23 seconds               furious_pike
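And, if you want to peek at what the worker is doing, you can tail the container's output (substituting your own container ID from docker ps):

$ sudo docker logs -f c9233f626c85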

Juju Charm

So now, let's create a Juju Charm that uses this Docker container.  Actually, we're going to create a subordinate charm.  Subordinate services in Juju are often monitoring and logging services, things that run alongside primary services.  Something like mprime is a good example of a subordinate service, attached to one or many other services in a Juju model.

Our directory structure for the charm looks like this (or you can browse it here):

└── trusty
    └── mprime
        ├── config.yaml
        ├── copyright
        ├── hooks
        │   ├── config-changed
        │   ├── install
        │   ├── juju-info-relation-changed
        │   ├── juju-info-relation-departed
        │   ├── juju-info-relation-joined
        │   ├── start
        │   ├── stop
        │   └── upgrade-charm
        ├── icon.png
        ├── icon.svg
        ├── metadata.yaml
        ├── README.md
        └── revision

3 directories, 15 files

The three key files we should look at here are metadata.yaml, hooks/install and hooks/start:

$ cat metadata.yaml
name: mprime
summary: Search for Mersenne Prime numbers
maintainer: Dustin Kirkland
description: |
  A Mersenne prime is a prime of the form 2^P-1.
  The first Mersenne primes are 3, 7, 31, 127
  (corresponding to P = 2, 3, 5, 7).
  There are only 48 known Mersenne primes, and
  the 13 largest known prime numbers in the world
  are all Mersenne primes.
  This charm uses a Docker image that includes the
  statically built, 64-bit Linux binary mprime
  which will consume considerable CPU and Memory,
  searching for the next Mersenne prime number.
  See http://www.mersenne.org/ for more details!
tags:
  - misc
subordinate: true
requires:
  juju-info:
    interface: juju-info
    scope: container

And:

$ cat hooks/install
#!/bin/bash
apt-get install -y docker.io
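# pre-fetch the image at install time, so the start hook won't block on a download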
docker pull kirkland/mprime

And:

$ cat hooks/start
#!/bin/bash
service docker restart
docker run -d kirkland/mprime
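The directory listing above also shows a hooks/stop, which isn't reproduced here; a minimal sketch of one (an assumption on my part) would just stop any containers started from our image:

$ cat hooks/stop
#!/bin/bash
# stop every container that was started from the kirkland/mprime image
docker ps | grep kirkland/mprime | awk '{print $1}' | xargs -r docker stop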

Now, we can add the mprime service to any other running Juju service.  As an example here, I'll bootstrap, deploy the Apache2 charm, and attach mprime to it.

$ juju bootstrap
$ juju deploy apache2
$ juju deploy cs:~kirkland/mprime
$ juju add-relation apache2 mprime

Looking at our services, we can see everything deployed and running here:

$ juju status
services:
  apache2:
    charm: cs:trusty/apache2-14
    exposed: false
    service-status:
      current: unknown
      since: 20 Jul 2015 11:55:59-05:00
    relations:
      juju-info:
      - mprime
    units:
      apache2/0:
        workload-status:
          current: unknown
          since: 20 Jul 2015 11:55:59-05:00
        agent-status:
          current: idle
          since: 20 Jul 2015 11:56:03-05:00
          version: 1.24.2
        agent-state: started
        agent-version: 1.24.2
        machine: "1"
        public-address: 23.20.147.158
        subordinates:
          mprime/0:
            workload-status:
              current: unknown
              since: 20 Jul 2015 11:58:52-05:00
            agent-status:
              current: idle
              since: 20 Jul 2015 11:58:56-05:00
              version: 1.24.2
            agent-state: started
            agent-version: 1.24.2
            upgrading-from: local:trusty/mprime-1
            public-address: 23.20.147.158
  mprime:
    charm: local:trusty/mprime-1
    exposed: false
    service-status: {}
    relations:
      juju-info:
      - apache2
    subordinate-to:
    - apache2


Snappy Ubuntu Core Snap

Finally, let's build a Snap.  Snaps are applications that run in Ubuntu's transactional, atomic OS, Snappy Ubuntu Core.

We need the simple directory structure below (or you can browse it here):

├── meta
│   ├── icon.png
│   ├── icon.svg
│   ├── package.yaml
│   └── readme.md
└── start.sh
1 directory, 5 files

The package.yaml describes what we're actually building, and what capabilities the service needs.  It looks like this:

name: mprime
vendor: Dustin Kirkland
architecture: [amd64]
icon: meta/icon.png
version: 28.5-11
frameworks:
  - docker
services:
  - name: mprime
    description: "Search for Mersenne Prime Numbers"
    start: start.sh
    caps:
      - docker_client
      - networking

And the start.sh launches the service via Docker.

#!/bin/sh
PATH=$PATH:/apps/docker/current/bin/
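# remove any leftover container (and its volumes) from a previous run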
docker rm -v -f mprime
docker run --name mprime -d kirkland/mprime
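# docker wait blocks until the container exits, keeping this script in the foreground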
docker wait mprime

Now, we can build the snap like so:

$ snappy build .
Generated 'mprime_28.5-11_amd64.snap' snap
$ ls -halF *snap
-rw-rw-r-- 1 kirkland kirkland 9.6K Jul 20 12:38 mprime_28.5-11_amd64.snap

First, let's install the Docker framework, upon which we depend:

$ snappy-remote --url ssh://snappy-nuc install docker
=======================================================
Installing docker from the store
Installing docker
Name           Date        Version    Developer
ubuntu-core    2015-04-23  2          ubuntu
docker         2015-07-20  1.6.1.002
webdm          2015-04-23  0.5        sideload
generic-amd64  2015-04-23  1.1
=======================================================

And now, we can install our locally built Snap.
$ snappy-remote --url ssh://snappy-nuc install mprime_28.5-11_amd64.snap
=======================================================
Installing mprime_28.5-11_amd64.snap from local environment
Installing /tmp/mprime_28.5-11_amd64.snap
2015/07/20 17:44:26 Signature check failed, but installing anyway as requested
Name           Date        Version    Developer
ubuntu-core    2015-04-23  2          ubuntu
docker         2015-07-20  1.6.1.002
mprime         2015-07-20  28.5-11    sideload
webdm          2015-04-23  0.5        sideload
generic-amd64  2015-04-23  1.1
=======================================================

Alternatively, you can install the snap directly from the Ubuntu Snappy store, where I've already uploaded the mprime snap:

$ snappy-remote --url ssh://snappy-nuc install mprime.kirkland
=======================================================
Installing mprime.kirkland from the store
Installing mprime.kirkland
Name           Date        Version    Developer
ubuntu-core    2015-04-23  2          ubuntu
docker         2015-07-20  1.6.1.002
mprime         2015-07-20  28.5-11    kirkland
webdm          2015-04-23  0.5        sideload
generic-amd64  2015-04-23  1.1
=======================================================

Conclusion

How long until this Docker image, Juju charm, or Ubuntu Snap finds a Mersenne Prime?  Almost certainly never :-)  I want to be clear: that was never the point of this exercise!

Rather I hope you learned how easy it is to run a Docker image inside either a Juju charm or an Ubuntu snap.  And maybe learned something about prime numbers along the way ;-)

Join us in #docker, #juju, and #snappy on irc.freenode.net.

Cheers,
Dustin

Read more
Michael Hall

Picture by Aaron Honeycutt

The next Ubuntu Global Jam is coming up next month, the weekend of August 7th through the 9th. Last cycle we introduced the Ubuntu Global Jam Packs, and they were such a big hit that we’re bringing them back this cycle.

Jam Packs are a miniaturized version of the conference packs that Canonical has long offered to LoCo Teams who show off Ubuntu at events. These smaller packs are designed specifically for LoCo Teams to use during their own Global Jam events, to help promote Ubuntu in their area and encourage participation with the team.

What’s in the Global Jam Pack?

The Global Jam Pack contains a number of give-away items to use during your team’s Global Jam event. This cycle the packs will contain:

  • 20 DVDs
  • 20 sticker sheets
  • 20 pens
  • 20 notebooks

There will also be one XL t-shirt for the person who is organizing the event.

Who can request a Global Jam Pack?

The Global Jam Pack is available to any LoCo team that is running a Global Jam event. It doesn’t matter if your team has verified status or not, if you are hosting a Global Jam event, you can request a Jam Pack for it.

How do I request a Global Jam Pack?

The first thing you need to do is plan a Global Jam event for your LoCo team. Global Jams happen one weekend each cycle, and are a chance for you to meet up with Ubuntu contributors in your area to work together on improving some aspect of Ubuntu. They don’t require a lot of setup, just pick a day, time and location for everybody to show up.

Once you know when and where you will be holding your event, you need to register it in the LoCo Team Portal, making sure it’s listed as being part of the Ubuntu Global Jam parent event. You can use your event page on the portal to advertise your event, and allow people to register their intention to attend.

Next you will need to fill out a community donations request for your Jam Pack. In there you will be asked for your name and shipping address. In the field for describing your request, be sure to include the link to your team’s Global Jam event.

Need help?

If you need help or advice in organizing a Global Jam event, join #ubuntu-locoteams on Freenode IRC to talk to folks from the community who have experience running them. We’ve also documented some great advice to help you with organization on our wiki, including a list of suggested topics for you to work on during your event.

Read more
Louis

Introduction

Once in a while, I get to tackle issues that have little or no documentation other than the official documentation of the product and the product’s source code.  You may know from experience that product documentation is not always sufficient to get a complete configuration working. This article intends to flesh out a solution to customizing disk configurations using Curtin.

This article takes for granted that you are familiar with MAAS install mechanisms, and that you already know how to customize installations and deploy workloads using Juju.

While my colleagues in the MAAS development team have done a tremendous job at keeping the MAAS documentation accurate (see MAAS documentation), it only covers the basics when it comes to MAAS’s preseed customization, especially when it comes to Curtin’s customization.

Curtin is MAAS’s fastpath installer, which is meant to replace Debian’s installer (familiarly known as d-i). It does a complete machine installation much faster than the standard Debian method.  But while d-i is well known and it is easy to find examples of its use on the web, Curtin does not have the same notoriety and, hence, not as much documentation.

Theory of operation

When the fastpath installer is used to install a MAAS unit (which is now the default), it will send the content of the files prefixed with curtin_ to the unit being installed.  The curtin_userdata file contains cloud-config type commands that will be applied by cloud-init when the unit is installed. If we want to apply a specific partitioning scheme to all of our units, we can modify this file and every unit will get those commands applied to it when it installs.

But what if we only have one or a few servers with a specific disk layout that requires custom partitioning?  In the following example, I will suppose that we have one server, named curtintest, which has a one terabyte (1 TB) disk, and that we want to partition this disk with the following partition table:

  • Partition #1 has the /boot file system and is bootable
  • Partition #2 has the root (/) file system
  • Partition #3 has a 31 GB file system
  • Partition #4 has 32 GB of swap space
  • Partition #5 has the remaining disk space

Since only one server has such a disk, the partitioning should be specific to that curtintest server only.

Setting up Curtin development environment

To get to a working MAAS partitioning setup, it is preferable to use Curtin’s development environment to test the Curtin commands. Using MAAS deployments to test each command quickly becomes tedious and time-consuming.  There is a description of how to set it up in the README.txt, but here are more details.

Aside from putting all the files under one single directory, the steps described here are the same as those in the README.txt file:

$ mkdir -p download
$ DLDIR=$(pwd)/download
$ rel="trusty"
$ arch=amd64
$ burl="http://cloud-images.ubuntu.com/$rel/current/"
$ for f in $rel-server-cloudimg-${arch}-root.tar.gz $rel-server-cloudimg-${arch}-disk1.img; do wget "$burl/$f" -O $DLDIR/$f; done
$ ( cd $DLDIR && qemu-img convert -O qcow2 $rel-server-cloudimg-${arch}-disk1.img $rel-server-cloudimg-${arch}-disk1.qcow2 )
$ BOOTIMG="$DLDIR/$rel-server-cloudimg-${arch}-disk1.qcow2"
$ ROOTTGZ="$DLDIR/$rel-server-cloudimg-${arch}-root.tar.gz"
$ mkdir src
$ bzr init-repo src/curtin
$ (cd src/curtin && bzr  branch lp:curtin trunk.dist )
$ (cd src/curtin && bzr  branch trunk.dist trunk)
$ cd src/curtin/trunk

You now have an environment you can use with Curtin to automate installations. You can test it by using the following command, which will start a VM and run "curtin install" in it.  Once you get the prompt, log in with:

username: ubuntu
password: passw0rd

$ sudo ./tools/launch $BOOTIMG --publish $ROOTTGZ -- curtin install "PUBURL/${ROOTTGZ##*/}"

Using Curtin in the development environment

To test Curtin in its environment, simply remove the trailing -- curtin install "PUBURL/${ROOTTGZ##*/}" from the launch command. Once logged in, you will find the Curtin executable in /curtin/bin:

ubuntu@ubuntu:~$ sudo -s
root@ubuntu:~# /curtin/bin/curtin --help
usage: main.py [-h] [--showtrace] [--verbose] [--log-file LOG_FILE]
               {block-meta,curthooks,extract,hook,in-target,install,net-meta,pack,swap}
               ...
...

positional arguments:
  {block-meta,curthooks,extract,hook,in-target,install,net-meta,pack,swap}

optional arguments:
  -h, --help           show this help message and exit
  --showtrace
  --verbose, -v
  --log-file LOG_FILE

Each of Curtin’s commands has its own help:

ubuntu@ubuntu:~$ sudo -s
root@ubuntu:~# /curtin/bin/curtin install --help
usage: main.py install [-h] [-c FILE] [--set key=val] [source [source ...]]

positional arguments:
  source                what to install

optional arguments:
  -h, --help            show this help message and exit
  -c FILE, --config FILE
                        read configuration from cfg
  --set key=val         define a config variable

Creating MAAS’s Curtin preseed commands

Now that we have our Curtin development environment available, we can use it to come up with a set of commands that will be fed to Curtin by Maas when a unit is created.

MAAS uses preseed files located in /etc/maas/preseeds on the MAAS server. The curtin_userdata preseed file is the one that we will use as a reference to build our set of partitioning commands.  During the testing phase, we will use the -c option of curtin install along with a configuration file that mimics the behavior of curtin_userdata.

We will also need to add a fake 1 TB disk to Curtin’s development environment so we can use it as a partitioning target. So in the development environment, issue the following commands:

$ qemu-img create -f qcow2 boot.disk 1000G
Formatting 'boot.disk', fmt=qcow2 size=1073741824000 encryption=off cluster_size=65536 lazy_refcounts=off

$ sudo ./tools/launch $BOOTIMG --publish $ROOTTGZ

username: ubuntu
password: passw0rd

ubuntu@ubuntu:~$ sudo -s
root@ubuntu:~# cat /proc/partitions

major minor  #blocks  name

 253        0    2306048 vda
 253        1    2305024 vda1
 253       16        426 vdb
 253       32 1048576000 vdc
  11        0    1048575 sr0

We can see that the 1000G /dev/vdc is indeed present.  Let’s now start to craft the conffile that will receive our partitioning commands. To test the syntax, we will use two simple commands:

root@ubuntu:~# cat << EOF > conffile
partitioning_commands:
  builtin: []
  01_partition_make_label: ["/sbin/parted", "/dev/vdc", "-s", "'","mklabel","msdos","'"]
  02_partition_make_part: ["/sbin/parted", "/dev/vdc", "-s", "'","mkpart","primary","1049K","538M","'"]
sources:
  01_primary: http://192.168.0.13:9923//trusty-server-cloudimg-amd64-root.tar.gz
EOF

The sources: statement is only there to avoid having to repeat the SOURCE portion of the curtin command; it is not to be used in the final MAAS configuration. The URL is the address of the server from which you are running the Curtin development environment.
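In case you need a quick way to serve that root tarball over HTTP (an assumption on my part; any web server will do), Python's built-in server, run from the download directory, matches the URL pattern above:

$ cd $DLDIR
$ python -m SimpleHTTPServer 9923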

WARNING

The builtin: [] statement is VERY important. It is there to override Curtin’s native builtin statement, which is to partition the disk using "block-meta simple".  If it is removed, Curtin will overwrite the partitioning with its default configuration. This comes straight from Scott Moser, the main developer behind Curtin.

Now let’s run the Curtin command :

root@ubuntu:~# /curtin/bin/curtin install -c conffile

Curtin will run its installation sequence and you will see a display which should be familiar if you have installed units with MAAS previously.  The command will most probably exit on an error, complaining that install-grub received an argument that was not a block device. We do not need to worry about that at the moment.

Once completed, have a look at the partitioning of the /dev/vdc device:

root@ubuntu:~# parted /dev/vdc print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
1      1049kB  538MB   537MB   primary   ext4

The partitioning commands were successful and we have the /dev/vdc disk properly configured.  Now that we know that the mechanism works, let’s try a complete configuration file. I have found that it is preferable to start with a fresh 1 TB disk:

root@ubuntu:~# poweroff

$ rm -f boot.disk

$ qemu-img create -f qcow2 boot.disk 1000G
Formatting 'boot.disk', fmt=qcow2 size=1073741824000 encryption=off cluster_size=65536 lazy_refcounts=off

$ sudo ./tools/launch $BOOTIMG --publish $ROOTTGZ

ubuntu@ubuntu:~$ sudo -s

root@ubuntu:~# cat << EOF > conffile 
partitioning_commands:
  builtin: [] 
  01_partition_announce: ["echo", "'### Partitioning disk ###'"]
  01_partition_make_label: ["/sbin/parted", "/dev/vda", "-s", "'","mklabel","msdos","'"]
  02_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","1049k","538M","'"]
  02_partition_set_flag: ["/sbin/parted", "/dev/vda", "-s", "'","set","1","boot","on","'"]
  04_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","538M","4538M","'"]
  05_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","extended","4538M","1000G","'"]
  06_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","25.5G","57G","'"]
  07_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","57G","89G","'"]
  08_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","89G","1000G","'"]
  09_partition_announce: ["echo", "'### Creating filesystems ###'"]
  10_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda1"]
  11_partition_label_fs: ["/sbin/e2label", "/dev/vda1", "cloudimg-boot"]
  12_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda2"]
  13_partition_label_fs: ["/sbin/e2label", "/dev/vda2", "cloudimg-rootfs"]
  14_partition_mount_fs: ["sh", "-c", "mount /dev/vda2 $TARGET_MOUNT_POINT"]
  15_partition_mkdir: ["sh", "-c", "mkdir $TARGET_MOUNT_POINT/boot"]
  16_partition_mount_fs: ["sh", "-c", "mount /dev/vda1 $TARGET_MOUNT_POINT/boot"]
  17_partition_announce: ["echo", "'### Filling /etc/fstab ###'"]
  18_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-rootfs / ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
  19_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-boot /boot ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
  20_partition_make_swap: ["sh", "-c", "mkswap /dev/vda6"]
  21_partition_make_fstab: ["sh", "-c", "echo '/dev/vda6 none swap sw 0 0' >> $OUTPUT_FSTAB"]
sources:
  01_primary: http://192.168.0.13:9923//trusty-server-cloudimg-amd64-root.tar.gz
EOF

You will note that I have added a few statements like ["echo", "'### Partitioning disk ###'"] that will display progress messages during the execution. Those are not necessary.

Now let’s try a second test with the complete configuration file:

root@ubuntu:~# /curtin/bin/curtin install -c conffile

root@ubuntu:~# parted /dev/vdc print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
1      1049kB  538MB   537MB   primary   ext4         boot
2      538MB   4538MB  4000MB  primary   ext4
3      4538MB  1000GB  995GB   extended               lba
5      25.5GB  57.0GB  31.5GB  logical
6      57.0GB  89.0GB  32.0GB  logical
7      89.0GB  1000GB  911GB   logical

We now have a correctly partitioned disk in our development environment. All we need to do now is carry that over to MAAS to see if it works as expected.

Customization of Curtin execution in Maas

The section "How preseeds work in MAAS" gives a good outline of how to select the name of a preseed file to restrict its usage to specific sub-groups of nodes.  In our case, we want our partitioning to apply to only one node: curtintest.  So, following the description in the section "User provided preseeds", we need to use the following template:

{prefix}_{node_arch}_{node_subarch}_{release}_{node_name}

The filename that we need to choose must end with our hostname, curtintest. The other elements are:

  • prefix: curtin_userdata
  • node_arch: amd64
  • node_subarch: generic
  • release: trusty
  • node_name: curtintest

So according to that, our filename must be curtin_userdata_amd64_generic_trusty_curtintest.

On the MAAS server, we do the following:

root@maas17:~# cd /etc/maas/preseeds

root@maas17:~# cp curtin_userdata curtin_userdata_amd64_generic_trusty_curtintest

We now edit this newly created file and add our previously crafted Curtin configuration just after the following block:

{{if third_party_drivers and driver}}
  early_commands:
  {{py: key_string = ''.join(['\\x%x' % x for x in map(ord, driver['key_binary'])])}}
  driver_00_get_key: /bin/echo -en '{{key_string}}' > /tmp/maas-{{driver['package']}}.gpg
  driver_01_add_key: ["apt-key", "add", "/tmp/maas-{{driver['package']}}.gpg"]
  driver_02_add: ["add-apt-repository", "-y", "deb {{driver['repository']}} {{node.get_distro_series()}} main"]
  driver_03_update_install: ["sh", "-c", "apt-get update --quiet && apt-get --assume-yes install {{driver['package']}}"]
  driver_04_load: ["sh", "-c", "depmod && modprobe {{driver['module']}}"]
  {{endif}}

The complete section should look just like this:

{{if third_party_drivers and driver}}
  early_commands:
  {{py: key_string = ''.join(['\\x%x' % x for x in map(ord, driver['key_binary'])])}}
   driver_00_get_key: /bin/echo -en '{{key_string}}' > /tmp/maas-{{driver['package']}}.gpg
   driver_01_add_key: ["apt-key", "add", "/tmp/maas-{{driver['package']}}.gpg"]
   driver_02_add: ["add-apt-repository", "-y", "deb {{driver['repository']}} {{node.get_distro_series()}} main"]
   driver_03_update_install: ["sh", "-c", "apt-get update --quiet && apt-get --assume-yes install {{driver['package']}}"]
   driver_04_load: ["sh", "-c", "depmod && modprobe {{driver['module']}}"]
  {{endif}}
  partitioning_commands:
   builtin: []
   01_partition_announce: ["echo", "'### Partitioning disk ###'"]
   01_partition_make_label: ["/sbin/parted", "/dev/vda", "-s", "'","mklabel","msdos","'"]
   02_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","1049k","538M","'"]
   02_partition_set_flag: ["/sbin/parted", "/dev/vda", "-s", "'","set","1","boot","on","'"]
   04_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","538M","4538M","'"]
   05_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","extended","4538M","1000G","'"]
   06_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","25.5G","57G","'"]
   07_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","57G","89G","'"]
   08_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","89G","1000G","'"]
   09_partition_announce: ["echo", "'### Creating filesystems ###'"]
   10_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda1"]
   11_partition_label_fs: ["/sbin/e2label", "/dev/vda1", "cloudimg-boot"]
   12_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda2"]
   13_partition_label_fs: ["/sbin/e2label", "/dev/vda2", "cloudimg-rootfs"]
   14_partition_mount_fs: ["sh", "-c", "mount /dev/vda2 $TARGET_MOUNT_POINT"]
   15_partition_mkdir: ["sh", "-c", "mkdir $TARGET_MOUNT_POINT/boot"]
   16_partition_mount_fs: ["sh", "-c", "mount /dev/vda1 $TARGET_MOUNT_POINT/boot"]
   17_partition_announce: ["echo", "'### Filling /etc/fstab ###'"]
   18_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-rootfs / ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
   19_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-boot /boot ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
   20_partition_make_swap: ["sh", "-c", "mkswap /dev/vda6"]
   21_partition_make_fstab: ["sh", "-c", "echo '/dev/vda6 none swap sw 0 0' >> $OUTPUT_FSTAB"]

Now that MAAS is properly configured for curtintest, complete the test by deploying a charm in a Juju environment where curtintest is properly commissioned.  In this example, curtintest is the only available node, so MAAS will systematically pick it up:

caribou@avogadro:~$ juju status
environment: maas17
machines:
  "0":
    agent-state: started
    agent-version: 1.24.0
    dns-name: state-server.maas
    instance-id: /MAAS/api/1.0/nodes/node-2555c398-1bf9-11e5-a7c4-525400214658/
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=1024M
    state-server-member-status: has-vote
services: {}
networks:
  maas-eth0:
    provider-id: maas-eth0
    cidr: 192.168.100.0/24

caribou@avogadro:~$ juju deploy mysql
Added charm "cs:trusty/mysql-25" to the environment.

Once the mysql charm has been deployed, connect to the unit to confirm that the partitioning was successful:

caribou@avogadro:~$ juju ssh mysql/0
ubuntu@curtintest:~$ sudo -s
root@curtintest:~# parted /dev/vda print
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
 
Number  Start   End     Size    Type      File system  Flags
1      1049kB  538MB   537MB   primary   ext4         boot
2      538MB   4538MB  4000MB  primary   ext4
3      4538MB  1000GB  995GB   extended               lba
5      25.5GB  57.0GB  31.5GB  logical
6      57.0GB  89.0GB  32.0GB  logical
7      89.0GB  1000GB  911GB   logical
ubuntu@curtintest:~$ swapon -s
Filename    Type       Size      Used  Priority
/dev/vda6   partition  31249404  0     -1

Conclusion

Customizing disks and partitions using Curtin is possible but currently not sufficiently documented. I hope that this write-up will be helpful.  Sustained development on Curtin is underway to improve these functionalities, so things will definitely get better.

Read more
Ben Howard

With Ubuntu 12.04.2, the kernel team introduced the idea of the "hardware enablement kernel" (HWE), originally intended to support new hardware for bare metal server and desktop. In fact, the documentation indicates that HWE images are not suitable for Virtual or Cloud Computing environments.  The thought was that cloud and virtual environments provide stable hardware and that the newer kernel features would not be needed.

Time has proven this assumption painfully wrong. Take, for example, the need for drivers in virtual environments. Several of the Cloud providers that we have engaged with have requested the use of the HWE kernel by default. On GCE, the HWE kernels provide support for their NVMe disks and multiqueue NICs. Azure has benefited from having an updated Hyper-V driver stack, resulting in better performance. When we engaged with VMware Air, the 12.04 kernel lacked the necessary drivers.

Perhaps more germane to our Cloud users is that containers are using kernel features. 12.04 users need to use the HWE kernel in order to make use of Docker. The new Ubuntu Fan project will be enabled for 14.04 via the HWE-V kernel for Ubuntu 14.04.3. If you use Ubuntu as your container host, you will likely consider using an HWE kernel.

And with that there has been a steady chorus of people requesting that we provide HWE image builds for AWS. The problem has never been the base builds; building the base bits is fairly easy. The hard part is that by adding these builds, each daily and release build goes from 96 images for AWS to 288 (needless to say, that is quite a problem). Over the last few weeks -- largely in my spare time -- I've been working out what it would take to deliver HWE images for AWS.

I am happy to announce that as of today, we are now building HWE-U (3.16) and HWE-V (3.19) Ubuntu 14.04 images for AWS. To be clear, we are not making any behavioral changes to the standard Ubuntu 14.04 images. Unless users opt into using an HWE image on AWS they will continue to get the 3.13 kernel. However, for those who want newer kernels, they now have the choice.

For the time being, only amd64 and i386 builds are being published. Over the next few weeks, we expect the HWE images to reach full feature parity, including release promotions and indexing. And I fully expect that the HWE-V version of 14.04 will include our recent Fan project once the SRUs complete.

Check them out at http://cloud-images.ubuntu.com/trusty/current/hwe-u and http://cloud-images.ubuntu.com/trusty/current/hwe-v.

As always, feedback is welcome.

Read more
Ben Howard

[UPDATE] The Image ID's have been updated with the latest builds which now include Docker 1.6.2, the latest LXD and of course the Ubuntu Fan driver. 

This week, Dustin Kirkland announced the Ubuntu Fan Project.  To steal from the description, "The Fan is not a software-defined network, and relies on neither distributed databases nor consensus protocols.  Rather, routes are calculated deterministically and traffic carries no additional overhead beyond routine IP tunneling.  Canonical engineers have already demonstrated The Fan operating at 5Gbps between two Docker containers on separate hosts."

My team at Canonical is responsible for the production of these images. Once the official SRU's land, I anticipate that we will publish an official stream over at cloud-images.ubuntu.com. But until then, check back here for images and updates. As always, if you have feedback, please hop into #server on FreeNode or send email.

GCE Images

Images for GCE have been published to the "ubuntu-os-cloud-devel" project.

The Images are:
  • daily-ubuntu-docker-lxd-1404-trusty-v20150620
  • daily-ubuntu-docker-lxd-1504-vivid-v20150621
To launch an instance, you might run:
$ gcloud compute instances create \
    --image-project ubuntu-os-cloud-devel \
    --image <IMAGE> <NAME>

You need to make sure that IPIP traffic (IP protocol 4) is enabled:
$ gcloud compute firewall-rules create fan2 --allow 4 --source-ranges 10.0.0.0/8

Amazon AWS Images

The AWS images are HVM-only, AMD64 builds. 


Version    Region          HVM-SSD       HVM-Instance
14.04-LTS  eu-central-1    ami-7e94ac63  ami-8e93ab93
           sa-east-1       ami-f943c1e4  ami-e742c0fa
           ap-northeast-1  ami-543c9b54  ami-b4298eb4
           eu-west-1       ami-4ae2a73d  ami-48e7a23f
           us-west-1       ami-fbd126bf  ami-6bd3242f
           us-west-2       ami-63585c53  ami-875357b7
           ap-southeast-2  ami-7de69c47  ami-1de19b27
           ap-southeast-1  ami-aca4a0fe  ami-2a9b9f78
           us-east-1       ami-95877efe  ami-e58b728e
15.04      eu-central-1    ami-9a94ac87  ami-ae93abb3
           sa-east-1       ami-1340c20e  ami-0743c11a
           ap-northeast-1  ami-9c3c9b9c  ami-42379042
           eu-west-1       ami-a2e2a7d5  ami-e4e7a293
           us-west-1       ami-4bd0270f  ami-1dd32459
           us-west-2       ami-f9585cc9  ami-1dd32459
           ap-southeast-2  ami-5de69c67  ami-01e19b3b
           ap-southeast-1  ami-74a5a126  ami-c89b9f9a
           us-east-1       ami-29f90042  ami-8d8a73e6

It is important to note that these images are only usable inside of a VPC. Newer AWS users are in VPC by default, but older users may need to create and update their VPC. For example:
$ ec2-authorize --cidr <CIDR_RANGE> --protocol 4 <SECURITY_GROUP>
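If you prefer the newer aws CLI over the legacy ec2-api-tools, the equivalent rule should look something like this (a sketch; substitute your own security group and CIDR range):

$ aws ec2 authorize-security-group-ingress \
    --group-name <SECURITY_GROUP> --protocol 4 --cidr <CIDR_RANGE>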


Read more
Dustin Kirkland

652 Linux containers running on a Laptop?  Are you kidding me???

A couple of weeks ago, at the OpenStack Summit in Vancouver, Canonical released the results of some scalability testing of Linux containers (LXC) managed by LXD.

Ryan Harper and James Page presented their results -- some 536 Linux containers on a very modest little Intel server (16GB of RAM), versus 37 KVM virtual machines.

Ryan has published the code he used for the benchmarking, and I've used it to reproduce the test on my dev laptop (Thinkpad x230, 16GB of RAM, Intel i7-3520M).

I managed to pack a whopping 652 Ubuntu 14.04 LTS (Trusty) containers on my Ubuntu 15.04 (Vivid) laptop!


The system load peaked at 1056 (!!!), but I was using merely 56% of 15.4GB of system memory.  Amazingly, my Unity desktop and Byobu command line were still perfectly responsive, as were the containers that I ssh'd into.  (Aside: makes me wonder if the Linux system load average is accounting for container processes correctly...)


Check out the process tree for a few hundred system containers here!

As for KVM, I managed to launch 31 virtual machines without KSM enabled, and 65 virtual machines with KSM enabled and working hard.  So that puts somewhere between 10x - 21x as many containers as virtual machines on the same laptop.

You can now repeat these tests, if you like.  Please share your results with #LXD on Google+ or Twitter!

I'd love to see someone try this in AWS, anywhere from an m3.small to an r3.8xlarge, and share your results ;-)

Density test instructions

## Install lxd
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-git-master
$ sudo apt-get update
$ sudo apt-get install -y lxd bzr
$ cd /tmp
## At this point, it's a good idea to logout/login or reboot
## for your new group permissions to get applied
## Grab the tests, disable the tools download
$ bzr branch lp:~raharper/+junk/density-check
$ cd density-check
$ mkdir lxd_tools
## Periodically squeeze your cache
$ sudo bash -x -c 'while true; do sleep 30; \
echo 3 | sudo tee /proc/sys/vm/drop_caches; \
free; done' &
## Run the LXD test
$ ./density-check-lxd --limit=mem:512m --load=idle release=trusty arch=amd64
## Run the KVM test
$ ./density-check-kvm --limit=mem:512m --load=idle release=trusty arch=amd64

As for the speed-of-launch test, I'll cover that in a follow-up post!

Can you contain your excitement?

Cheers!
Dustin

Read more
Michael Hall

Ubuntu is sponsoring the South East Linux Fest this year in Charlotte, North Carolina, and as part of that event we will have a room to use all day Friday, June 12, for an UbuCon. UbuCon is a mini-conference with presentations centered around Ubuntu the project and its community.

I’m recruiting speakers to fill the last three hour-long slots. If anybody is willing and able to attend the conference and wants to give a presentation to a room full of enthusiastic Ubuntu users, please email me at mhall119@ubuntu.com. Topics can be anything Ubuntu-related: design, development, client, cloud, using it, community, etc.

Read more
Dustin Kirkland


In November of 2006, Canonical held an "all hands" event, which included a team building exercise.  Several teams recorded "Ubuntu commercials".

On one of the teams, Mark "Borat" Shuttleworth amusingly proffered,
"Ubuntu make wonderful things possible, for example, Linux appliance, with Ubuntu preinstalled, we call this -- the fridge!"


Nine years later, that tongue-in-cheek parody is no longer a joke.  It's a "cold" hard reality!

GE Appliances, FirstBuild, and Ubuntu announced a collaboration around a smart refrigerator, available today for $749, running Snappy Ubuntu Core on a Raspberry Pi 2, with multiple USB ports and available in-fridge accessories.  We had one in our booth at IoT World in San Francisco this week!










While the fridge prediction is indeed pretty amazing, the line that strikes me most is actually "Ubuntu make(s) wonderful things possible!"

With emphasis on "things".  As in, "Internet of Things."  The possibilities are absolutely endless in this brave new world of Snappy Ubuntu.  And that is indeed wonderful.

So what are you making with Ubuntu?!?

:-Dustin

Read more
Michael Hall

Ubuntu has been talking a lot about convergence lately; it’s something that we believe is going to be revolutionary, and we want to be at the forefront of it. We love the idea of it, but so far we haven’t really had much experience with the reality of it.

I got my first taste of that reality two weeks ago, while at a work sprint in London. While Canonical has an office in London, it had other teams sprinting there, so the Desktop sprint I was at was instead held at a hotel. We planned to visit the office one day that week; it would be my first visit to any Canonical office, as well as my first time working at an actual office in several years. However, we also planned to meet up with the UK loco for release drinks that evening. This meant that we had to decide between leaving our laptops at the hotel, thus not having them to work on at the office, or taking them with us, but having to carry them around the pub all evening.

I chose to leave my laptop behind, but I did take my phone (Nexus 4 running Ubuntu) with me. After getting a quick tour of the office, I found a vacant seat at a desk, and pulled out my phone. Most of my day job can be done with the apps on my phone: I have email, I have a browser, I have a terminal with ssh, I can respond to our community everywhere they are active.

I spent the next couple of hours doing work, actual work, on my phone. The only problem I had was that I was doing it on a small screen, and I was burning through my battery. At one point I looked up and realized that the vacant desk I was sitting at was equipped with a laptop docking station. It also had a USB hub and an HDMI monitor cable available. If I had had a SlimPort cable for my phone, I might have been able to plug it into this docking station and both power my phone and get a bigger screen to work with.

If I could have done that, I would have achieved the full reality of convergence, and it would have been just like if I had brought my laptop with me. Only with this I was able to simply slide it into my pocket when it was time to leave for drinks. It was tantalizingly close, I got a little taste of what it’s going to be like, and now I’m craving more of it.

Read more
Michael Hall

A couple of years ago the Ubuntu download page introduced a way for users to make a financial contribution to the ongoing development of Ubuntu and its surrounding projects and community. Later a program was established within Canonical to make the money donated specifically for supporting the community available directly to members of the community, who would use it to benefit the wider project.

During the last month, at the request of members of the Ubuntu community and the Community Council, we have undertaken a review of this program. While conducting a more thorough analysis of what was donated to us and when, it was discovered that we made an error in our initial reporting, which has unfortunately affected the accuracy of all subsequent reports as well.

What Happened?

Our first report, published in May of 2014, combined the amounts donated to the community slider and the amounts dispersed to the community during the previous four financial quarters. In that report we listed the amount donated from April 2013 to June 2013 as being a total of $34,353.63. However, when looking over all of the quarterly donations going back to the start of the program, we realized that this amount actually covered donations made from April 2013 all the way to October 2013.

This means that the figure contains both the amount donated during that Apr-Jun quarter, as well as duplicating the amounts listed as being donated for the Jul-Sep quarter, and a part of the Oct-Dec quarter. The actual amount donated during just the Apr-Jun 2013 quarter was $15,726.72. As a result of this, and the fact that it affected the carry-over balances for all subsequent reports, I have gone back and corrected all of these to reflect the correct figures.

Now for the questions:

Where are the updated reports?

The reports have not moved, you can still access them from the previously published URLs, and they are also listed on a new Reports page on the community website. The original report data has been preserved in a copy which is linked to at the top of each revised report.

Where did the money go?

No money has been lost or taken away from the program; this change is only a correction to the actual state of things. We had originally over-stated the amount that was donated, due to an error in reading the raw donation data at the time the first report was written.

How could a mistake like this happen?

The information we get is a summary of a summary of the raw data. At some point in the process the wrong number was put in the wrong place. All of these reports are manually written and verified, which often catches errors such as this, but in the very first report this error was missed.

Are these numbers trustworthy?

I understand that a reduction in the balance number, in conjunction with questions being raised about the operation of the program, will lead some people to question the honesty of this change. But the fact remains that we were asked to investigate this, we did find a discrepancy, and correcting it publicly is the right thing for us to do, regardless of how it may look.

Is the community funding program in trouble?

Absolutely not. Even with this correction, there has been more money donated to the community slider than we have been able to use. There’s still a lot more good that can be done; if you think you have a good use for some of it, please fill out a request.

Read more
Ben Howard

I am pleased to announce initial Vagrant images [1, 2]. These images are bit-for-bit the same as the KVM images, but have a Cloud-init configuration that allows Snappy to work within the Vagrant workflow.

Vagrant enables a cross platform developer experience on MacOS, Windows or Linux [3].

Note: due to the way that Snappy works, shared file systems within Vagrant are not possible at this time. We are working on getting shared file system support enabled, but it will take us a little while to get going.

If you want to use the Vagrant packaged in the Ubuntu archives, run the following in a terminal:

  • sudo apt-get -y install vagrant
  • cd <WORKSPACE>
  • vagrant init http://goo.gl/DO7a9W 
  • vagrant up
  • vagrant ssh
If you use Vagrant from [4] (i.e., on Windows or Mac, or to install the latest Vagrant), then you can run:
  • vagrant init ubuntu/ubuntu-15.04-snappy-core-edge-amd64
  • vagrant up
  • vagrant ssh

These images are a work in progress. If you encounter any issues, please report them to "snappy-devel@lists.ubuntu.com" or ping me (utlemming) on Launchpad.net

---

[1] http://cloud-images.ubuntu.com/snappy/15.04/core/edge/current/core-edge-amd64-vagrant.box
[2] https://atlas.hashicorp.com/ubuntu/boxes/ubuntu-15.04-snappy-core-edge-amd64
[3] https://docs.vagrantup.com/v2/why-vagrant/index.html
[4] https://www.vagrantup.com/downloads.html

Read more
Michael Hall

Way back at the dawn of the open source era, Richard Stallman wrote the Four Freedoms, which defined what it meant for software to be free. These are:

  • Freedom 0: The freedom to run the program for any purpose.
  • Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
  • Freedom 2: The freedom to redistribute copies so you can help your neighbor.
  • Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

For nearly three decades now they have been the foundation for our movement, the motivation for many of us, and the guiding principle for the decisions we make about what software to use.

But outside of our little corner of humanity, these freedoms are not seen as particularly important. In fact, the vast majority of people are not only happy to use software that violates them, but will often prefer to do so. I don’t even feel the need to provide supporting evidence for this claim, as I’m sure all of you have been on one side or the other of a losing argument about why using open source software is important.

The problem, it seems, is that people who don’t plan on exercising any of these freedoms, from lack of interest or lack of ability, don’t place the same value on them as those of us who do. That’s why software developers are more likely than non-developers to prefer open source: they might actually use those freedoms at some point.

But the people who don’t see a personal value in free software are missing a larger, more important freedom. One implied by the first four, though not specifically stated. A fifth freedom if you will, which I define as:

  • Freedom 4: The freedom to have the program improved by a person or persons of your choosing, and make that improvement available back to you and to the public.

Because even though the vast majority of proprietary software users will never be interested in studying or changing the source of the software they use, they will likely all, at some point in time, ask someone else if they can fix it. Who among us hasn’t had a friend or relative ask us to fix their Windows computer? And the true answer is that, without having the four freedoms (and implied fifth), only Microsoft can truly “fix” their OS, the rest of us can only try and undo the damage that’s been done.

So the next time you’re trying to convince someone of the importance of free and open software, and they chime in with the fact that they don’t want to change it, try pointing out that by using proprietary code they’re limiting their options for getting it fixed when it inevitably breaks.

Read more
Kyle Nitzsche

AptBrowser QML/C++ App

I've made a QML/C++ app called aptBrowser as an exercise in:

  • QML declarative GUI that drives
  • C++ backend threads

That is, the GUI provides buttons (five) that kick off C++ threads that do the backend work and provide the results back to QML.

So the GUI is always responsive (non-blocking).

What aptbrowser does

The user enters a Debian package name (and is told if it is not valid) and taps one of five buttons that do the following:
  • Show the packages this package depends on ("Depends")
  • Show the packages this package recommends ("Recommends")
  • Show the packages that depend on this package ("Parent Depends")
  • Show that packages that recommend this package ("Parent Recommends")
  • Show the apt-cache policy for this package ("Policy")
The data for all but the last ("Policy") are returned as flickable lists of buttons. When you click any one, it becomes the current package and the GUI and displayed data adjust appropriately.

When you click any of the buttons, the orange indicator square to its left turns purple and starts spinning, and when the C++ backend returns data, its indicator turns orange again and stops spinning.

Note that the Parent Depends and Parent Recommends actions can take a long time. This has nothing to do with this app; it is simply how long it takes to first get a package's parents and then, for each, find its type of relationship (depends or recommends) to our package of interest. Querying the apt cache is time-consuming.

Where is aptbrowser

Store

Because the app queries the apt cache, it must run unconfined at the moment, and therefore it cannot go into the store.

The click

This is an armhf click package for the ubuntu-sdk.14.10 framework (compiled against vivid).

The source

  • bzr branch lp:aptbrowser

Screenshots



Read more
Michael Hall

A couple of weeks ago I had the opportunity to attend the thirteenth Southern California Linux Expo, more commonly known as SCaLE 13x. It was my first time back in five years, since I attended 9x, and my first time as a speaker. I had a blast at SCaLE, and a wonderful time with UbuCon. If you couldn’t make it this year, it should definitely be on your list of shows to attend in 2016.

UbuCon

Thanks to the efforts of Richard Gaskin, we had a room all day Friday to hold an UbuCon. For those of you who haven’t attended an UbuCon before, it’s basically a series of presentations by members of the Ubuntu community on how to use it, contribute to it, or become involved in the community around it. SCaLE was one of the pioneering host conferences for these, and this year they provided a double-sized room for us to use, which we still filled to capacity.

I was given the chance to give not one but two talks during UbuCon, one on community and one on the Ubuntu phone. We also had presentations from my former manager and good friend Jono Bacon, current coworkers Jorge Castro and Marco Ceppi, and inspirational community members Philip Ballew and Richard Gaskin.

I’d like to thank Richard for putting this all together, and for taking such good care of those of us speaking (he made sure we always had mints and water). UbuCon was a huge success because of the amount of time and work he put into it. Thanks also to Canonical for providing us, on rather short notice, a box full of Ubuntu t-shirts to give away. And of course thanks to the SCaLE staff and organizers for providing us the room and all of the A/V equipment in it to use.

The room was recorded all day, so each of these sessions can be watched now on YouTube. My own talks are at 4:00:00 and 5:00:00.

Ubuntu Booth

In addition to UbuCon, we also had an Ubuntu booth in the SCaLE expo hall, which was registered and operated by members of the Ubuntu California LoCo team. These guys were amazing, they ran the booth all day over all three days, managed the whole setup and tear down, and did an excellent job talking to everybody who came by and explaining everything from Ubuntu’s cloud offerings, to desktops and even showing off Ubuntu phones.

Our booth wouldn’t have happened without the efforts of Luis Caballero, Matt Mootz, Jose Antonio Rey, Nathan Haines, Ian Santopietro, George Mulak, and Daniel Gimpelevich, so thank you all so much! We also had great support from Carl Richell at System76, who let us borrow 3 of their incredible laptops running Ubuntu to show off our desktop; Canonical, which loaned us 2 Nexus 4 phones running Ubuntu as well as one of the Orange Box cloud demonstration boxes; and Michael Newsham from TierraTek, who sent us a fanless PC and NAS, which we used to display a constantly-repeating video (from Canonical’s marketing team) showing the Ubuntu phone’s Scopes on a television monitor provided to us by Eäär Oden at Video Resources. Oh, and of course Stuart Langridge, who gave up his personal, first-edition Bq Ubuntu phone for the entire weekend so we could show it off at the booth.

Like Ubuntu itself, this booth was not the product of just one organization’s work, but the combination of efforts and resources from many different, but connected, individuals and groups. We are what we are, because of who we all are. So thank you all for being a part of making this booth amazing.

Read more
Ben Howard

Back when we announced that the Ubuntu 14.04 LTS Cloud Images on Azure were using the Hardware Enablement Kernel (HWE), the immediate feedback was "what about 12.04?"

Well, the next Ubuntu 12.04 Cloud Images on Microsoft Azure will start using the HWE kernel. We have been working with Microsoft to validate using the 3.13 kernel on 12.04 and are pleased with the results and the stability. We spent a lot of time thinking about and testing this change, and in conference with the Ubuntu Kernel, Foundations and Cloud Image teams, feel this change will give the best experience on Microsoft Azure.

By default, the HWE kernel is used on official images for Ubuntu 12.04 on VMware Air, Google Compute Engine, and now Microsoft Azure.

Any 12.04 image published to Azure with a serial later than 20140225 will default to the new HWE kernel.

Users who want to upgrade their existing instance can simply run:
  • sudo apt-get update
  • sudo apt-get install linux-image-hwe-generic linux-cloud-tools-generic-lts-trusty
  • reboot

Read more
Dustin Kirkland

    Gratuitous picture of my pets, the day after we rescued them
    The PetName libraries (Shell, Python, Golang) can generate infinite combinations of human readable UUIDs


    Some Background

    In March 2014, when I first started looking after MAAS as a product manager, I raised a minor feature request in Bug #1287224, noting that the random, 5-character hostnames that MAAS generates are not ideal. You can't read them or pronounce them or remember them easily. I'm talking about hostnames like: sldna, xwknd, hwrdz or wkrpb. From that perspective, they're not very friendly. Certainly not very Ubuntu.

    We're not alone, in that respect. Amazon generates forgettable instance names like i-15a4417c, along with most virtual machine and container systems.


    Meanwhile, there is a reasonably well-known concept -- Zooko's Triangle -- which says that names should be:
    • Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable
    • Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
    • Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
    And, of course we know what XKCD has to say on a somewhat similar matter :-)

So I proposed a few different ways of automatically generating those names, modeled mostly after Ubuntu's own beloved code-naming scheme -- Adjective Animal. To get the number of combinations high enough to cover any reasonably sized MAAS deployment, though, we used Adjective Noun instead of Adjective Animal.

I collected an Adjective list and a Noun list from a blog run by moms, in the interest of having a nice, soft, friendly, non-offensive source of words.

For the most part, the feature served its purpose. We now get memorable, pronounceable names. However, we get a few oddballs in there from time to time. Most are humorous. But some combinations would prove, in fact, to be inappropriate, or perhaps even offensive to some people.

    Accepting that, I started thinking about other solutions.

In the meantime, I realized that Docker had recently launched something similar: their NamesGenerator, which pairs an Adjective with a famous scientist's last name (except they have explicitly blacklisted boring_wozniak, because "Steve Wozniak is not boring", of course!).


Similarly, GitHub itself now also "suggests" random repo names.



    I liked one part of the Docker approach better -- the use of proper names, rather than random nouns.

On the other hand, their approach is hard-coded into the Docker Golang source itself, and not easily usable or portable elsewhere.

Moreover, there are only a few dozen Adjectives (57) and Names (76), yielding only about 4K combinations (4,332) -- which is not nearly enough for MAAS's purposes, where we're shooting for 16M+ with minimal collisions (ie, covering a Class A network).

    Introducing the PetName Libraries

I decided to scrap the Nouns list, and instead build a Names list. I started with Last Names (like Docker) but ultimately focused on First Names, and built a list of about 6,000 names from public census data. I also built a new list of nearly 38,000 Adjectives.

The combination actually works pretty well! While smelly-Susan isn't particularly charming, it's certainly not an ad hominem attack targeted at any particular Susan! And 6,000 x 38,000 gives us roughly 228 million unique combinations!

I also thought about how to make it infinitely extensible... The simple rules of English allow Adjectives to modify Nouns, while Adverbs can recursively modify other Adverbs or Adjectives. How convenient!

    So I built a word list of Adverbs (13,000) as well, and added support for specifying the "number" of words in a PetName.
    1. If you want 1, you get a random Name 
    2. If you want 2, you get a random Adjective followed by a Name 
    3. If you want 3 or more, you get N-2 Adverbs, an Adjective and a Name 
    Oh, and the separator is now optional, and can be any character or string, with a default of a hyphen, "-".
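To make those three rules concrete, here is a minimal shell sketch of the same idea using shuf and paste. It assumes the word lists live under /usr/share/petname/ with the file names shown (both the path and the file names are assumptions for illustration; petname itself implements this natively):

$ # N-2 adverbs, then one adjective, then one name, joined by SEP
$ W=4; SEP="-"
$ { shuf -n $((W-2)) /usr/share/petname/adverbs.txt; \
    shuf -n 1 /usr/share/petname/adjectives.txt; \
    shuf -n 1 /usr/share/petname/names.txt; } | paste -sd "$SEP"
snobbily-oracularly-vibrant-Chandler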

    In fact:
• 2 words will generate over 221 million unique combinations, over 2^27 combinations
• 3 words will generate over 2.8 trillion unique combinations, over 2^41 combinations (more than 32-bit space)
• 4 words can generate over 2^55 combinations
• 5 words can generate over 2^68 combinations (more than 64-bit space)
Interestingly, you need 10 words to cover 128-bit space (each additional adverb adds about log2(13,000) ≈ 13.7 bits, so nine words top out around 123 bits)!  So it's

    unstoutly-clashingly-assentingly-overimpressibly-nonpermissibly-unfluently-chimerically-frolicly-irrational-wonda

    versus

    b9643037-4a79-412c-b7fc-80baa7233a31

    Shell

So once the algorithm was spec'd out, I built and packaged a simple shell utility and text word lists, called petname, which are published on Launchpad.
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

    $ sudo apt-add-repository ppa:petname/ppa
    $ sudo apt-get update

    And:
    $ sudo apt-get install petname
    $ petname
    itchy-Marvin
    $ petname -w 3
    listlessly-easygoing-Radia
    $ petname -s ":" -w 5
    onwardly:unflinchingly:debonairly:vibrant:Chandler

    Python

That's only really useful from the command line, though. In MAAS, we'd want this in a native Python library. So it was really easy to create python-petname, whose source is also published on Launchpad.
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

    $ sudo apt-add-repository ppa:python-petname/ppa
    $ sudo apt-get update

    And:
    $ sudo apt-get install python-petname
    $ python-petname
    flaky-Megan
    $ python-petname -w 4
    mercifully-grimly-fruitful-Salma
    $ python-petname -s "" -w 2
    filthyLaurel

    Using it in your own Python code looks as simple as this:

$ python
>>> import petname
>>> foo = petname.Generate(3, "_")
>>> print(foo)
boomingly_tangible_Mikayla

    Golang


In the way that NamesGenerator is useful to Docker, I thought a Golang library might be useful for us in LXD (and perhaps even usable by Docker or others too), so I created golang-petname, published at github.com/dustinkirkland/golang-petname.
Of course you can use "go get" to fetch the Golang package:

    $ export GOPATH=$HOME/go
    $ mkdir -p $GOPATH
    $ export PATH=$PATH:$GOPATH/bin
    $ go get github.com/dustinkirkland/golang-petname

The packages are also already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

    $ sudo apt-add-repository ppa:golang-petname/ppa
    $ sudo apt-get update

    And:
    $ sudo apt-get install golang-petname
    $ golang-petname
    quarrelsome-Cullen
    $ golang-petname -words=1
    Vivian
    $ golang-petname -separator="|" -words=10
    snobbily|oracularly|contemptuously|discordantly|lachrymosely|afterwards|coquettishly|politely|elaborate|Samir

    Using it in your own Golang code looks as simple as this:

package main

import (
    "flag"
    "fmt"
    "math/rand"
    "time"

    "github.com/dustinkirkland/golang-petname"
)

func main() {
    // Parse command-line flags (kept from the original example).
    flag.Parse()
    // Seed the RNG so each run produces a different petname.
    rand.Seed(time.Now().UnixNano())
    // Print a 2-word petname with no separator, eg "filthyLaurel".
    fmt.Println(petname.Generate(2, ""))
}
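Assuming you save the example above as main.go under your GOPATH, a quick run produces a fresh two-word petname each time (output will vary):

$ go run main.go
filthyLaurel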
    Gratuitous picture of my pets, 7 years later.
    Cheers,
    happily-hacking-Dustin

    Read more
    jdstrand

Most of this has been discussed on mailing lists, in blog entries, etc, while developing Ubuntu Touch, but I wanted to write up something that ties these conversations together for Snappy. This will provide background for the conversations surrounding hardware access for snaps that will be happening soon on the snappy-devel mailing list.

    Background

    Ubuntu Touch has several goals that all apply to Snappy:

    • we want system-image upgrades
    • we want to replace the distro archive model with an app store model for Snappy systems
    • we want developers to be able to get their apps to users quickly
    • we want a dependable application lifecycle
    • we want the system to be easy to understand and to develop on
    • we want the system to be secure
    • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

    Snappy adds a few things to the above (that pertain to this conversation):

    • we want the system to be bulletproof (transactional updates with rollbacks)
    • we want the system to be easy to use for system builders
    • we want the system to be easy to use and understand for admins

    Let’s look at what all these mean more closely.

    system-image upgrades

    • we want system-image upgrades
    • we want the system to be bulletproof (transactional updates with rollbacks)

    We want system-image upgrades so updates are fast, reliable and so people (users, admins, snappy developers, system builders, etc) always know what they have and can depend on it being there. In addition, if an upgrade goes bad, we want a mechanism to be able to rollback the system to a known good state. In order to achieve this, apps need to work within the system and live in their own area and not modify the system in unpredictable ways. The Snappy FHS is designed for this and the security policy enforces that apps follow it. This protects us from malware, sure, but at least as importantly, it protects us from programming errors and well-intentioned clever people who might accidentally break the Snappy promise.

    app store

    • we want to replace the distro archive model with an app store model
    • we want developers to be able to get their apps to users quickly

Ubuntu is a fantastic distribution and we have a wonderfully rich archive of software that is refreshed on a cadence. However, the traditional distro model has a number of drawbacks, and arguably the most important one is that software developers have an extremely high barrier to overcome to get their software into users’ hands on their own time-frame. The app store model greatly helps developers and users desiring new software because it gives developers the freedom and ability to get their software out there quickly and easily, which is why Ubuntu Touch is doing this now.

In order to enable developers in the Ubuntu app store, we’ve developed a system where a developer can upload software and have it available to users in seconds with no human review, intervention or snags. We also want users to be able to trust what’s in Ubuntu’s store, so we’ve created store policies that understand the Ubuntu snappy system such that apps do not require any manual review so long as the developer follows the rules. However, the Ubuntu Core system itself is completely flexible -- people can install apps that are tightly confined, loosely confined, unconfined, whatever (more on this, below). In this manner, people can develop snaps for their own needs and distribute them however they want.

It is the Ubuntu store policy that dictates what is in the store. The existing store policy is in place to improve the situation; it is based on our experiences with the traditional distro model and with attempts to build app store-like experiences on top of it (eg, MyApps).

    application lifecycle

    • dependable application lifecycle

    This has not been discussed as much with Snappy for Ubuntu Core, but Touch needs to have a good application lifecycle model such that apps cannot run unconstrained and unpredictably in the background. In other words, we want to avoid problems with battery drain and slow systems on Touch. I think we’ve done a good job so far on Touch, and this story is continuing to evolve.

    (I mention application lifecycle in this conversation for completeness and because application lifecycle and security work together via the app’s application id)

    security

    • we want the system to be secure
    • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Everyone wants a system that they trust and that is secure, and security is one of the core tenets of Snappy systems. For Ubuntu Touch, we’ve created a system that is secure, that is easy to use and understand by users, and that still honors relevant, meaningful Linux traditions. For Snappy, we’ll be adding several additional security features (eg, seccomp, controlled abstract socket communication, firewalling, etc).

Our security story and app store policies give us something that is between Apple and Google. We have a strong security story that has a number of similarities to Apple, but a lightweight store policy akin to Google Play. In addition to that, our trust model is that apps not needing manual review are untrusted by the OS and have limited access to the system. On Touch we use tasteful, contextual prompting so the user may trust the apps to do things beyond what the OS allows on its own (a simple example: an app needs access to location; the user is prompted at the time of use; the user answers, and the decision is remembered next time).

    Snappy for Ubuntu Core is different not only because the UI supports a CLI, but also because we’ve defined a Snappy for Ubuntu Core user that is able to run the ‘snappy’ command as someone who is an admin, a system builder, a developer and/or someone otherwise knowledgeable enough to make a more informed trust decision. (This will come up again later, below)

    easy to use

    • we want the system to be easy to understand and to develop on
    • we want the system to be easy to use for system builders
    • we want the system to be easy to use and understand for admins

We want a system that is easy to use and understand. It is key that developers are able to develop on it, that system builders are able to get their work done, and that admins can install and use the apps from the store.

    For Ubuntu Touch, we’ve made a system that is easy to understand and to develop on with a simple declarative permissions model. We’ll refine that for Snappy and make it easy to develop on too. Remember, the security policy is there not just so we can be ‘super secure’ but because it is what gives us the assurances needed for system upgrades, a safe app store and an altogether bulletproof system.

    As mentioned, the system we have designed is super flexible. Specifically, the underlying system supports:

    1. apps working wholly within the security policy (aka, ‘common’ security policy groups and templates)
    2. apps declaring specific exceptions to the security policy
    3. apps declaring to use restricted security policy
    4. apps declaring to run (effectively) unconfined
    5. apps shipping hand-crafted policy (that can be strict or lenient)

    (Keep in mind the Ubuntu App Store policy will auto-accept apps falling under ‘1’ and trigger manual review for the others)
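As a rough illustration of where these choices live, here is a sketch of the security-related fields in a snap's meta/package.yaml. The field names (caps, security-template, security-override, security-policy) follow my recollection of the early Snappy packaging format and should be treated as assumptions rather than authoritative syntax:

$ cat meta/package.yaml
services:
  - name: myservice
    start: bin/myservice
    # 1. work wholly within the common policy ('caps' policy groups,
    #    plus an optional stock 'security-template')
    caps:
      - network-client
    # 2-4. or declare exceptions, a different stock template, or
    #      (effectively) unconfined:
    # security-override: ...
    # security-template: unconfined
    # 5. or ship hand-crafted policy (strict or lenient):
    # security-policy:
    #   apparmor: meta/myservice.apparmor
    #   seccomp: meta/myservice.seccomp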

The above all works today (though it isn’t always friendly -- we’re working on that) and the developer is in control. As such, Snappy developers have a plethora of options and can create snaps with security policy for their needs. When the developer wants to ship the app and make it available to all Snappy users via the Ubuntu App Store, then the developer may choose to work within the system to have automated reviews or choose not to and manage the process via manual reviews/commercial relationship with Canonical.

    Moving forward

The above works really well for Ubuntu Touch, but today there is too much friction with regard to hardware access. We will make this experience better without compromising on any of our goals. How do we put this all together, today, so people can get stuff done with snappy without sacrificing our goals, making it harder on ourselves in the future, or otherwise opening Pandora’s box? We don’t want to relax our security policy, because then we couldn’t make the bulletproof assurances we are striving for, and it would be hard to tighten the policy later. We could add some temporary security policy that grants only certain accesses (eg, serial devices), but, while useful, this is too inflexible. We also don’t want apps to declare the accesses themselves and automatically gain the necessary security policy, because this (potentially) privileged access would then be hidden from the Snappy for Ubuntu Core user.

    The answer is simple when we remember that the Snappy for Ubuntu Core user (ie, the one who is able to run the snappy command) is knowledgeable enough to make the trust decision for giving an app access to hardware. In other words, let the admin/developer/system builder be in control.

    immediate term

    The first thing we are going to do is unblock people and adjust snappy to give the snappy core user the ability to add specific device access to snap-specific security policy. In essence you’ll install a snap, then run a command to give the snap access to a particular device, then you’re done. This simple feature will unblock developers and snappy users immediately while still supporting our trust-model and goals fully. Plus it will be worth implementing since we will likely always want to support this for maximum flexibility and portability (since people can use traditional Linux APIs).
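As a sketch of that workflow (the exact subcommand name and semantics are still to be settled on the mailing list, so treat 'hw-assign' and the device path here as assumptions):

$ # install a snap, then grant it access to one specific device
$ sudo snappy install example-app
$ sudo snappy hw-assign example-app /dev/ttyUSB0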

    The user experience for this will be discussed and refined on the mailing list in the coming days.

    short term

After that, we’ll build on this and explore ways to make the developer and user experience better through integration with the OEM part and ways of interacting with the underlying system, so that the user doesn’t necessarily have to know the device name to add, but can instead be given smart choices (this can have tie-ins to the web interface for snappy too). We’ll want to be thinking about hotpluggable devices as well.

    Since this all builds on the concept of the immediate term solution, it also supports our trust-model and goals fully and is relatively easy to implement.

    future

    Once we have the above in place, we should have a reasonable experience for snaps needing traditional device access. This will give us time to evaluate how people are accessing hardware and see if we can make things even better by using frameworks and/or a hardware abstraction layer. In this manner, snaps can program to an easy to use API and the system can mediate access to the underlying hardware via that API.


    Filed under: canonical, security, ubuntu, ubuntu-server, uncategorized

    Read more
    Ben Howard

One of the perennial problems in the Cloud is knowing which image is the most current and where to find it. Some Clouds provide a nice GUI console, an API, or some combination. But what has been missing is a "dashboard" showing Ubuntu across multiple Clouds.


Screenshot: https://cloud-images.ubuntu.com/locator
In that light, I am pleased to announce that we have a new beta Cloud Image Finder. This page shows where official Ubuntu images are available. As with all betas, we have some kinks to work out, like gathering up links for our Cloud Partners (so clicking an Image ID launches an image). I envision that in the future this locator page will become the default landing page for our Cloud Image Page.



The need for this page became painfully apparent yesterday as I was working through the fallout of the Ghost vulnerability (aka CVE-2015-0235). The Cloud Image team had spent a good amount of time pushing our images to AWS, Azure, GCE, and Joyent, and then notifying our partners like Brightbox, DreamCompute, CloudSigma and VMware of new builds. I realized that we needed a single place for our users to just look and see where the builds are available. And so I hacked up the EC2 Locator page to display other clouds.

Please note: this new page only shows stable releases. We push a lot of images and did not want to confuse things by showing betas, alphas, dailies or development builds. Rather, this page will only show images that have been put through the complete QA process and are ready for production workloads.

This new locator page is backed by Simple Streams, our machine-formatted data service. Simple Streams provides a uniform way of locating images across clouds. Essentially, our new Locator Page is just a viewer of the Simple Streams data.
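If you'd rather consume the data directly than through the page, the simplestreams tools can query the same feeds from the command line. A sketch (the stream URL and filter keys follow the published Ubuntu release streams, but double-check them against the index for your cloud):

$ sudo apt-get install simplestreams
$ sstream-query --max=1 \
    http://cloud-images.ubuntu.com/releases/streams/v1/com.ubuntu.cloud:released:aws.json \
    release=trusty arch=amd64 region=us-east-1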

    Hopefully our users will find this new page useful. Feedback is always welcome. Please feel free to drop me a line (utlemming @ ubuntu dot com). 

    Read more
    Ben Howard

A few years ago, when our fine friends on the kernel team introduced the idea of the "hardware enablement" (HWE) kernel, those of us in the Cloud world looked at it as a curiosity. We thought that, by and large, the HWE kernel would not be needed or wanted for virtual Cloud instances.

    And we were wrong.

So wrong, in fact, that the HWE kernel has found its way into the Vagrant Cloud Images, VMware's vCHS, and Google Compute Engine as the default kernel for the Certified Images. The main reason for these requests is that virtual hardware moves at a fairly quick pace. Unlike traditional hardware, virtual hardware can be fixed and patched at the speed that software can be deployed.

The feedback in regards to Azure has been the same: users and Microsoft have asked for the HWE kernel consistently. Microsoft has validated that the HWE kernel (3.16) running Ubuntu 14.04 on Windows Azure passes their validation testing. In our own testing, we have validated that the 3.16 kernel works quite well in Azure.

For Azure users, the 3.16 HWE kernel brings SMB 2.1 file-copy support and updated LIS (Linux Integration Services) drivers.

    Therefore, starting with the latest Windows Azure image [1], all the Ubuntu 14.04 images will track the latest hardware enablement kernel. That means that all the goodness in Ubuntu 14.10's kernel will be the default for 14.04 users launching our official images on Windows Azure.

If you want to install the new HWE (lts-utopic) kernel on your existing instance(s), simply run:

    • sudo apt-get update
    • sudo apt-get install linux-image-virtual-lts-utopic linux-lts-utopic-cloud-tools-common walinuxagent
    • sudo reboot


    [1] b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB

    Read more
    Dustin Kirkland


    With the recent introduction of Snappy Ubuntu, there are now several different ways to extend and update (apt-get vs. snappy) multiple flavors of Ubuntu (Core, Desktop, and Server).

    We've put together this matrix with a few examples of where we think Traditional Ubuntu (apt-get) and Transactional Ubuntu (snappy) might make sense in your environment.  Note that this is, of course, not a comprehensive list.

Traditional (apt-get):
• Ubuntu Core: Minimal Docker and LXC images
• Ubuntu Desktop: Desktop, Laptop, Personal Workstations
• Ubuntu Server: Baremetal, MAAS, OpenStack, General Purpose Cloud Images

Transactional (snappy):
• Ubuntu Core: Minimal IoT Devices and Micro-Services Architecture Cloud Images
• Ubuntu Desktop: Touch, Phones, Tablets
• Ubuntu Server: Comfy, Human Developer Interaction (over SSH) in an atomically updated environment

    I've presupposed a few of the questions you might ask, while you're digesting this new landscape...

    Q: I'm looking for the smallest possible Ubuntu image that still supports apt-get...
    A: You want our Traditional Ubuntu Core. This is often useful in building Docker and LXC containers.

    Q: I'm building the next wearable IoT device/drone/robot, and perhaps deploying a fleet of atomically updated micro-services to the cloud...
    A: You want Snappy Ubuntu Core.

Q: I want to install the best damn Linux on my laptop, desktop, or personal workstation, with industry-best security practices, 30K+ freely available open source packages, and extensive support for hardware devices and proprietary add-ons...
    A: You want the same Ubuntu Desktop that we've been shipping for 10+ years, on time, every time ;-)

Q: I want that same converged, tasteful Ubuntu experience on my personal smart devices, like my Phones and Tablets...
A: You want Ubuntu Touch, which is a graphical, human-interface-focused expression of Snappy Ubuntu.

Q: I'm deploying Linux onto bare metal servers at scale in the data center, perhaps building IaaS clouds using OpenStack or PaaS clouds using Cloud Foundry, and I'm launching general purpose Linux server instances in public clouds (like AWS, Azure, or GCE) and private clouds...
    A: You want the traditional apt-get Ubuntu Server.

Q: I'm developing and debugging applications, services, or frameworks for Snappy Ubuntu devices or cloud instances...
A: You want Comfy Ubuntu Server, which is a command line human interface extension of Snappy Ubuntu, with a number of conveniences and amenities (ssh, byobu, manpages, editors, etc.) that won't typically be included in the minimal Snappy Ubuntu Core build. [*Note that the Comfy images will be available very soon]

    Cheers,
    :-Dustin

    Read more