Canonical Voices


Stéphane Graber

This is post 7 out of 10 in the LXC 1.0 blog post series.

Introduction to unprivileged containers

The support of unprivileged containers is in my opinion one of the most important new features of LXC 1.0.

You may remember from previous posts that I said LXC should be considered unsafe: while it runs in a separate namespace, uid 0 in your container is still equal to uid 0 outside of the container. That means that if you somehow get access to any host resource through proc, sys or some random syscall, you can potentially escape the container, and then you'll be root on the host.

That's the problem user namespaces were designed and implemented to solve. It was a multi-year effort to think them through and slowly push the hundreds of patches required into the upstream kernel, but finally with 3.12 we got to a point where we can start a full system container entirely as a regular user.

So how do those user namespaces work? Simply put, each user that's allowed to use them on the system gets assigned a range of otherwise unused uids and gids, ideally a whole 65536 of them. Two standard tools, newuidmap and newgidmap, then let you map any of those uids and gids to virtual uids and gids in a user namespace.

That means you can create a container with the following configuration:

lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

The above means that I have one uid map and one gid map defined for my container, mapping uids and gids 0 through 65535 in the container to uids and gids 100000 through 165535 on the host.

For this to be allowed, I need to have those ranges assigned to my user at the system level with:

stgraber@castiana:~$ grep stgraber /etc/sub* 2>/dev/null
/etc/subgid:stgraber:100000:65536
/etc/subuid:stgraber:100000:65536

LXC has now been updated so that all the tools are aware of those unprivileged containers. The standard paths also have their unprivileged equivalents:

  • /etc/lxc/lxc.conf => ~/.config/lxc/lxc.conf
  • /etc/lxc/default.conf => ~/.config/lxc/default.conf
  • /var/lib/lxc => ~/.local/share/lxc
  • /var/lib/lxcsnaps => ~/.local/share/lxcsnaps
  • /var/cache/lxc => ~/.cache/lxc

Your user, while it can create new user namespaces in which it'll be uid 0 and have some of root's privileges over resources tied to that namespace, will obviously not be granted any extra privilege on the host.

One such privileged action is creating new network devices on the host or changing bridge configuration. To work around that, we wrote a tool called "lxc-user-nic" which is the only setuid binary shipped as part of LXC 1.0 and which performs one simple task: it parses a configuration file and, based on its content, creates network devices for the user and bridges them. To prevent abuse, you can restrict the number of devices a user can request and which bridge they may be added to.

An example is my own /etc/lxc/lxc-usernet file:

stgraber veth lxcbr0 10

This declares that the user "stgraber" is allowed to create up to 10 veth-type devices and have them added to the bridge called lxcbr0.

Between what’s offered by the user namespace in the kernel and that setuid tool, we’ve got all that’s needed to run most distributions unprivileged.

Prerequisites

All the examples and instructions below assume that you are running a fully up-to-date Ubuntu 14.04 (codename trusty). That's a pre-release of Ubuntu, so you may want to run it in a VM or on a spare machine rather than upgrading your production computer.

The reason for wanting something that recent is that the rough requirements for well-working unprivileged containers are:

  • Kernel: 3.13 + a couple of staging patches (which Ubuntu has in its kernel)
  • User namespaces enabled in the kernel
  • A very recent version of shadow that supports subuid/subgid
  • Per-user cgroups on all controllers (which I turned on a couple of weeks ago)
  • LXC 1.0 beta2 or higher (released two days ago)
  • A version of PAM with a loginuid patch that’s yet to be in any released version

Those requirements happen to all be true of the current development release of Ubuntu as of two days ago.

LXC pre-built containers

User namespaces come with quite a few obvious limitations. For example, in a user namespace you won't be allowed to use mknod to create a block or character device, as being allowed to do so would let you access anything on the host. The same goes for some filesystems: you won't, for example, be allowed to do loop mounts or mount an ext partition, even if you can access the block device.

Those limitations, while not necessarily world-ending in day-to-day use, are a big problem during the initial bootstrap of a container, as tools like debootstrap, yum, … usually try to do some of those restricted actions and will fail pretty badly.

Some templates may be tweaked to work, and workarounds such as a modified fakeroot could be used to bypass some of those limitations, but the goal of the LXC project isn't to require all of our users to be distro engineers, so we came up with a much simpler solution.

I wrote a new template called "download" which, instead of assembling the rootfs and configuration locally, contacts a server holding daily pre-built rootfs and configuration tarballs for the most common templates.

Those images are built by our Jenkins server using a few machines I have on my home network (a set of powerful x86 builders and a quad-core ARM board). The actual build process is pretty straightforward: a basic chroot is assembled, then the current git master is downloaded and built, and the standard templates are run with the right release and architecture. The resulting rootfs is compressed, a basic config and metadata (expiry, files to template, …) are saved, then the result is pulled by our main server, signed with a dedicated GPG key and published on the public web server.

The client side is a simple template which contacts the server over https (the domain is also DNSSEC-enabled and available over IPv6), grabs signed indexes of all the available images, checks whether the requested combination of distribution, release and architecture is supported and, if it is, grabs the rootfs and metadata tarballs, validates their signatures and stores them in a local cache. Any container creation after that point is done using that cache, until the cache entries expire, at which point a new copy is grabbed from the server.

The current list of images is (as can be requested by passing --list):

---
DIST      RELEASE   ARCH    VARIANT    BUILD
---
debian    wheezy    amd64   default    20140116_22:43
debian    wheezy    armel   default    20140116_22:43
debian    wheezy    armhf   default    20140116_22:43
debian    wheezy    i386    default    20140116_22:43
debian    jessie    amd64   default    20140116_22:43
debian    jessie    armel   default    20140116_22:43
debian    jessie    armhf   default    20140116_22:43
debian    jessie    i386    default    20140116_22:43
debian    sid       amd64   default    20140116_22:43
debian    sid       armel   default    20140116_22:43
debian    sid       armhf   default    20140116_22:43
debian    sid       i386    default    20140116_22:43
oracle    6.5       amd64   default    20140117_11:41
oracle    6.5       i386    default    20140117_11:41
plamo     5.x       amd64   default    20140116_21:37
plamo     5.x       i386    default    20140116_21:37
ubuntu    lucid     amd64   default    20140117_03:50
ubuntu    lucid     i386    default    20140117_03:50
ubuntu    precise   amd64   default    20140117_03:50
ubuntu    precise   armel   default    20140117_03:50
ubuntu    precise   armhf   default    20140117_03:50
ubuntu    precise   i386    default    20140117_03:50
ubuntu    quantal   amd64   default    20140117_03:50
ubuntu    quantal   armel   default    20140117_03:50
ubuntu    quantal   armhf   default    20140117_03:50
ubuntu    quantal   i386    default    20140117_03:50
ubuntu    raring    amd64   default    20140117_03:50
ubuntu    raring    armhf   default    20140117_03:50
ubuntu    raring    i386    default    20140117_03:50
ubuntu    saucy     amd64   default    20140117_03:50
ubuntu    saucy     armhf   default    20140117_03:50
ubuntu    saucy     i386    default    20140117_03:50
ubuntu    trusty    amd64   default    20140117_03:50
ubuntu    trusty    armhf   default    20140117_03:50
ubuntu    trusty    i386    default    20140117_03:50

The template has been carefully written to work on any system that has a POSIX-compliant shell with wget. gpg is recommended but can be disabled if your host doesn't have it (at your own risk).

The same template can be used against your own server, which I hope will be very useful for enterprise deployments: build templates in a central location and have them pulled by all the hosts automatically, using our expiry mechanism to keep them fresh.
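For example, pointing the template at an internal mirror could look like the sketch below; the --server option name and the images.example.com host are assumptions here, so check "lxc-create -t download -n x -- --help" for the exact flags in your version:

lxc-create -t download -n c1 -- --server images.example.com -d ubuntu -r trusty -a amd64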

While the template was designed to work around the limitations of unprivileged containers, it works just as well for regular privileged containers, so even on a system that doesn't support unprivileged containers you can do:

lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64

And you’ll get a new container running the latest build of Ubuntu 14.04 amd64.

Using unprivileged LXC

Right, so let's get you started. As I already mentioned, all the instructions below have only been tested on a very recent Ubuntu 14.04 (trusty) installation.
You may want to grab a daily build and run it in a VM.

Install the required packages:

  • sudo apt-get update
  • sudo apt-get dist-upgrade
  • sudo apt-get install lxc systemd-services uidmap

Then, assign yourself a set of uids and gids with:

  • sudo usermod --add-subuids 100000-165535 $USER
  • sudo usermod --add-subgids 100000-165535 $USER
  • sudo chmod +x $HOME

That last one is required because LXC needs to access ~/.local/share/lxc/ after it has switched to the mapped uids. If you're using ACLs, you may instead use "u:100000:x" as a more specific ACL.
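For the ACL route, a minimal sketch (assuming the acl package is installed and the 100000 mapping from above) would be:

setfacl -m u:100000:x $HOME

This grants only the execute (search) bit on your home directory to the container's root uid, rather than opening it to everyone.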

Now create ~/.config/lxc/default.conf with the following content:

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

And /etc/lxc/lxc-usernet with:

<your username> veth lxcbr0 10

And that’s all you need. Now let’s create our first unprivileged container with:

lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64

You should see the following output from the download template:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=trusty, arch=amd64).
The default username/password is: ubuntu / ubuntu
To gain root privileges, please use sudo.

So it looks like your first container was created successfully. Now let's see if it starts:

ubuntu@trusty-daily:~$ lxc-start -n p1 -d
ubuntu@trusty-daily:~$ lxc-ls --fancy
NAME  STATE    IPV4     IPV6     AUTOSTART  
------------------------------------------
p1    RUNNING  UNKNOWN  UNKNOWN  NO

It’s running! At this point, you can get a console using lxc-console or can SSH to it by looking for its IP in the ARP table (arp -n).

One thing you probably noticed above is that the container's IP addresses aren't listed. That's because, unfortunately, LXC currently can't attach to an unprivileged container's namespaces. It also means that some fields of lxc-info will be empty and that you can't use lxc-attach. However, we're looking into ways to get that sorted in the near future.

There are also a few problems with job control in the kernel and with PAM, so doing a non-detached lxc-start will probably result in a rather weird console where things like sudo will most likely fail. SSH may also fail on some distros. A patch has been sent upstream for this, but I just noticed that it doesn’t actually cover all cases and even if it did, it’s not in any released version yet.

Quite a few more improvements to unprivileged containers are to come before the final 1.0 release next month, and while we certainly don't expect all workloads to be possible with unprivileged containers, it's still a huge improvement on what we had before and a very good building block for a lot more interesting use cases.

Daniel Holbach

We are starting a blog series where we interview our Ubuntu App Heroes. We want to learn more about how developers found the experience writing apps for Ubuntu, what their plans are, what they do and who they are.

Kicking off the series, we had a quick chat with the two guys working on the beautiful Weather app, Martin Borho and Raúl Yeguas.


Can you introduce yourselves?

Raúl: My name is Raúl Yeguas, I’m a frontend developer and I live in Seville in Spain. I studied IT at the University of Jaén where I organised some free software events. I’m a great Qt fan and a proud KDE user.

Martin: My name is Martin Borho, I’m 37 years old and I live in Hamburg, Germany. I work as a freelance programmer, mainly coding Python.

When and how did you get involved in the Ubuntu Core Apps project?

Martin: When Ubuntu Touch was announced, there was a little form on the webpage asking for people interested in contributing. As I was searching for a project I could join at that time, I filled it out…

Raúl: I noticed Canonical's call for developers on QtPlanet. When I signed up after Canonical's first announcement, I thought it was about helping developers write their own apps for the platform; but when I received the emails asking me which core app I wanted to work on, I was so surprised and excited. I've been part of the Core Apps developers from the beginning.

Have you developed apps before?

Martin: Yes, I started with a mobile app named "Ask Ziggy" on my Nokia N900 in 2010. In 2011 I built an app for Google News called "NewsG" for WebOS, which I later ported to Qt/QML to get it on my Nokia N9/N950.

Raúl: Yes, mainly C++/Qt apps and HTML/JS webapps.

What was your experience learning everything involved to work on the Weather app?

Martin: Hmm, initially I had no idea what to expect. After all I have learned quite a few things (and still do). Contributing to a large scale project with people from all over the world is one, how various parts have to fit together is another one. It is fascinating to see how Ubuntu Touch has evolved over the last months.

Raúl: I have to say that this team is awesome. I learned a lot from them, mainly about working in a team with people who are far away and about designing new ways to interact with an app.

Weather App Designs

Is there anything you are proud of or feel is solved very well in the Weather app?

Raúl: Yes, the gestures to change between the daily forecast and the hourly forecast. I think it's really easy to use and intuitive.

Martin: Hard to say, perhaps: it's quite easy to add more weather data providers to the app without having to deal much with the UI part. And having a distinction between fast and slow scrolling, to flip between days and hours respectively, is quite nice.

What can new app developers learn from your app?

Martin: Can’t say… as I’m doing Qt/QML only in my spare time I don’t think it’s very sophisticated in that regard.

Raúl: I think our app has well organised and clearly separated graphical components, so it could be a good example for learning how to create complex QML components out of simple parts. It also has a very good API for calling weather info providers.

What can users of the app expect in the coming months?

Martin: The integration of Weather Channel as a second weather data provider is nearly finished and will be ready to get merged into trunk very soon. Apart from that, Raúl is currently working on new animated icons, which will be very nice when ready.

Raúl: Yes, expect some new animations for eye-candy and a new weather information provider.

Do you have any other hobbies apart from working on Ubuntu?

Martin: I like biking. And as the stadium of my favourite club is only a 5 minute walk away, I like watching football too … ;-)

Raúl: Yes, the same ones non-IT people have. ;) I like watching movies, playing videogames and travelling. When I have enough time I produce electronic music. But I have to confess that sometimes I contribute to other open source projects \o/

Stéphane Graber

This is post 6 out of 10 in the LXC 1.0 blog post series.

When talking about container security, most people consider containers either inherently insecure or inherently secure. The reality isn't so black and white, and LXC supports a variety of technologies to mitigate most security concerns.

One thing to clarify right from the start is that you won't hear any of the LXC maintainers tell you that LXC is secure so long as you use privileged containers. However, at least in Ubuntu, our default containers ship with what we think is a pretty good configuration of both cgroup access and an extensive apparmor profile which prevents all the attacks we are aware of.

Below I'll be covering the various technologies LXC supports to let you restrict what a container may do. Just keep in mind that unless you are using unprivileged containers, you shouldn't give root access in a container to anyone you wouldn't trust with root access on your host.

Capabilities

The first security feature which was added to LXC was Linux capabilities support. With that feature you can set a list of capabilities that you want LXC to drop before starting the container or a full list of capabilities to retain (all others will be dropped).

The two relevant configurations options are:

  • lxc.cap.drop
  • lxc.cap.keep

Both are lists of capability names as listed in capabilities(7).

This may sound like a great way to make containers safe, and for very specific cases it may be. However, if you're running a system container, you'll soon notice that dropping sys_admin and net_admin isn't very practical, and short of dropping those, you won't make your container much safer (as root in the container will be able to re-grant itself any dropped capability).

In Ubuntu we use lxc.cap.drop to drop sys_module, mac_admin, mac_override and sys_time, which prevents some known problems at container boot time.
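In configuration-file form, that boils down to a single line like the sketch below (check /etc/lxc/default.conf on your system for the exact list your release ships):

lxc.cap.drop = sys_module mac_admin mac_override sys_time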

Control groups

Control groups are interesting because they achieve multiple things which, while interconnected, are still pretty different:

  • Resource bean counting
  • Resource quotas
  • Access restrictions

The first two aren’t really security related, though resource quotas will let you avoid some obvious DoS of the host (by setting memory, cpu and I/O limits).

The last is mostly about the devices cgroup which lets you define which character and block devices a container may access and what it can do with them (you can restrict creation, read access and write access for each major/minor combination).

In LXC, configuring cgroups is done with the “lxc.cgroup.*” options which can roughly be defined as: lxc.cgroup.<controller>.<key> = <value>

For example to set a memory limit on p1 you’d add the following to its configuration:

lxc.cgroup.memory.limit_in_bytes = 134217728

This sets a memory limit of 128MB (the value is in bytes) and is equivalent to writing that same value to /sys/fs/cgroup/memory/lxc/p1/memory.limit_in_bytes
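As a sketch of the same change made at runtime, the lxc-cgroup tool can read or set a value on a running container:

# read the current limit
sudo lxc-cgroup -n p1 memory.limit_in_bytes
# set it to 128MB
sudo lxc-cgroup -n p1 memory.limit_in_bytes 134217728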

Most LXC templates only set a few devices controller entries by default:

# Default cgroup limits
lxc.cgroup.devices.deny = a
## Allow any mknod (but not using the node)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
## /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
## consoles
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
## /dev/{,u}random
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
## /dev/pts/*
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 136:* rwm
## rtc
lxc.cgroup.devices.allow = c 254:0 rm
## fuse
lxc.cgroup.devices.allow = c 10:229 rwm
## tun
lxc.cgroup.devices.allow = c 10:200 rwm
## full
lxc.cgroup.devices.allow = c 1:7 rwm
## hpet
lxc.cgroup.devices.allow = c 10:228 rwm
## kvm
lxc.cgroup.devices.allow = c 10:232 rwm

This configuration allows the container (usually udev) to create any device it wishes (that's the wildcard "m" above) but blocks everything else (the "a" deny entry) unless it's listed in one of the allow entries below. This covers everything a container will typically need to function.

You will find reasonably up to date documentation about the available controllers, control files and supported values at:
https://www.kernel.org/doc/Documentation/cgroups/

Apparmor

A little while back we added Apparmor profile support to LXC.
The Apparmor support is rather simple: there's one configuration option, "lxc.aa_profile", which sets the apparmor profile to use for the container.

LXC will then set up the container and ask apparmor to switch it to that profile right before starting the container. Ubuntu's LXC profile is rather complex, as it aims to prevent any of the known ways of escaping a container or causing harm to the host.

As things are today, Ubuntu ships with 3 apparmor profiles meaning that the supported values for lxc.aa_profile are:

  • lxc-container-default (default value if lxc.aa_profile isn’t set)
  • lxc-container-default-with-nesting (same as default but allows some needed bits for nested containers)
  • lxc-container-default-with-mounting (same as default but allows mounting ext*, xfs and btrfs file systems).
  • unconfined (a special value which will disable apparmor support for the container)

You can also define your own profile by copying one of those in /etc/apparmor.d/lxc/, adding the bits you want, giving it a unique name, then reloading apparmor with "sudo /etc/init.d/apparmor reload" and finally setting lxc.aa_profile to the new profile's name.
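A rough sketch of those steps; the lxc-default file name and the lxc-container-custom profile name are assumptions, so use whatever names you find on your system:

sudo cp /etc/apparmor.d/lxc/lxc-default /etc/apparmor.d/lxc/lxc-custom
# edit lxc-custom: rename the profile to lxc-container-custom and add your rules
sudo /etc/init.d/apparmor reload

Then set lxc.aa_profile = lxc-container-custom in the container's configuration.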

SELinux

The SELinux support is very similar to Apparmor’s. An SELinux context can be set using “lxc.se_context”.

An example would be:

lxc.se_context = unconfined_u:unconfined_r:lxc_t:s0-s0:c0.c1023

Similarly to Apparmor, LXC will switch to the new SELinux context right before starting init in the container. As far as I know, no distributions are setting a default SELinux context at this time, however most distributions build LXC with SELinux support (including Ubuntu, should someone choose to boot their host with SELinux rather than Apparmor).

Seccomp

Seccomp is a fairly recent kernel mechanism which allows for filtering of system calls.
As a user you can write a seccomp policy file and set it using “lxc.seccomp” in the container’s configuration. As always, this policy will only be applied to the running container and will allow or reject syscalls with a pre-defined return value.

An example (though limited and useless) of a seccomp policy file would be:

1
whitelist
103

Which would only allow syscall #103 (syslog) in the container and reject everything else.

Note that seccomp is a rather low-level feature and only useful for some very specific use cases. All syscalls have to be referred to by their ID instead of their name, and the IDs may change between architectures. Also, as things stand today, if your host is 64bit and you load a seccomp policy file, all 32bit syscalls will be rejected. We'd need per-personality seccomp profiles to solve that, but it's not been a high priority so far.

User namespace

And last but not least, what's probably the only way of making a container actually safe: LXC now has support for user namespaces. I'll go into more detail on how to use that feature in a later blog post, but simply put, LXC is no longer running as root, so even if an attacker manages to escape the container, he'd find himself with the privileges of a regular user on the host.

All this is achieved by assigning ranges of uids and gids to existing users. Those users on the host will then be allowed to clone a new user namespace in which all uids/gids are mapped to uids/gids that are part of the user’s range.

This obviously means that you need to allocate a rather silly amount of uids and gids to each user who’ll be using LXC in that way. In a perfect world, you’d allocate 65536 uids and gids per container and per user. As this would likely exhaust the whole uid/gid range rather quickly on some systems, I tend to go with “just” 65536 uids and gids per user that’ll use LXC and then have the same range shared by all containers.

Anyway, that’s enough details about user namespaces for now. I’ll cover how to actually set that up and use those unprivileged containers in the next post.

Read more
Stéphane Graber

This is post 5 out of 10 in the LXC 1.0 blog post series.

Storage backingstores

LXC supports a variety of storage backends (also referred to as backingstores).
It defaults to "none", which simply stores the rootfs under
/var/lib/lxc/<container>/rootfs, but you can specify something else to lxc-create or lxc-clone with the -B option.

Currently supported values are:

directory based storage ("none" and "dir")

This is the default backingstore, the container rootfs is stored under
/var/lib/lxc/<container>/rootfs

The --dir option (when using “dir”) can be used to override the path.

btrfs

With this backingstore LXC will set up a new subvolume for the container, which makes snapshotting much easier.

lvm

This one will use a new logical volume for the container.
The LV can be set with --lvname (the default is the container name).
The VG can be set with --vgname (the default is “lxc”).
The filesystem can be set with --fstype (the default is "ext4").
The size can be set with --fssize (the default is "1G").
You can also use LVM thinpools with --thinpool.

overlayfs

This one is mostly used when cloning containers to create a container based on another one and storing any changes in an overlay.

When used with lxc-create, it'll create a container where any change done after its initial creation will be stored in a "delta0" directory next to the container's rootfs.

zfs

Very similar to btrfs. As I've not used either of those two myself, I can't say much about them, besides that zfs should also create some kind of subvolume for the container and make snapshots and clones faster and more space-efficient.

Standard paths

One quick word about the way LXC usually works and where it stores its files:

  • /var/lib/lxc (default location for containers)
  • /var/lib/lxcsnaps (default location for snapshots)
  • /var/cache/lxc (default location for the template cache)
  • $HOME/.local/share/lxc (default location for unprivileged containers)
  • $HOME/.local/share/lxcsnaps (default location for unprivileged snapshots)
  • $HOME/.cache/lxc (default location for unprivileged template cache)

The default path, also called lxcpath, can be overridden on the command line with the -P option, or once and for all by setting "lxcpath = /new/path" in /etc/lxc/lxc.conf (or $HOME/.config/lxc/lxc.conf for unprivileged containers).
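As a sketch, assuming a hypothetical alternate path of /srv/lxc:

# one-off, for a single command
sudo lxc-ls --fancy -P /srv/lxc

# or permanently, for all commands
echo "lxcpath = /srv/lxc" | sudo tee -a /etc/lxc/lxc.conf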

The snapshot directory is always lxcpath with "snaps" appended, so it'll magically follow lxcpath. The template cache is unfortunately hardcoded and can't easily be moved, short of relying on bind-mounts or symlinks.
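A bind-mount sketch for relocating that cache, assuming a hypothetical /srv/lxc-cache directory with enough space:

sudo mount --bind /srv/lxc-cache /var/cache/lxc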

The default configuration used for all containers at creation time is taken from
/etc/lxc/default.conf (no unprivileged equivalent yet).
The templates themselves are stored in /usr/share/lxc/templates.

Cloning containers

All those backingstores only really shine once you start cloning containers.
For example, let's take our good old "p1" Ubuntu container. Let's say you want to make a usable copy of it called "p4"; you can simply do:

sudo lxc-clone -o p1 -n p4

And there you go, you've got a working "p4" container that'll be a simple copy of "p1" but with a new mac address and its hostname properly set.

Now let's say you want to run a quick test against "p1" but don't want to alter that container itself, nor wait the time needed for a full copy. You can simply do:

sudo lxc-clone -o p1 -n p1-test -B overlayfs -s

And there you go, you've got a new "p1-test" container which is entirely based on the "p1" rootfs and where any change will be stored in the "delta0" directory of "p1-test".
The same "-s" option also works with lvm and btrfs (and possibly zfs) containers and tells lxc-clone to use a snapshot rather than copy the whole rootfs across.

Snapshotting

So cloning is nice and convenient, great for things like development environments where you want throw-away containers. But in production, snapshots tend to be a whole lot more useful for things like backups or just before you make possibly risky changes.

In LXC we have a “lxc-snapshot” tool which will let you create, list, restore and destroy snapshots of your containers.
Before I show you how it works, please note that "lxc-snapshot" currently doesn't appear to work with directory-based containers; with those it produces an empty snapshot. This should be fixed by the time LXC 1.0 is actually released.

So, let's say we want to back up our "p1-lvm" container before installing "apache2" into it. Simply run:

echo "before installing apache2" > snap-comment
sudo lxc-snapshot -n p1-lvm -c snap-comment

At which point, you can confirm the snapshot was created with:

sudo lxc-snapshot -n p1-lvm -L -C

Now you can go ahead and install "apache2" in the container.

If you want to revert the container at a later point, simply use:

sudo lxc-snapshot -n p1-lvm -r snap0

Or if you want to restore a snapshot as its own container, you can use:

sudo lxc-snapshot -n p1-lvm -r snap0 p1-lvm-snap0

And you'll get a new "p1-lvm-snap0" container which will contain a working copy of "p1-lvm" as it was at "snap0".

David Planella

Christmas has come early in Ubuntu this time around, with a finely wrapped present: dual-booting with Android.

We are thrilled to announce a preview of a new feature for developers: Ubuntu on mobile devices can now run alongside Android on a single handset.

For developers only

Dual boot is not a feature suitable for regular users. It is recommended only for developers who are comfortable with flashing devices and with their device's partition layout. Dual boot rewrites the Android recovery partition, so those installing it should be intimately familiar with re-flashing it in case something goes wrong.

Multiple Android flavours are supported (AOSP or stock, CyanogenMod) and installation of Ubuntu can be done for all versions available in the phablet-flash channels.

Easy OS switch via apps

With dual boot, switching between OSs has never been easier. No more key combinations or command line interfaces to jump into the other OS: on each side, an app with a simple user interface lets you boot back and forth at the tap of a button.

Ubuntu dual boot on Android

The Android app manages the initial installation of Ubuntu, upgrades and rebooting into Ubuntu.

Ubuntu dual boot

On Ubuntu, the dual boot app provides an easy way to reboot into Android.

Installing dual boot

Installing and running dual boot can be done in a few easy steps. In a nutshell, it requires performing a one-off installation of the dual boot app in Android, which will enable you to both install the version of Ubuntu of your choice, and to reboot into Ubuntu.

Install dual boot on your device

Stéphane Graber

This is post 4 out of 10 in the LXC 1.0 blog post series.

Running foreign architectures

By default LXC will only let you run containers of one of the architectures supported by the host. That makes sense since after all, your CPU doesn’t know what to do with anything else.

Except that we have this convenient package called “qemu-user-static” which contains a whole bunch of emulators for quite a few interesting architectures. The most common and useful of those is qemu-arm-static which will let you run most armv7 binaries directly on x86.

The “ubuntu” template knows how to make use of qemu-user-static, so you can simply check that you have the “qemu-user-static” package installed, then run:

sudo lxc-create -t ubuntu -n p3 -- -a armhf

After a rather long bootstrap, you'll get a new p3 container which will be mostly running Ubuntu armhf. I say mostly because the qemu emulation comes with a few limitations, the biggest of which is that any piece of software using the ptrace() syscall will fail, as will anything using netlink. As a result, LXC installs the host-architecture version of upstart and a few of the networking tools so that the container can boot properly.

stgraber@castiana:~$ file /bin/ls
/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=e50e0a5dadb8a7f4eaa2fd715cacb9842e157dc7, stripped
stgraber@castiana:~$ sudo lxc-start -n p3 -d
stgraber@castiana:~$ sudo lxc-attach -n p3
root@p3:/# file /bin/ls
/bin/ls: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=88ff013a8fd9389747fb1fea1c898547fb0f650a, stripped
root@p3:/# exit
stgraber@castiana:~$ sudo lxc-stop -n p3
stgraber@castiana:~$

Hooks

As we know people like to script their containers, and our configuration can't always accommodate every single use case, we've introduced a set of hooks which you may use.

Those hooks are simple paths to executable files which LXC will run at specific points in the lifetime of the container. Those executables are also passed a set of useful environment variables so they can easily know which container invoked them and what to do.

The currently available hooks are (details in lxc.conf(5)):

  • lxc.hook.pre-start (called before any initialization is done)
  • lxc.hook.pre-mount (called after creating the mount namespace but before mounting anything)
  • lxc.hook.mount (called after the mounts but before pivot_root)
  • lxc.hook.autodev (identical to mount but only called if using autodev)
  • lxc.hook.start (called in the container right before /sbin/init)
  • lxc.hook.post-stop (run after the container has been shut down)
  • lxc.hook.clone (called when cloning a container into a new one)

Additionally each network section may also define two additional hooks:

  • lxc.network.script.up (called in the network namespace after the interface was created)
  • lxc.network.script.down (called in the network namespace before destroying the interface)

All of those hooks may be specified as many times as you want in the configuration so you can use each hooking point multiple times.

As a simple example, let's add the following to our "p1" container:

lxc.hook.pre-start = /var/lib/lxc/p1/pre-start.sh

And create the hook itself at /var/lib/lxc/p1/pre-start.sh:

#!/bin/sh
echo "arguments: $*" > /tmp/test
echo "environment:" >> /tmp/test
env | grep LXC >> /tmp/test

Make it executable (chmod 755) and then start the container.
Checking /tmp/test you should see:

arguments: p1 lxc pre-start
environment:
LXC_ROOTFS_MOUNT=/usr/lib/x86_64-linux-gnu/lxc
LXC_CONFIG_FILE=/var/lib/lxc/p1/config
LXC_ROOTFS_PATH=/var/lib/lxc/p1/rootfs
LXC_NAME=p1

Android containers

I’ve often been asked whether it was possible to run Android in an LXC container. Well, the short answer is yes. However it’s not very simple and it really depends on what you want to do with it.

The first thing you'll need if you want to do this is to get your machine to run an Android kernel; you'll also need any modules required by Android built and loaded before you can start the container.

Once you have that, you'll need to create a new container by hand.
Let's put it in "/var/lib/lxc/android/"; in there, you need a configuration file similar to this one:

lxc.rootfs = /var/lib/lxc/android/rootfs
lxc.utsname = armhf

lxc.network.type = none

lxc.devttydir = lxc
lxc.tty = 4
lxc.pts = 1024
lxc.arch = armhf
lxc.cap.drop = mac_admin mac_override
lxc.pivotdir = lxc_putold

lxc.hook.pre-start = /var/lib/lxc/android/pre-start.sh

lxc.aa_profile = unconfined

/var/lib/lxc/android/pre-start.sh is where the interesting bits happen. It needs to be an executable shell script, containing something along the lines of:

#!/bin/sh
mkdir -p $LXC_ROOTFS_PATH
mount -n -t tmpfs tmpfs $LXC_ROOTFS_PATH

cd $LXC_ROOTFS_PATH
cat /var/lib/lxc/android/initrd.gz | gzip -d | cpio -i

# Create /dev/pts if missing
mkdir -p $LXC_ROOTFS_PATH/dev/pts

Then get the initrd for your device and place it in /var/lib/lxc/android/initrd.gz.

At that point, when starting the LXC container, the Android initrd will be unpacked on a tmpfs (similar to Android’s ramfs) and Android’s init will be started which in turn should mount any partition that Android requires and then start all of the usual services.

Because no apparmor, cgroup or even network configuration is applied to it, the container will have a lot of rights and will typically completely crash the machine. You unfortunately have to be familiar with the way Android works and not be afraid to modify its init scripts, if not its init process itself, to only start the bits you actually want.

I can't provide a generic recipe here, as it completely depends on what you're interested in, what version of Android and what device you're using. But it's clearly possible to do, and you may want to look at Ubuntu Touch to see how we do it by default there.

One last note: Android's init script isn't /sbin/init, so you need to tell LXC where to find it with:

lxc-start -n android -- /init

LXC on Android devices

So now that we’ve seen how to run Android in LXC, let’s talk about running Ubuntu on Android in LXC.

LXC has been ported to bionic (Android’s C library) and while not feature-equivalent with its glibc build, it’s still good enough to be used.

Unfortunately, due to the kind of low-level access LXC requires and the fact that our primary focus isn't Android, installation could be easier… You won't find LXC on the Google Play Store and we won't provide you with an .apk that you can install.

Instead every time something changes in the upstream git branch, we produce a new tarball which can be downloaded here: http://qa.linuxcontainers.org/master/current/android-armel/lxc-android.tar.gz

This build is known to work with Android >= 4.2 but will quite likely work on older versions too.

For this to work, you’ll need to grab your device’s kernel configuration and run lxc-checkconfig against it to see whether it’s compatible with LXC or not. Unfortunately it’s very likely that it won’t be… In that case, you’ll need to go hunt for the kernel source for your device, add the missing feature flags, rebuild it and update your device to boot your updated kernel.

As scary as this may sound, it's usually not that difficult, as long as your device is unlocked and you're already using an alternate ROM like Cyanogen, whose maintainers usually make their kernel git tree easily available.

Once your device has a working kernel, all you need to do is unpack our tarball as root in your device’s / directory, copy an arm container to /data/lxc/containers/<container name>, get into /data/lxc and run “./run-lxc lxc-start -n <container name>”.
A few seconds later you’ll be greeted by a login prompt.

Stéphane Graber

This is post 3 out of 10 in the LXC 1.0 blog post series.

Exchanging data with a container

Because containers directly share their filesystem with the host, there are a lot of things that can be done to pass data into a container or to get stuff out.

The first obvious one is that you can access the container’s root at:
/var/lib/lxc/<container name>/rootfs/

That’s great, but sometimes you need to access data that’s in the container and on a filesystem which was mounted by the container itself (such as a tmpfs). In those cases, you can use this trick:

sudo ls -lh /proc/$(sudo lxc-info -n p1 -p -H)/root/run/

Which will show you what's in /run of the running container "p1".

Now, that's great for getting access from the host to the container, but what about having the container access and write data to the host?
Well, let's say we want to have our host's /var/cache/lxc shared with "p1". We can edit /var/lib/lxc/p1/fstab and append:

/var/cache/lxc var/cache/lxc none bind,create=dir

This line means: mount "/var/cache/lxc" from the host as "/var/cache/lxc" in the container (the lack of an initial / makes the path relative to the container's root), mount it as a bind-mount ("none" fstype and "bind" option) and create any directory that's missing in the container ("create=dir").

Now restart "p1" and you'll see /var/cache/lxc in there, showing the same thing as you have on the host. Note that if you want the container to only be able to read the data, you can simply add "ro" as a mount flag in the fstab.
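A sketch of that read-only variant, the same entry with just the extra flag:

/var/cache/lxc var/cache/lxc none bind,ro,create=dir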

Container nesting

One pretty cool feature of LXC (though admittedly not very useful to most people) is support for nesting. That is, you can run LXC within LXC with pretty much no overhead.

By default this is blocked in Ubuntu, as allowing it at the moment requires letting the container mount cgroupfs, which would let it escape any cgroup restrictions applied to it. That's not an issue in most environments, but if you don't trust your containers at all, then you shouldn't be using nesting at this point.

So to enable nesting for our "p1" container, edit /var/lib/lxc/p1/config and add:

lxc.aa_profile = lxc-container-default-with-nesting

And then restart "p1". Once that's done, install lxc inside the container. I usually recommend using the same version as the host, though that's not strictly required.

Once LXC is installed in the container, run:

sudo lxc-create -t ubuntu -n p1

As you’ve previously bind-mounted /var/cache/lxc inside the container, this should be very quick (it shouldn’t rebootstrap the whole environment). Then start that new container as usual.

At that point, you may now run lxc-ls on the host in nesting mode to see exactly what’s running on your system:

stgraber@castiana:~$ sudo lxc-ls --fancy --nesting
NAME    STATE    IPV4                 IPV6   AUTOSTART  
------------------------------------------------------
p1      RUNNING  10.0.3.82, 10.0.4.1  -      NO       
 \_ p1  RUNNING  10.0.4.7             -      NO       
p2      RUNNING  10.0.3.128           -      NO

There's no real limit to the number of levels you can nest, though as fun as it may be, it's hard to imagine why 10 levels of nesting would be of much use to anyone :)

Raw network access

In the previous post I mentioned passing raw devices from the host into the container. One such case I use relatively often is working with a remote network over a VPN. That network uses OpenVPN and a raw ethernet tap device.

I needed a completely isolated system to access that VPN, so I wouldn't get mixed routes and it'd appear just like any other machine to the machines on the remote site.

All I had to do to make this work was set my container’s network configuration to:

lxc.network.type = phys
lxc.network.hwaddr = 00:16:3e:c6:0e:04
lxc.network.flags = up
lxc.network.link = tap0
lxc.network.name = eth0

Then all I have to do is start OpenVPN on my host, which will connect and set up tap0, then start the container, which will steal that interface and use it as its own eth0. The container will then use DHCP to grab an IP and will behave just as if it were a physical machine connected directly to the remote network.

Stéphane Graber

This is post 2 out of 10 in the LXC 1.0 blog post series.

More templates

So at this point you should have a working Ubuntu container called "p1", created using the default template, simply enough called "ubuntu".

But LXC supports much more than just standard Ubuntu. In fact, in current upstream git (and daily PPA), we support Alpine Linux, Alt Linux, Arch Linux, busybox, CentOS, Cirros, Debian, Fedora, OpenMandriva, OpenSUSE, Oracle, Plamo, sshd, Ubuntu Cloud and Ubuntu.

All of those can usually be found in /usr/share/lxc/templates. They also all typically have extra advanced options which you can get to by passing “--help” after the “lxc-create” call (the “--” is required to split “lxc-create” options from the template’s).

Writing extra templates isn't too difficult. They basically are executables (all of ours are shell scripts, but that's not a requirement) which take a set of standard arguments and are expected to produce a working rootfs in the path that's passed to them.

One thing to be aware of is that due to missing tools not all distros can be bootstrapped on all distros. It’s usually best to just try. We’re always interested in making those work on more distros even if that means using some rather weird tricks (like is done in the fedora template) so if you have a specific combination which doesn’t work at the moment, patches are definitely welcome!

Anyway, enough talking for now, let’s go ahead and create an Oracle Linux container that we’ll force to be 32bit.

sudo lxc-create -t oracle -n p2 -- -a i386

On most systems, this will initially fail, telling you to install the "rpm" package first, which is needed for bootstrap reasons. So install "rpm" and "yum" and then try again.

After some time downloading RPMs, the container will be created, then it’s just a:

sudo lxc-start -n p2

And you'll be greeted by the Oracle Linux login prompt (root / root).

At that point since you started the container without passing “-d” to “lxc-start”, you’ll have to shut it down to get your shell back (you can’t detach from a container which wasn’t started initially in the background).

Now, if you are wondering why Ubuntu has two templates: the Ubuntu template which I've been using so far does a local bootstrap using "debootstrap", basically building your container from scratch, whereas the Ubuntu Cloud template (ubuntu-cloud) downloads a pre-generated cloud image (identical to what you'd get on EC2 or other cloud services) and starts it. That image also includes cloud-init and supports the standard cloud metadata.
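As a sketch, creating a cloud-image based container of the same release might look like the line below; the -r option is an assumption, so check "lxc-create -t ubuntu-cloud -n x -- --help" for the exact flags:

sudo lxc-create -t ubuntu-cloud -n c1 -- -r trusty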

It’s a matter of personal choice which you like best. I personally have a local mirror so the “ubuntu” template is much faster for me and I also trust it more since I know everything was downloaded from the archive in front of me and assembled locally on my machine.

One last note on templates. Most of them use a local cache, so the initial bootstrap of a container for a given arch will be slow, any subsequent one will just be a local copy from the cache and will be much faster.

Auto-start

So what if you want to start a container automatically at boot time?

Well, that’s been supported for a long time in Ubuntu and other distros by using some init scripts and symlinks in /etc, but very recently (two days ago), this has now been implemented cleanly upstream.

So here’s how auto-started containers work nowadays:

As you may know, each container has a configuration file typically under
/var/lib/lxc/<container name>/config

That file is key = value with the list of valid keys being specified in lxc.conf(5).

The startup related values that are available are:

  • lxc.start.auto = 0 (disabled) or 1 (enabled)
  • lxc.start.delay = 0 (delay in seconds to wait after starting the container)
  • lxc.start.order = 0 (priority of the container, higher value means starts earlier)
  • lxc.group = group1,group2,group3,… (groups the container is a member of)

When your machine starts, an init script will ask "lxc-autostart" to start all containers of a given group (by default, all containers which aren't in any group) in the right order, waiting the specified time between them.

To illustrate that, edit /var/lib/lxc/p1/config and append those lines to the file:

lxc.start.auto = 1
lxc.group = ubuntu

And /var/lib/lxc/p2/config and append those lines:

lxc.start.auto = 1
lxc.start.delay = 5
lxc.start.order = 100

Doing that means that only the p2 container will be started at boot time (since only containers without a group are started by default), the order value won't matter since it's alone, and the init script will wait 5s before moving on.

You may check what containers are automatically started using “lxc-ls”:

stgraber@castiana:~$ sudo lxc-ls --fancy
NAME    STATE    IPV4        IPV6                                    AUTOSTART     
---------------------------------------------------------------------------------
p1      RUNNING  10.0.3.128  2607:f2c0:f00f:2751:216:3eff:feb1:4c7f  YES (ubuntu)
p2      RUNNING  10.0.3.165  2607:f2c0:f00f:2751:216:3eff:fe3a:f1c1  YES

Now you can also manually play with those containers using the "lxc-autostart" command, which lets you start/stop/kill/reboot any container marked with lxc.start.auto=1.

For example, you could do:

sudo lxc-autostart -a

Which will start any container that has lxc.start.auto=1 (ignoring the lxc.group value) which in our case means it’ll first start p2 (because of order = 100), then wait 5s (because of delay = 5) and then start p1 and return immediately afterwards.

If at that point you want to reboot all containers that are in the “ubuntu” group, you may do:

sudo lxc-autostart -r -g ubuntu

You can also pass "-L" with any of those commands, which will simply print which containers would be affected and what the delays would be, but won't actually do anything (useful for integrating with other scripts).

Freezing your containers

Sometimes containers may be running daemons that take time to shut down or restart, yet you don't want to keep the container running because you're not actively using it at the time.

In such cases, "sudo lxc-freeze -n <container name>" can be used. It very simply freezes all the processes in the container so they won't get any time allocated by the scheduler. However, the processes will still exist and will still use whatever memory they used to.

Once you need the service again, just call “sudo lxc-unfreeze -n <container name>” and all the processes will be restarted.

Networking

As you may have noticed in the configuration file while you were setting the auto-start settings, LXC has a relatively flexible network configuration.
By default in Ubuntu we allocate one "veth" device per container, which is bridged into the "lxcbr0" bridge on the host, on which we run a minimal dnsmasq dhcp server.

While that's usually good enough for most people, you may want something slightly more complex, such as multiple network interfaces in the container or passing through physical network interfaces, … The details of all of those options are listed in lxc.conf(5), so I won't repeat them here, but here's a quick example of what can be done.

lxc.network.type = veth
lxc.network.hwaddr = 00:16:3e:3a:f1:c1
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0

lxc.network.type = veth
lxc.network.link = virbr0
lxc.network.name = virt0

lxc.network.type = phys
lxc.network.link = eth2
lxc.network.name = eth1

With this setup my container will have 3 interfaces, eth0 will be the usual veth device in the lxcbr0 bridge, eth1 will be the host’s eth2 moved inside the container (it’ll disappear from the host while the container is running) and virt0 will be another veth device in the virbr0 bridge on the host.

Those last two interfaces don’t have a mac address or network flags set, so they’ll get a random mac address at boot time (non-persistent) and it’ll be up to the container to bring the link up.

Attach

Provided you are running a sufficiently recent kernel, that is 3.8 or higher, you may use the "lxc-attach" tool. Its most basic feature is to give you a standard shell inside a running container:

sudo lxc-attach -n p1

You may also use it from scripts to run actions in the container, such as:

sudo lxc-attach -n p1 -- restart ssh

But it’s a lot more powerful than that. For example, take:

sudo lxc-attach -n p1 -e -s 'NETWORK|UTSNAME'

In that case, you'll get a shell that says "root@p1" (thanks to UTSNAME), and running "ifconfig -a" from there will list the container's network interfaces. But everything else will be that of the host. Also, passing "-e" means that the cgroup, apparmor, … restrictions won't apply to any process started from that shell.

This can be very useful at times to run software located on the host but inside the container's network or pid namespace.

Passing devices to a running container

It’s great being able to enter and leave the container at will, but what about accessing some random devices on your host?

By default LXC will prevent any such access using the devices cgroup as a filtering mechanism. You could edit the container configuration to allow the right additional devices and then restart the container.

But for one-off things, there’s also a very convenient tool called “lxc-device”.
With it, you can simply do:

sudo lxc-device add -n p1 /dev/ttyUSB0 /dev/ttyS0

Which will add (mknod) /dev/ttyS0 in the container with the same type/major/minor as /dev/ttyUSB0 and then add the matching cgroup entry allowing access from the container.

The same tool also allows moving network devices from the host into the container.
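A sketch of that, assuming a spare host interface named eth1 that you're willing to hand over to the container:

sudo lxc-device add -n p1 eth1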

Stéphane Graber

This is post 1 out of 10 in the LXC 1.0 blog post series.

So what’s LXC?

Most of you probably already know the answer to that one, but here it goes:

“LXC is a userspace interface for the Linux kernel containment features.
Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.”

I’m one of the two upstream maintainers of LXC along with Serge Hallyn.
The project is quite actively developed with milestones every month and a stable release coming up in February. It’s so far been developed by 67 contributors from a wide range of backgrounds and companies.

The project is mostly developed on github: http://github.com/lxc
We have a website at: http://linuxcontainers.org
And mailing lists at: http://lists.linuxcontainers.org

LXC 1.0

So what’s that 1.0 release all about?

Well, simply put, it's going to be the first real stable release of LXC and the first we'll be supporting for 5 years with bugfix releases. It's also the one which will be included in Ubuntu 14.04 LTS, to be released in April 2014.

It’s also going to come with a stable API and a set of bindings, quite a few interesting new features which will be detailed in the next few posts and support for a wide range of host and guest distributions (including Android).

How to get it?

I’m assuming most of you will be using Ubuntu. For the next few posts, I’ll myself be using the current upstream daily builds on Ubuntu 14.04 but we maintain daily builds on 12.04, 12.10, 13.04, 13.10 and 14.04, so if you want the latest upstream code, you can use our PPA.

Alternatively, LXC is also directly in Ubuntu and has been quite usable since Ubuntu 12.04 LTS. You can choose to use the version which comes with whatever release you are on, or you can use one of the backported versions we maintain.

If you want to build it yourself, you can do (not recommended when you can simply use the packages for your distribution):

git clone git://github.com/lxc/lxc
cd lxc
sh autogen.sh
# You will probably want to run the configure script with --help and then set the paths
./configure
make
sudo make install

What about that first container?

Oh right, that was actually the goal of this post wasn’t it?

Ok, so now that you have LXC installed, hopefully using the Ubuntu packages, it’s really as simple as:

# Create a "p1" container using the "ubuntu" template and the same version of Ubuntu
# and architecture as the host. Pass "-- --help" to list all available options.
sudo lxc-create -t ubuntu -n p1

# Start the container (in the background)
sudo lxc-start -n p1 -d

# Enter the container in one of those ways
## Attach to the container's console (ctrl-a + q to detach)
sudo lxc-console -n p1

## Spawn bash directly in the container (bypassing the console login), requires a >= 3.8 kernel
sudo lxc-attach -n p1

## SSH into it
sudo lxc-info -n p1
ssh ubuntu@<ip from lxc-info>

# Stop the container in one of those ways
## Stop it from within
sudo poweroff

## Stop it cleanly from the outside
sudo lxc-stop -n p1

## Kill it from the outside
sudo lxc-stop -n p1 -k

And there you go, that’s your first container. You’ll note that everything usually just works on Ubuntu. Our kernels have support for all the features that LXC may use and our packages setup a bridge and a DHCP server that the containers will use by default.
All of that is obviously configurable and will be covered in the coming posts.

Stéphane Graber

So it’s almost the end of the year, I’ve got about 10 days of vacation for the holidays and a bit of time on my hands.

Since I've been doing quite a bit of work on LXC lately in preparation for the LXC 1.0 release early next year, I thought it'd be a good use of some of that extra time to blog about the current state of LXC.

As a result, I’m preparing a series of 10 blog posts covering what I think are some of the most exciting features of LXC. The planned structure is:

While they are all titled LXC 1.0, most of the things I’ll be showing will work just as well on older LXC. However, some of the features will need a very, very recent version of LXC (as in, current upstream git). I’ll try to make that clear and will explain how to use our stable backports in Ubuntu or current upstream snapshots from our PPA.

I’ll be updating this first blog post with links to all of the posts in the series. So if you want to bookmark or refer to these, please use this post.

Read more
Michael Hall

Convergent File Manager

Convergence is going to be a major theme for Ubuntu 14.04, not just at the OS and Unity 8 levels, but also for the apps that run on it. The Core Apps, those apps that were developed by the community and included by default in the last release, are no exception to this. We want to make sure they all converge neatly and usefully on both a tablet and on the desktop. So once again we are asking for community design input, this time to take the existing application interfaces and extend them to new form factors.

How to submit your designs

We have detailed the kind of features we want to see for each of the Core Apps on a Convergence wiki page. If you have a convergence design idea you would like to submit, send it as a file attachment or link to it online in an email to design@canonical.com along with any additional notes, descriptions, or user stories.  The design team will be reviewing the submitted designs live on their bi-weekly Design Clinics (Dec 4th and Dec 18th) at 1400 UTC.  But before you submit your ideas, keep reading to see what they should include.

Extend what’s there

We don’t want to add too many features this cycle; there’s going to be enough work to do just building the convergence into the app.  Use the existing features and designs as your starting point, and re-imagine those same features and designs on a tablet or desktop.  Design new features or modify existing ones when it makes the experience better on a different form factor, but remember that we want the user to experience it as the same application across the board, so try and keep the differences to a minimum.

Form follows function

There’s more to a good design than just a good looking UI, especially when designing convergence.  Make sure that you take the user’s activity into account, plan out how they will access the different features of the app, and make sure it’s both intuitive and simple.  The more detail you put into this, the more likely you are to discover possible problems with your designs, or come up with better solutions than you had originally intended.

Think outside the screen

There is more to convergence than just a different screen size, and your designs should take that into consideration.  While it’s important to make good use of the added space in the UI, think about how the user is going to interact with it.  You hold a tablet differently than you do a phone, so make sure your designs work well there.

On the desktop you have even more to think about: when the user has a keyboard and mouse, but likely not a touch screen, you want to make sure the interface isn’t cumbersome.  Think about how scrolling will be different too: while it’s easy to swipe both vertically and horizontally on a phone or tablet, you usually only have a vertical scroll wheel on a desktop mouse.  But you also have more precise control over a mouse pointer than you do with a finger-tip, so your interface should take advantage of that too.

Resources available to you

Now that you know what’s needed, here are some resources to help you.  Once again we have our community Balsamiq account available to anybody who wants to use it to create mockups (email me if you need an account).  I have created a new project for Core Apps Convergence that you can use to add your designs.  You can then submit links to your designs to the Design Team’s email above.  The Design Team has also provided a detailed Design Guide for Ubuntu SDK apps, including a section on Responsive Layouts that gives some suggested patterns for different form factors.  You can also choose to use any tools you are comfortable with, as long as the Design Team and community developers can view your designs.

Read more
Michael Hall

At the same time that Ubuntu 13.10 was released, we also went live with a new API documentation website here on the Ubuntu Developer Portal. This website will slowly replace our previous static docs, which came in a variety of formats, with a single structured place for all of our developer APIs. This new site, backed by Python and Django, will let us make our API documentation more easily discoverable, more comprehensive, and more interactive over time.


We launched the site with only the documentation for the Ubuntu UI Toolkit, as well as the upstream QtQuick components. But in the past week we’ve added to that the API documentation for the new Content Hub, which allows confined apps to request access to files (pictures, music, etc) stored outside of their sandbox, as well as a full new section of HTML5 API docs covering the visual components developed to match the look and feel of their Qt/QML counterparts.

Read more
David Planella

Today a major milestone in the history of Ubuntu and the mobile industry has been reached: we’re extremely proud to celebrate the release of Ubuntu 13.10, the free, open source operating system for smartphones, desktop and server.

A release for mobile developers


As of today, Ubuntu is available on the desktop, on servers and on smartphones. Ubuntu’s first ever mobile edition provides an operating system with all the applications phone users need day to day, in addition to a thriving app ecosystem and a platform application authors can target.

This is the first leap on the road to convergence and having an OS to rule all devices and form factors.

Native or web: your choice

The Ubuntu SDK enables developers to easily create applications that make use of the full capabilities of the platform and integrate naturally with the OS. It contains Qt Creator, a full-fledged IDE with code-editing, debugging and device deployment features; the UI toolkit, with a set of widgets and components to be used as building blocks for Ubuntu apps; and detailed developer documentation, including API docs and tutorials.

As part of the app developer story both native and web are first-class citizens. For the native approach, QML combined with JavaScript is the easiest way to write Ubuntu apps, while C++ is also fully supported. The SDK is powered by the widely used Qt framework.

For those writing or porting HTML5 applications, the SDK features various levels of support to cover all web developer needs:

  • HTML5 apps – use web technologies to write apps
  • HTML5 Cordova apps – use web technologies to access native device functions such as camera and sensors
  • Webapps – integrate a website with Ubuntu and launch it as an app

The SDK also uses the full capabilities of OpenGL ES graphics acceleration, providing high-quality 3D rendering for the most demanding games.

Start writing an Ubuntu app ›

From concept to millions of users

With the Ubuntu Software Store Beta, the final big piece of infrastructure that completes the development workflow is now in place. Ubuntu now assists developers throughout the whole app lifecycle: from idea to implementation to publishing and to updates.

Publish your app in Ubuntu ›

Community-driven core apps


As a testament to the stunning results that can be achieved by combining a vibrant community of developers, a team of designers and the Ubuntu SDK, we’re also thrilled to announce the availability of the 12 core apps for the phone. Core applications have been designed from the ground up to provide the basic functionality a user needs every day, and more. They include:

  • Daily apps: Music, Clock, Weather, Calendar, RSS reader, Calculator
  • Games: Sudoku, Dropping Letters
  • Developer tools: Terminal, File Manager

These apps complement the pre-installed software on the phone, including Dialer, Messaging, Browser, Camera, Gallery, Notes, Contacts and a set of webapps such as Twitter and Facebook.

Core apps have been entirely created by teams of community contributors and Canonical designers. Volunteer contributions have ranged from development, design and QA to bug reporting and support.

We’d like to thank all developers and any contributors who have in any way made the core apps happen. The work you’ve done in the last few months and the commitment you’ve shown to the project is just unbelievable, you rock!

Learn more about Ubuntu core apps ›

Industry-ready: differentiation without fragmentation


Ubuntu is built for the phone industry. Equally suited for entry-level or high-end smartphones, it provides a powerful, yet lightweight platform with a clear and consistent user experience that can be easily customized for different operators.

At the core of Ubuntu’s design vision, scopes provide dedicated views to find, organize and show a variety of content types. Be it your contacts, your messages, pictures or online videos, dedicated scopes work for you transparently to bring you the best results when you do a search on your device.

Operators can customize the default experience by:

  • Prioritising which results are displayed first
  • Using the Apps scope to return results from multiple stores
  • Customising the home screen for their service, including integrated online payment support
  • Highlighting their own content on the default scopes

Info for operators and OEMs ›
Learn more about scopes ›

Developer.ubuntu.com 2.0


Coinciding with the release of the OS, a fully redesigned developer site has been unveiled. The Ubuntu developer site now provides a hub for all the resources and information needed to develop and publish different types of content for the Ubuntu platform, including:

  • Apps – how to create applications for Ubuntu
  • Scopes – how to create scopes to customize the content shown to users
  • Cloud – how to create charms for Juju cloud deployments
  • Web – how to create webapps to integrate websites into Ubuntu

Each development area has been expanded to add technology overviews, tutorials, development recipes and extensive API documentation to make the development experience easier – and fun!

Go to the Ubuntu developer site ›

Today it’s time to celebrate our first mobile release, enjoy the amazing work that has been done in the past six months and start looking at the next steps to bring Ubuntu to the masses. And speaking of celebration, what better way than actually creating an app for Ubuntu?

Install Ubuntu on your phone

Read more
Michael Hall

App Showdown Winners

The judging is finished and the scores are in: we now have the winners of this year’s Ubuntu App Showdown!  Over the course of six weeks, and using a beta release of the new Ubuntu SDK, our community of app developers were able to put together a number of stunningly beautiful, useful, and often highly entertaining apps.

We had everything from games to productivity tools submitted to the competition, written in QML, C++ and HTML5. Some were ports of apps that already existed on other platforms, but the vast majority were original apps created specifically for the Ubuntu platform. Best of all, these apps are all available to download and install from the new Click store on Ubuntu phones and tablets, so if you have a Nexus device or one with a community image of Ubuntu, you can try these and many more for yourself.  Now, on to the winners!

Original Apps #1: Karma Machine

Karma Machine is a wonderful app for browsing Reddit, and what geek wouldn’t want a good Reddit app?  Developed by Brian Robles, Karma Machine has nearly everything you could want in a Reddit app, and takes advantage of touch gestures to make it easy to upvote and downvote both articles and comments.  It even supports user accounts so you can see your favorite subreddits easily.  On top of its functionality, Karma Machine is also visually appealing, with a good mix of animations, overlays and overall use of colors and layouts.  It is simply one of the best Reddit clients on any platform (having written my own Reddit client, that’s saying something!), and having it as an original Ubuntu app makes it a valuable addition to our ecosystem.  With all that, it’s little wonder that Karma Machine was tied for the top spot on the judges’ list!

Original Apps #1: Saucy Bacon

Something for the foodies among us, Saucy Bacon is a great way to find and manage recipes for your favorite dish. Backed by food2fork.com, this app lets you search for recipes from all over the web.  You can save them for future reference, and mark your favorites for easy access over and over again.  And since any serious cook is going to modify a recipe to their own tastes, Saucy Bacon even lets you edit recipes downloaded from somewhere else.  You can of course add your own unique recipe to the database as well.  It even lets you add photos to the recipe card directly from the camera, showing off some nice integration with the Ubuntu SDK’s sensor APIs and hardware capabilities.  All of this mouth-watering goodness secured developer Giulio Collura’s Saucy Bacon app a tie for the #1 spot for original Ubuntu apps in our contest.

Ported Apps #1: Snake


The game Snake has taken many forms on many platforms throughout the years.  Its combination of simple rules and ever-increasing difficulty has made it a popular way to kill time for decades.  Developer Brad Wells has taken this classic game from Nokia’s discontinued Meego/Harmattan mobile OS, which used a slightly older version of Qt for app development, and updated it to work on Ubuntu using the Ubuntu SDK components.  Meego had a large number of high quality apps written for it back in its day, and this game proves that Ubuntu for phones and tablets can give those apps a new lease on life.

Go and get them all!

The 2013 Ubuntu App Showdown was an opportunity for us to put the new Ubuntu SDK beta through some real-world testing, and kick off a new app ecosystem for Ubuntu.  During the course of these six weeks we’ve received great feedback from our developer community, worked out a large number of bugs in the SDK, and added or plan to add many new features to our platform.

In addition to being some of the first users of the Ubuntu SDK, the app developers were also among the first to use the new Click packaging format and tools as well as the new app upload process that we’ve been working on to reduce review times and ease the process of publishing apps.  The fact that all of the submitted apps have already been published in the new app store is a huge testament to the success of that work, and to the engineers involved in designing and delivering it.

Once again congratulations to Brian Robles, Giulio Collura and Brad Wells, and a big thank you to everybody who participated or helped those who participated, and all of the engineers who have worked on building the Ubuntu SDK, Click tools and app store.  And if you have a supported device, you should try out the latest Ubuntu images, and try these and the many other apps already available for it.  And if you’re an app developer, or want to become an app developer, now is your time to get started with the Ubuntu SDK!

Read more
David Planella

Almost everything is ready for the judges to start reviewing the Ubuntu App Showdown apps next week: exciting times ahead!


As of now, all applications that were submitted for the App Showdown contest have been reviewed and submitted to the Software Store. They can also be installed and run from the Dash on an Ubuntu phone, just two taps away.

We will be doing a final round of testing on Monday to double-check all apps indeed install and run flawlessly. We will then set up the review forms for the judges and also publish the final list of contest apps.

Thanks to everyone who has participated in the contest. Good luck with the judging, you all rock!

“Roll of the dice” by Katie Harbath, under a Creative Commons Attribution 2.0 license.

Read more
Stéphane Graber

After over 3 months of development and experimentation, I’m now glad to announce that the system images are now the recommended way to deploy and update the 4 supported Ubuntu Touch devices, maguro (Galaxy Nexus), mako (Nexus 4), grouper (Nexus 7) and manta (Nexus 10).

Anyone using one of those devices can choose to switch to the new images using: phablet-flash ubuntu-system

Once that’s done, further updates will be pushed over the air and can be applied through the Updates panel in the System Settings.

Ubuntu Touch Upgrader

You should be getting a new update every few days, whenever an image is deemed of sufficient quality for public consumption. Note that the downloader UI doesn’t yet show progress, so if it doesn’t appear to do anything, that doesn’t mean it’s not working.

Those new images are read-only except for a few selected files and for the user profile and data; this is a base requirement for the delta updates to work properly.
However, if the work you’re doing requires the installation of extra non-click packages, such as developing on your device using the SDK, you have two options:

  1. Stick to the current flipped images which we’ll continue to generate for the foreseeable future.
  2. Use the experimental writable flag by doing touch /userdata/.writable_image and rebooting your device.
    This will make / writable again. However, beware that applying image updates on such a system will lead to unknown results, so if you do choose to use this flag, you’ll have to manually update your device using apt-get (and possibly have to unmount/remount some of the bind-mounted files depending on which package needs to be updated). A minimal command sequence is sketched after this list.
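
As a minimal sketch of option 2, assuming you have adb access to the device from a host machine:

# Create the experimental writable flag on the device, then reboot it
adb shell touch /userdata/.writable_image
adb reboot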

From now on, the QA testing effort will focus on those new images rather than the standard flipped ones. I’d also highly recommend that all our application developers at least test their apps with those images and report any bugs they see in #ubuntu-touch (irc.freenode.net).

Read more
Michael Hall

Today we are announcing our second Ubuntu App Showdown! Contestants will have six weeks to build and publish their apps using the new Ubuntu SDK and Ubuntu Touch platform. Both original apps and ported apps, native and HTML5, will qualify for this competition.


The winners of this contest will each receive an LG Nexus 4 phone running Ubuntu Touch with their application pre-installed. Furthermore, each of the winners will have an opportunity to have their app included in the default Ubuntu install images for phones and tablets.

All valid entries will also become available for install on Ubuntu Touch devices from the Apps lens in the Dash, using the new Click packages and MyApps submission process.

Judges

The jury will be composed of five judges:

  • Jono Bacon, Ubuntu Community Manager
  • Joey-Elijah Sneddon, writer and editor-in-chief of OMG!Ubuntu
  • Lisette Slegers, User Experience Designer at Canonical
  • Nekhelesh Ramananthan, Ubuntu Touch Core App developer
  • Bill Filler, Engineering Manager for the Phone & Tablet App Team

Review criteria

The jury will judge applications according to the following criteria:

  • General Interest – apps that are of more interest to general phone users will be scored higher. We recommend identifying what most phone users want to see, and identifying gaps that your app could fill.
  • Features – a wide range of useful and interesting features.
  • Quality – a high quality, stable, and bug-free application experience.
  • Design – your app should harness the Ubuntu Design Guidelines so it looks, feels, and operates like an Ubuntu app.
  • Awareness / Promotion – we will award extra points to those of you who blog, tweet, post to Facebook and Google+, and otherwise share updates and information about your app as it progresses.

If you are not a programmer and want to share some ideas for cool apps, be sure to add and vote on apps on our reddit page.

How To Enter

The contest is free to enter and open to everyone.

The six-week period starts on Wed 7th August 2013!

Enter the Ubuntu App Showdown

Read more
Steve George

Today we are pleased to announce the beta release of the Ubuntu SDK! The SDK is the toolkit that will power Ubuntu’s convergence revolution, giving you one platform and one API for all Ubuntu form factors. This lets you write your app one time, in one way, and it will work everywhere.  You can read the full Ubuntu SDK Beta announcement here.

For the developers who are already writing apps using the Ubuntu SDK, most of the beta’s features will already be known, as they have been landing in the daily releases as they were finished. Here’s a list of the features that have been added since the alpha:

  • Cordova Ubuntu HTML5 app template – leverage the Apache Cordova APIs to write Ubuntu apps with web technologies: HTML, JavaScript and CSS. Write your first HTML5 app with the Cordova Ubuntu tutorial.
  • Ubuntu SDK HTML5 theme – a companion to all HTML5 apps: stylesheets and JavaScript code to provide the same look and feel as native apps
  • Responsive layout – applications can now adopt a more natural layout depending on form factor (phone, tablet, desktop) and orientation
  • Scope template – Scopes enable operators to prioritise their content, to achieve differentiation without fragmentation. Now easier to create with a code template
  • Click packaging preview – initial implementation of the Click technology to distribute applications. Package your apps with Click at the press of a button (see the sketch after this list)
  • Theme engine improvements – a reworked theme engine to make it easier and more flexible to customise the look and feel of your app
  • Unified Actions API – define actions to be used across different Ubuntu technologies: the HUD, App Indicators, the Launcher, the Messaging Menu
  • U1DB integration – the SDK now provides a database API to easily synchronise documents between devices, using the Ubuntu One cloud
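
Since Click packaging is only a preview at this point, the workflow may still change, but as a rough sketch, building a package from an app directory that contains a manifest.json looks something like this (“myapp/” being a hypothetical path):

# Build a .click package from an app directory (hypothetical path)
click build myapp/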

Some of the biggest news here is the Cordova support and HTML5 theme, which together deliver on our goal of making first-class HTML5 apps that look and feel like native apps.  Cordova support means that apps written using the PhoneGap framework can be easily ported to Ubuntu Touch, and the HTML5 theme, written largely by community developer Adnane Belmadiaf, will allow those apps to match the native SDK components both in the way they look and in the way the user interacts with them.

The Responsive Layouts, which landed in the daily SDK packages weeks ago, give developers the ability to adjust their application’s GUI dynamically at runtime, depending on the amount of screen space available or any number of other variables.  This is one key to making convergent apps that can adapt to be useful on both small touch screens and large monitors with a keyboard and mouse.

We’ve also put out the first set of Click packaging tools, which will provide an easier way for developers to package and distribute their applications both on their own and through the Ubuntu Software Center.  There is still a lot more work to do before all of the Click infrastructure is in place, but for now developers can start trying it out and getting a feel for it.

All of that and more is now available, so grab the latest SDK packages, read the QML and HTML5 app development tutorials, and get a head start building your convergent Ubuntu application today!

Read more
Stéphane Graber

Some of you may be aware that I, along with Barry Warsaw and Ondrej Kubik, have been working on image based upgrades for Ubuntu Touch.
This is going to be the official method to update any Ubuntu Touch device. When using it, the system will effectively be read-only, with updates downloaded over the air from a central server and applied in a consistent way across all devices.
Design details may be found at: http://wiki.ubuntu.com/ImageBasedUpgrades

After several months of careful design and implementation, we are now ready to get more testers. We are producing daily images for our 4 usual devices, Galaxy Nexus (maguro), Nexus 4 (mako), Nexus 7 (grouper) and Nexus 10 (manta).
At this point, only those devices are supported. We’ll soon be working with the various ports to see how to get them running on the new system.

So what’s working at this point?

  • Daily delta images are generated and published to
    http://system-image.ubuntu.com
  • We have a command line client tool (system-image-cli), an update server and an upgrader sitting in the recovery partition
  • The images usually boot and work

What doesn’t work?

  • Installing apps as the system partition is read-only and we’re waiting for click packages to be fully implemented in our images
  • Data migration. We haven’t implemented any migration script from the current images to the new ones, so switching will wipe everything from your device
  • Possibly quite a few more features I haven’t tested yet

So how can I help?

You can help us if:

  • You have one of the 4 supported devices
  • You don’t use that device for your everyday work
  • You don’t need to install any extra apps
  • You don’t care about losing all your existing data
  • You’re usually able to use adb/fastboot to recover from any problems that might happen

If you don’t fit all of the above criteria, please stick to the current flipped images.
If you think you’re able to help us and want to test those new images, then here’s how to switch to them:

  1. Get the latest version of phablet-tools (>= 0.15+13.10.20130720.1-0ubuntu1)
  2. Boot your device
  3. Back up anything you may want to keep as it’ll be wiped clean!!!
  4. Run: phablet-flash --ubuntu-bootstrap
  5. Wait for it to finish downloading and installing
  6. You’re done!
  7. To apply any further update, use: adb shell system-image-cli
    (never use phablet-flash after the initial flash, updates can only be applied through system-image-cli!)

Reverting to standard flipped images:

  • Boot your device
  • Back up anything you may want to keep as it’ll be wiped clean!!!
  • Run: phablet-flash --bootstrap
  • Wait for it to finish downloading and installing
  • You’re back to standard flipped images!

To report bugs, the easiest is to go to:
https://launchpad.net/ubuntu-image-image/+filebug

We also all hang out in #ubuntu-touch on irc.freenode.net

Read more
Chris Johnston

I blogged a couple of weeks ago about the addition of the key performance indicators to the Ubuntu QA Dashboard. Since that post the QA team has been hard at work. We have added a bootspeed KPI to the dashboard, giving you a quick look at today vs yesterday. Another cool feature that has been added to the dashboard is the addition of bug information. Previously the dashboard just provided a link to the bug in Launchpad. The dashboard now fetches the bug data from Launchpad and displays it for you when the mouse hovers over the link. No more having to click through to see what the bug is that is causing issues!

The other big additions go back to one of the most basic types of testing that we do: the smoke test. We have added two new big things to smoke testing in the past few weeks. The first is that you are now able to see, from the dashboard, which tests pass and which fail. The new test case results page shows you quite a number of things about the testing that was done. You get the basics that you see on the other smoke testing pages (total tests, pass/fail/error count, pass rate, as well as image and machine information), but you now see a list of test cases and their return codes. As you see on the results page I linked to, there were 19 test cases that ran plus four setup type commands. The setup commands are shown since it is possible for them to error. Clicking on the individual command types will give you more details about the specific test, including the test suite and the command that was run. This is all valuable data for determining the quality of Ubuntu each day and easily pinpointing any problems.

The final thing that has changed with smoke testing isn’t so much a change in the dashboard but the addition of a new type of test. The QA team is now running daily Ubuntu Touch testing with autopilot. Adding the autopilot testing provides us with a new and much better grasp of the daily quality of Ubuntu. Previously, we relied on manual testing for much of the functional testing of Ubuntu to determine if things were the way they were supposed to be. While manual testing is still an important part of the overall indication of quality, the addition of the autopilot tests gives us the ability to test many more things at a much higher frequency than relying completely on manual testing. The autopilot testing results show up in the smoke testing results each day after they run.

Read more