Canonical Voices

Posts tagged with 'canonical voices'

Stéphane Graber

This is post 4 out of 10 in the LXC 1.0 blog post series.

Running foreign architectures

By default LXC will only let you run containers of one of the architectures supported by the host. That makes sense since after all, your CPU doesn’t know what to do with anything else.

Except that we have this convenient package called “qemu-user-static” which contains a whole bunch of emulators for quite a few interesting architectures. The most common and useful of those is qemu-arm-static which will let you run most armv7 binaries directly on x86.

The “ubuntu” template knows how to make use of qemu-user-static, so you can simply check that you have the “qemu-user-static” package installed, then run:

sudo lxc-create -t ubuntu -n p3 -- -a armhf

After a rather long bootstrap, you’ll get a new p3 container which will be mostly running Ubuntu armhf. I’m saying mostly because the qemu emulation comes with a few limitations, the biggest of which is that any piece of software using the ptrace() syscall will fail and so will anything using netlink. As a result, LXC will install the host architecture version of upstart and a few of the networking tools so that the containers can boot properly.

stgraber@castiana:~$ file /bin/ls
/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=e50e0a5dadb8a7f4eaa2fd715cacb9842e157dc7, stripped
stgraber@castiana:~$ sudo lxc-start -n p3 -d
stgraber@castiana:~$ sudo lxc-attach -n p3
root@p3:/# file /bin/ls
/bin/ls: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=88ff013a8fd9389747fb1fea1c898547fb0f650a, stripped
root@p3:/# exit
stgraber@castiana:~$ sudo lxc-stop -n p3
stgraber@castiana:~$

Hooks

Since people like to script their containers and our configuration can't always accommodate every single use case, we've introduced a set of hooks which you may use.

Each hook is simply a path to an executable which LXC will run at a specific time in the lifetime of the container. Those executables are also passed a set of useful environment variables so they can easily know what container invoked them and what to do.

The currently available hooks are (details in lxc.conf(5)):

  • lxc.hook.pre-start (called before any initialization is done)
  • lxc.hook.pre-mount (called after creating the mount namespace but before mounting anything)
  • lxc.hook.mount (called after the mounts but before pivot_root)
  • lxc.hook.autodev (identical to mount but only called if using autodev)
  • lxc.hook.start (called in the container right before /sbin/init)
  • lxc.hook.post-stop (run after the container has been shut down)
  • lxc.hook.clone (called when cloning a container into a new one)

Each network section may also define two additional hooks:

  • lxc.network.script.up (called in the network namespace after the interface was created)
  • lxc.network.script.down (called in the network namespace before destroying the interface)

All of those hooks may be specified as many times as you want in the configuration so you can use each hooking point multiple times.

As a simple example, let's add the following to our "p1" container:

lxc.hook.pre-start = /var/lib/lxc/p1/pre-start.sh

And create the hook itself at /var/lib/lxc/p1/pre-start.sh:

#!/bin/sh
echo "arguments: $*" > /tmp/test
echo "environment:" >> /tmp/test
env | grep LXC >> /tmp/test

Make it executable (chmod 755) and then start the container.
Checking /tmp/test you should see:

arguments: p1 lxc pre-start
environment:
LXC_ROOTFS_MOUNT=/usr/lib/x86_64-linux-gnu/lxc
LXC_CONFIG_FILE=/var/lib/lxc/p1/config
LXC_ROOTFS_PATH=/var/lib/lxc/p1/rootfs
LXC_NAME=p1
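
The network hooks follow the same pattern. As a sketch (the paths here are purely illustrative), you could log what lxc.network.script.up receives by adding this to the network section of the configuration:

lxc.network.script.up = /var/lib/lxc/p1/network-up.sh

And creating /var/lib/lxc/p1/network-up.sh as an executable script:

#!/bin/sh
# Runs in the network namespace right after the interface is created;
# see lxc.conf(5) for the exact arguments passed to network hooks.
echo "network-up arguments: $*" >> /tmp/test-network
env | grep LXC >> /tmp/test-network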

Android containers

I’ve often been asked whether it was possible to run Android in an LXC container. Well, the short answer is yes. However it’s not very simple and it really depends on what you want to do with it.

The first thing you'll need if you want to do this is to get your machine running an Android kernel; you'll also need any modules required by Android built and loaded before you can start the container.

Once you have that, you’ll need to create a new container by hand.
Let's put it in "/var/lib/lxc/android/". In there, you need a configuration file similar to this one:

lxc.rootfs = /var/lib/lxc/android/rootfs
lxc.utsname = armhf

lxc.network.type = none

lxc.devttydir = lxc
lxc.tty = 4
lxc.pts = 1024
lxc.arch = armhf
lxc.cap.drop = mac_admin mac_override
lxc.pivotdir = lxc_putold

lxc.hook.pre-start = /var/lib/lxc/android/pre-start.sh

lxc.aa_profile = unconfined

/var/lib/lxc/android/pre-start.sh is where the interesting bits happen. It needs to be an executable shell script, containing something along the lines of:

#!/bin/sh
mkdir -p $LXC_ROOTFS_PATH
mount -n -t tmpfs tmpfs $LXC_ROOTFS_PATH

cd $LXC_ROOTFS_PATH
cat /var/lib/lxc/android/initrd.gz | gzip -d | cpio -i

# Create /dev/pts if missing
mkdir -p $LXC_ROOTFS_PATH/dev/pts

Then get the initrd for your device and place it in /var/lib/lxc/android/initrd.gz.

At that point, when starting the LXC container, the Android initrd will be unpacked on a tmpfs (similar to Android’s ramfs) and Android’s init will be started which in turn should mount any partition that Android requires and then start all of the usual services.

Because no apparmor, cgroup or even network configuration is applied to it, the container will have a lot of rights and will typically completely crash the machine. You unfortunately have to be familiar with the way Android works and not be afraid to modify its init scripts, if not its init process itself, so that only the bits you actually want are started.

I can't provide a generic recipe here as it completely depends on what you're interested in, what version of Android and what device you're using. But it's clearly possible to do, and you may want to look at Ubuntu Touch to see how we're doing it by default there.

One last note: Android's init isn't at /sbin/init, so you need to tell LXC where to find it with:

lxc-start -n android -- /init

LXC on Android devices

So now that we’ve seen how to run Android in LXC, let’s talk about running Ubuntu on Android in LXC.

LXC has been ported to bionic (Android’s C library) and while not feature-equivalent with its glibc build, it’s still good enough to be used.

Unfortunately, due to the kind of low-level access LXC requires and the fact that our primary focus isn't Android, installation could be easier… You won't find LXC on the Google Play Store and we won't provide you with an .apk that you can install.

Instead every time something changes in the upstream git branch, we produce a new tarball which can be downloaded here: https://jenkins.linuxcontainers.org/view/LXC/view/LXC%20builds/job/lxc-build-android/lastSuccessfulBuild/artifact/lxc-android.tar.gz

This build is known to work with Android >= 4.2 but will quite likely work on older versions too.

For this to work, you’ll need to grab your device’s kernel configuration and run lxc-checkconfig against it to see whether it’s compatible with LXC or not. Unfortunately it’s very likely that it won’t be… In that case, you’ll need to go hunt for the kernel source for your device, add the missing feature flags, rebuild it and update your device to boot your updated kernel.

As scary as this may sound, it's usually not that difficult as long as your device is unlocked and you're already using an alternate ROM like Cyanogen, which usually makes its kernel git tree easily available.
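
The check itself can be run from your computer. A minimal sketch, assuming your device exposes its running configuration at /proc/config.gz (not all kernels do):

# Pull the kernel config from the device, then point lxc-checkconfig at it
adb pull /proc/config.gz device-config.gz
CONFIG=device-config.gz lxc-checkconfig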

Once your device has a working kernel, all you need to do is unpack our tarball as root in your device’s / directory, copy an arm container to /data/lxc/containers/<container name>, get into /data/lxc and run “./run-lxc lxc-start -n <container name>”.
A few seconds later you’ll be greeted by a login prompt.

Stéphane Graber

This is post 3 out of 10 in the LXC 1.0 blog post series.

Exchanging data with a container

Because containers directly share their filesystem with the host, there are a lot of things that can be done to pass data into a container or to get stuff out.

The first obvious one is that you can access the container’s root at:
/var/lib/lxc/<container name>/rootfs/

That’s great, but sometimes you need to access data that’s in the container and on a filesystem which was mounted by the container itself (such as a tmpfs). In those cases, you can use this trick:

sudo ls -lh /proc/$(sudo lxc-info -n p1 -p -H)/root/run/

Which will show you what's in /run of the running container "p1".

Now, that’s great to have access from the host to the container, but what about having the container access and write data to the host?
Well, let's say we want to have our host's /var/cache/lxc shared with "p1". We can edit /var/lib/lxc/p1/fstab and append:

/var/cache/lxc var/cache/lxc none bind,create=dir

This line means: mount "/var/cache/lxc" from the host as "/var/cache/lxc" in the container (the lack of an initial / makes the path relative to the container's root), as a bind-mount ("none" fstype and "bind" option), creating any directory that's missing in the container ("create=dir").

Now restart "p1" and you'll see /var/cache/lxc in there, showing the same thing as you have on the host. Note that if you want the container to only be able to read the data, you can simply add "ro" as a mount flag in the fstab.
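
For example, the read-only equivalent of the line above would be:

/var/cache/lxc var/cache/lxc none bind,ro,create=dir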

Container nesting

One pretty cool feature of LXC (though admittedly not very useful to most people) is support for nesting. That is, you can run LXC within LXC with pretty much no overhead.

By default this is blocked in Ubuntu, as allowing it currently requires letting the container mount cgroupfs, which would let it escape any cgroup restrictions that are applied to it. That's not an issue in most environments, but if you don't trust your containers at all, then you shouldn't be using nesting at this point.

So to enable nesting for our "p1" container, edit /var/lib/lxc/p1/config and add:

lxc.aa_profile = lxc-container-default-with-nesting

And then restart "p1". Once that's done, install lxc inside the container. I usually recommend using the same version as the host, though that's not strictly required.

Once LXC is installed in the container, run:

sudo lxc-create -t ubuntu -n p1

As you’ve previously bind-mounted /var/cache/lxc inside the container, this should be very quick (it shouldn’t rebootstrap the whole environment). Then start that new container as usual.

At that point, you may now run lxc-ls on the host in nesting mode to see exactly what’s running on your system:

stgraber@castiana:~$ sudo lxc-ls --fancy --nesting
NAME    STATE    IPV4                 IPV6   AUTOSTART  
------------------------------------------------------
p1      RUNNING  10.0.3.82, 10.0.4.1  -      NO       
 \_ p1  RUNNING  10.0.4.7             -      NO       
p2      RUNNING  10.0.3.128           -      NO

There's no real limit to the number of levels you can go, though as fun as it may be, it's hard to imagine why 10 levels of nesting would be of much use to anyone :)

Raw network access

In the previous post I mentioned passing raw devices from the host inside the container. One such container I use relatively often is when working with a remote network over a VPN. That network uses OpenVPN and a raw ethernet tap device.

I needed a completely isolated system to access that VPN, so I wouldn't get mixed routes and it'd appear just like any other machine to the machines on the remote site.

All I had to do to make this work was set my container’s network configuration to:

lxc.network.type = phys
lxc.network.hwaddr = 00:16:3e:c6:0e:04
lxc.network.flags = up
lxc.network.link = tap0
lxc.network.name = eth0

Then all I have to do is start OpenVPN on my host, which will connect and set up tap0, then start the container, which will steal that interface and use it as its own eth0. The container will then use DHCP to grab an IP and will behave just as if it were a physical machine connected directly to the remote network.

Stéphane Graber

This is post 2 out of 10 in the LXC 1.0 blog post series.

More templates

So at this point you should have a working Ubuntu container that's called "p1" and was created using the default template, called simply enough "ubuntu".

But LXC supports much more than just standard Ubuntu. In fact, in current upstream git (and daily PPA), we support Alpine Linux, Alt Linux, Arch Linux, busybox, CentOS, Cirros, Debian, Fedora, OpenMandriva, OpenSUSE, Oracle, Plamo, sshd, Ubuntu Cloud and Ubuntu.

All of those can usually be found in /usr/share/lxc/templates. They also all typically have extra advanced options which you can get to by passing “--help” after the “lxc-create” call (the “--” is required to split “lxc-create” options from the template’s).
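
For example, to see what's available and what extra options the "ubuntu" template understands (a sketch; lxc-create requires a container name even when you're only asking for the template's help):

ls /usr/share/lxc/templates
sudo lxc-create -t ubuntu -n tmp -- --help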

Writing extra templates isn't too difficult. They basically are executables (all shell scripts so far, though that's not a requirement) which take a set of standard arguments and are expected to produce a working rootfs in the path that's passed to them.

One thing to be aware of is that, due to missing tools, not all distros can be bootstrapped on all distros; it's usually best to just try. We're always interested in making those work on more distros, even if that means using some rather weird tricks (as is done in the fedora template), so if you have a specific combination which doesn't work at the moment, patches are definitely welcome!

Anyway, enough talking for now, let’s go ahead and create an Oracle Linux container that we’ll force to be 32bit.

sudo lxc-create -t oracle -n p2 -- -a i386

On most systems, this will initially fail, telling you to install the "rpm" package first, which is needed for bootstrap reasons. So install "rpm" and "yum", then try again.

After some time downloading RPMs, the container will be created, then it’s just a:

sudo lxc-start -n p2

And you'll be greeted by the Oracle Linux login prompt (root / root).

At that point since you started the container without passing “-d” to “lxc-start”, you’ll have to shut it down to get your shell back (you can’t detach from a container which wasn’t started initially in the background).

Now, you may be wondering why Ubuntu has two templates. The Ubuntu template which I've been using so far does a local bootstrap using "debootstrap", basically building your container from scratch, whereas the Ubuntu Cloud template (ubuntu-cloud) downloads a pre-generated cloud image (identical to what you'd get on EC2 or other cloud services) and starts it. That image also includes cloud-init and supports the standard cloud metadata.

It’s a matter of personal choice which you like best. I personally have a local mirror so the “ubuntu” template is much faster for me and I also trust it more since I know everything was downloaded from the archive in front of me and assembled locally on my machine.

One last note on templates. Most of them use a local cache, so the initial bootstrap of a container for a given arch will be slow; any subsequent one will just be a local copy from the cache and will be much faster.

Auto-start

So what if you want to start a container automatically at boot time?

Well, that's been supported for a long time in Ubuntu and other distros by using some init scripts and symlinks in /etc, but very recently (two days ago), this was implemented cleanly upstream.

So here’s how auto-started containers work nowadays:

As you may know, each container has a configuration file typically under
/var/lib/lxc/<container name>/config

That file is key = value with the list of valid keys being specified in lxc.conf(5).

The startup related values that are available are:

  • lxc.start.auto = 0 (disabled) or 1 (enabled)
  • lxc.start.delay = 0 (delay in seconds to wait after starting the container)
  • lxc.start.order = 0 (priority of the container, higher value means starts earlier)
  • lxc.group = group1,group2,group3,… (groups the container is a member of)

When your machine starts, an init script will ask "lxc-autostart" to start all containers of a given group (by default, all containers which aren't in any) in the right order, waiting the specified time between them.

To illustrate that, edit /var/lib/lxc/p1/config and append those lines to the file:

lxc.start.auto = 1
lxc.group = ubuntu

And /var/lib/lxc/p2/config and append those lines:

lxc.start.auto = 1
lxc.start.delay = 5
lxc.start.order = 100

Doing that means that only the p2 container will be started at boot time (since only containers without a group are started by default); its order value won't matter since it's alone, and the init script will wait 5s before moving on.

You may check what containers are automatically started using “lxc-ls”:

stgraber@castiana:~$ sudo lxc-ls --fancy
NAME    STATE    IPV4        IPV6                                    AUTOSTART     
---------------------------------------------------------------------------------
p1      RUNNING  10.0.3.128  2607:f2c0:f00f:2751:216:3eff:feb1:4c7f  YES (ubuntu)
p2      RUNNING  10.0.3.165  2607:f2c0:f00f:2751:216:3eff:fe3a:f1c1  YES

Now you can also manually play with those containers using the "lxc-autostart" command, which lets you start/stop/kill/reboot any container marked with lxc.start.auto=1.

For example, you could do:

sudo lxc-autostart -a

Which will start any container that has lxc.start.auto=1 (ignoring the lxc.group value), which in our case means it'll first start p2 (because of order = 100), then wait 5s (because of delay = 5), then start p1 and return immediately afterwards.

If at that point you want to reboot all containers that are in the “ubuntu” group, you may do:

sudo lxc-autostart -r -g ubuntu

You can also pass "-L" with any of those commands, which will simply print which containers would be affected and what the delays would be, but won't actually do anything (useful for integrating with other scripts).
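
For instance, to preview what a full autostart run would do without actually starting anything:

sudo lxc-autostart -a -L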

Freezing your containers

Sometimes containers may be running daemons that take time to shutdown or restart, yet you don't want to keep the container running while you're not actively using it.

In such cases, “sudo lxc-freeze -n <container name>” can be used. That very simply freezes all the processes in the container so they won’t get any time allocated by the scheduler. However the processes will still exist and will still use whatever memory they used to.

Once you need the service again, just call “sudo lxc-unfreeze -n <container name>” and all the processes will be restarted.
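
A quick illustration with our "p1" container:

sudo lxc-freeze -n p1

# lxc-info should now report the container as FROZEN
sudo lxc-info -n p1 -s

sudo lxc-unfreeze -n p1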

Networking

As you may have noticed in the configuration file while you were setting the auto-start settings, LXC has a relatively flexible network configuration.
By default in Ubuntu we allocate one "veth" device per container, bridged into a "lxcbr0" bridge on the host, on which we run a minimal dnsmasq DHCP server.

While that's usually good enough for most people, you may want something slightly more complex, such as multiple network interfaces in the container or passing through physical network interfaces. The details of all of those options are listed in lxc.conf(5) so I won't repeat them here, but here's a quick example of what can be done.

lxc.network.type = veth
lxc.network.hwaddr = 00:16:3e:3a:f1:c1
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0

lxc.network.type = veth
lxc.network.link = virbr0
lxc.network.name = virt0

lxc.network.type = phys
lxc.network.link = eth2
lxc.network.name = eth1

With this setup my container will have 3 interfaces, eth0 will be the usual veth device in the lxcbr0 bridge, eth1 will be the host’s eth2 moved inside the container (it’ll disappear from the host while the container is running) and virt0 will be another veth device in the virbr0 bridge on the host.

Those last two interfaces don’t have a mac address or network flags set, so they’ll get a random mac address at boot time (non-persistent) and it’ll be up to the container to bring the link up.

Attach

Provided you are running a sufficiently recent kernel, that is 3.8 or higher, you may use the "lxc-attach" tool. Its most basic feature is to give you a standard shell inside a running container:

sudo lxc-attach -n p1

You may also use it from scripts to run actions in the container, such as:

sudo lxc-attach -n p1 -- restart ssh

But it’s a lot more powerful than that. For example, take:

sudo lxc-attach -n p1 -e -s 'NETWORK|UTSNAME'

In that case, you'll get a shell that says "root@p1" (thanks to UTSNAME), and running "ifconfig -a" from there will list the container's network interfaces. But everything else will be that of the host. Passing "-e" also means that the cgroup, apparmor, … restrictions won't apply to any processes started from that shell.

This can be very useful at times to spawn software located on the host but inside the container's network or pid namespace.
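
For example (a sketch, assuming tcpdump is installed on the host), you can capture the container's traffic with the host's tcpdump, since only the network namespace is attached:

sudo lxc-attach -n p1 -e -s NETWORK -- tcpdump -ni eth0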

Passing devices to a running container

It’s great being able to enter and leave the container at will, but what about accessing some random devices on your host?

By default LXC will prevent any such access using the devices cgroup as a filtering mechanism. You could edit the container configuration to allow the right additional devices and then restart the container.

But for one-off things, there’s also a very convenient tool called “lxc-device”.
With it, you can simply do:

sudo lxc-device add -n p1 /dev/ttyUSB0 /dev/ttyS0

Which will add (mknod) /dev/ttyS0 in the container with the same type/major/minor as /dev/ttyUSB0 and then add the matching cgroup entry allowing access from the container.

The same tool also allows moving network devices from the host to within the container.
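
For example (assuming the host has a spare eth2 to give away):

sudo lxc-device add -n p1 eth2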

Stéphane Graber

This is post 1 out of 10 in the LXC 1.0 blog post series.

So what’s LXC?

Most of you probably already know the answer to that one, but here it goes:

“LXC is a userspace interface for the Linux kernel containment features.
Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.”

I’m one of the two upstream maintainers of LXC along with Serge Hallyn.
The project is quite actively developed with milestones every month and a stable release coming up in February. It’s so far been developed by 67 contributors from a wide range of backgrounds and companies.

The project is mostly developed on github: http://github.com/lxc
We have a website at: http://linuxcontainers.org
And mailing lists at: http://lists.linuxcontainers.org

LXC 1.0

So what’s that 1.0 release all about?

Well, simply put it’s going to be the first real stable release of LXC and the first we’ll be supporting for 5 years with bugfix releases. It’s also the one which will be included in Ubuntu 14.04 LTS to be released in April 2014.

It’s also going to come with a stable API and a set of bindings, quite a few interesting new features which will be detailed in the next few posts and support for a wide range of host and guest distributions (including Android).

How to get it?

I'm assuming most of you will be using Ubuntu. For the next few posts, I'll be using the current upstream daily builds on Ubuntu 14.04 myself, but we maintain daily builds for 12.04, 12.10, 13.04, 13.10 and 14.04, so if you want the latest upstream code, you can use our PPA.

Alternatively, LXC is also directly in Ubuntu and quite usable since Ubuntu 12.04 LTS. You can choose to use the version which comes with whatever release you are on, or you can use one of the backported versions we maintain.
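
A sketch of adding one of our PPAs (the exact PPA name, ppa:ubuntu-lxc/daily here, is an assumption; the stable backports live in a separate PPA):

sudo apt-add-repository ppa:ubuntu-lxc/daily
sudo apt-get update
sudo apt-get install lxc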

If you want to build it yourself, you can do (not recommended when you can simply use the packages for your distribution):

git clone git://github.com/lxc/lxc
cd lxc
sh autogen.sh
# You will probably want to run the configure script with --help and then set the paths
./configure
make
sudo make install

What about that first container?

Oh right, that was actually the goal of this post wasn’t it?

Ok, so now that you have LXC installed, hopefully using the Ubuntu packages, it’s really as simple as:

# Create a "p1" container using the "ubuntu" template and the same version of Ubuntu
# and architecture as the host. Pass "-- --help" to list all available options.
sudo lxc-create -t ubuntu -n p1

# Start the container (in the background)
sudo lxc-start -n p1 -d

# Enter the container in one of those ways
## Attach to the container's console (ctrl-a + q to detach)
sudo lxc-console -n p1

## Spawn bash directly in the container (bypassing the console login), requires a >= 3.8 kernel
sudo lxc-attach -n p1

## SSH into it
sudo lxc-info -n p1
ssh ubuntu@<ip from lxc-info>

# Stop the container in one of those ways
## Stop it from within
sudo poweroff

## Stop it cleanly from the outside
sudo lxc-stop -n p1

## Kill it from the outside
sudo lxc-stop -n p1 -k

And there you go, that's your first container. You'll note that everything usually just works on Ubuntu. Our kernels have support for all the features that LXC may use and our packages set up a bridge and a DHCP server that the containers will use by default.
All of that is obviously configurable and will be covered in the coming posts.

Stéphane Graber

So it’s almost the end of the year, I’ve got about 10 days of vacation for the holidays and a bit of time on my hands.

Since I've been doing quite a bit of work on LXC lately in preparation for the LXC 1.0 release early next year, I thought it'd be a good use of some of that extra time to blog about the current state of LXC.

As a result, I’m preparing a series of 10 blog posts covering what I think are some of the most exciting features of LXC. The planned structure is:

While they are all titled LXC 1.0, most of the things I’ll be showing will work just as well on older LXC. However some of the features will need a very very recent version of LXC (as in, current upstream git). I’ll try to make that clear and will explain how to use our stable backports in Ubuntu or current upstream snapshots from our PPA.

I’ll be updating this first blog post with links to all of the posts in the series. So if you want to bookmark or refer to these, please use this post.

Stéphane Graber

After over 3 months of development and experimentation, I’m now glad to announce that the system images are now the recommended way to deploy and update the 4 supported Ubuntu Touch devices, maguro (Galaxy Nexus), mako (Nexus 4), grouper (Nexus 7) and manta (Nexus 10).

Anyone using one of those devices can choose to switch to the new images using: phablet-flash ubuntu-system

Once that’s done, further updates will be pushed over the air and can be applied through the Updates panel in the System Settings.

Ubuntu Touch Upgrader

You should be getting a new update every few days, whenever an image is deemed of sufficient quality for public consumption. Note that the downloader UI doesn't yet show progress, so if it doesn't seem to be doing anything, that doesn't mean it's not working.

Those new images are read-only except for a few selected files and for the user profile and data; this is a base requirement for the delta updates to work properly.
However if the work you’re doing requires installation of extra non-click packages, such as developing on your device using the SDK, you have two options:

  1. Stick to the current flipped images which we’ll continue to generate for the foreseeable future.
  2. Use the experimental writable flag by doing touch /userdata/.writable_image and rebooting your device.
    This will make / writable again; however, beware that applying image updates on such a system will lead to unknown results, so if you do choose to use this flag, you'll have to manually update your device using apt-get (and possibly have to unmount/remount some of the bind-mounted files depending on which package needs to be updated).

From now on, the QA testing effort will focus on those new images rather than the standard flipped ones. I'd also highly recommend that all our application developers at least test their apps with those images and report any bugs they see in #ubuntu-touch (irc.freenode.net).

Stéphane Graber

Some of you may be aware that I along with Barry Warsaw and Ondrej Kubik have been working on image based upgrades for Ubuntu Touch.
This is going to be the official method to update any Ubuntu Touch device. With it, the system is effectively read-only, with updates being downloaded over the air from a central server and applied in a consistent way across all devices.
Design details may be found at: http://wiki.ubuntu.com/ImageBasedUpgrades

After several months of careful design and implementation, we are now ready to get more testers. We are producing daily images for our 4 usual devices, Galaxy Nexus (maguro), Nexus 4 (mako), Nexus 7 (grouper) and Nexus 10 (manta).
At this point, only those devices are supported. We’ll soon be working with the various ports to see how to get them running on the new system.

So what’s working at this point?

  • Daily delta images are generated and published to
    http://system-image.ubuntu.com
  • We have a command line client tool (system-image-cli), an update server and an upgrader sitting in the recovery partition
  • The images usually boot and work

What doesn’t work?

  • Installing apps as the system partition is read-only and we’re waiting for click packages to be fully implemented in our images
  • Data migration. We haven’t implemented any migration script from the current images to the new ones, so switching will wipe everything from your device
  • Possibly quite some more features I haven’t tested yet

So how can I help?

You can help us if:

  • You have one of the 4 supported devices
  • You don’t use that device for your everyday work
  • You don’t need to install any extra apps
  • You don’t care about losing all your existing data
  • You’re usually able to use adb/fastboot to recover from any problems that might happen

If you don’t fit all of the above criteria, please stick to the current flipped images.
If you think you’re able to help us and want to test those new images, then here’s how to switch to them:

  1. Get the latest version of phablet-tools (>= 0.15+13.10.20130720.1-0ubuntu1)
  2. Boot your device
  3. Backup anything you may want to keep as it’ll be wiped clean!!!
  4. Run: phablet-flash --ubuntu-bootstrap
  5. Wait for it to finish downloading and installing
  6. You’re done!
  7. To apply any further update, use: adb shell system-image-cli
    (never use phablet-flash after the initial flash, updates can only be applied through system-image-cli!)

Reverting to standard flipped images:

  • Boot your device
  • Backup anything you may want to keep as it’ll be wiped clean!!!
  • Run: phablet-flash --bootstrap
  • Wait for it to finish downloading and installing
  • You’re back to standard flipped images!

To report bugs, the easiest is to go to:
https://launchpad.net/ubuntu-image-image/+filebug

We also all hang out in #ubuntu-touch on irc.freenode.net

Stéphane Graber

NorthSec 2013

NorthSec logo

So, when I’m not busy working on Ubuntu, or on LXC, or on Edubuntu, or … I also spend some of my spare time preparing the upcoming NorthSec 2013 security contest which will be held from Friday the 5th of April to Sunday the 7th of April at ETS in downtown Montreal.

NorthSec can be seen as the successor of HackUS 2010 and HackUS 2011 which both were held where I currently live, in Sherbrooke, QC. This year, we’re moving to Montreal, in the hope of attracting more people, especially from other Canadian provinces and from abroad.

I’m personally mostly involved with the internal infrastructure side of things, building the Ubuntu based infrastructure required to simulate the hundreds of servers and services used for the contest. All of that while making sure everything is rock solid and copes extremely well under pressure (considering what our contestants tend to throw at us).

I also usually get involved with some of the tracks, mostly the networking one, trying to think of really twisted setups, ranging from taking over an active IPv6 network to hijacking IPs by messing with a badly configured BGP router (both taken from past editions).

Outside of our twisted network challenges, we have quite a few more things to offer, here’s the current list of tracks for this year:

  • Trivias (they seem easy but people are known to have wasted hours on them)
  • Web (sql injection, xss anyone?)
  • Binaries (because we know you love those)
  • Networking (my track of choice)
  • Reverse Java

And if anyone manages to finish everything, don’t worry, we’ll come up with more.
As far as I know, we never had a single team get bored in the past two editions ;)

So if you’re interested in computer security, want to try to prove how good you are at finding security flaws and exploiting them or just want to see what that thing is all about, well you should consider a trip to Montreal in early April.
All the details you need are at: http://www.nsec.io/en

If you are a company interested in helping us with sponsorship, I hear that we’re always looking for more sponsors. So if that’s something you can help with, feel free to contact me directly at: stgraber at nsec dot io

Stéphane Graber

Anyone who met me probably knows that I like to run everything in containers.

A couple of weeks ago, I was attending the Ubuntu Developer Summit in Copenhagen, DK where I demoed how to run OpenGL code from within an LXC container. At that same UDS, all attendees also received a beta key for Steam on Linux.

Yesterday I finally received said key by e-mail and I’ve been experimenting with Steam a bit. Now, my laptop is running the development version of Ubuntu 13.04 and only has 64bit binaries. Steam is 32bit-only and Valve recommends running it on Ubuntu 12.04 LTS.

So I just spent a couple of hours writing a tool called steam-lxc, which uses LXC's new python API and a bunch more python magic to generate an Ubuntu 12.04 LTS 32bit container, install everything that's needed to run Steam, then install Steam itself and configure some tricks to get direct GPU access and access to pulseaudio for sound.

All in all, it only takes 3 minutes for the script to set up everything I need to run Steam and then start it.

Here’s a (pretty boring) screencast of the script in action:

This script has only been tested with Intel hardware on Ubuntu 13.04 64bit at this point, but the PPA contains builds for Ubuntu 12.04 and Ubuntu 12.10 too.

To get it on your machine just do:

  • sudo apt-add-repository ppa:ubuntu-lxc/stable
  • sudo apt-get update
  • sudo apt-get install steam-lxc
  • sudo mkdir -p /var/lib/lxc /var/cache/lxc

Then once that’s all installed, set it up with sudo steam-lxc create. This can take somewhere from 5 minutes to an hour depending on your internet connection.

And once the environment is all setup, you can start steam with sudo steam-lxc run.

The code can be found at: https://code.launchpad.net/~ubuntu-lxc/lxc/steam-lxc

You can leave your feedback as comment here and if you want to improve the script, merge proposals are more than welcome.
I don't have any hardware requiring proprietary drivers, but I'd expect Steam to fail on such hardware as the drivers won't get properly installed in the container. Adding code to deal with those is pretty easy and I'd love to get some patches for that!

Have fun!

Stéphane Graber

(tl;dr: Edubuntu 14.04 will include a new Edubuntu Server and Edubuntu tablet edition with a lot of cool new features, including a full-featured Active Directory compatible domain.)

Now that Edubuntu 12.10 is out the door and the Ubuntu Developer Summit in Copenhagen is just a week away, I thought it’d be an appropriate time to share our vision for Edubuntu 14.04.

This was so far only discussed in person with Jonathan Carter and a bit on IRC with other Edubuntu developers but I think it’s time to make our plans a bit more visible so we can get more feedback and hopefully get interested people together next week at UDS.

There are three big topics I’d like to talk about. Edubuntu desktop, Edubuntu server and Edubuntu tablet.

Edubuntu desktop

Edubuntu desktop is what we've been offering since the first Edubuntu release and what we'll obviously continue to offer, pretty much as it is today.
It’s not an area I plan on spending much time working on personally but I expect Jonathan to drive most of the work around this.

Basically what the Edubuntu desktop needs nowadays is a better application selection, better testing, better documentation, making sure our application selection works on all our supported platforms and is properly translated.

We’ll also have to refocus some of our efforts and will likely drop some things like our KDE desktop package that hasn’t been updated in years and was essentially doubling our maintenance work which is why we stopped supporting it officially in 12.04.

There are a lot of cool new tools we’ve heard of recently and that really should be packaged and integrated in Edubuntu.

Edubuntu Server

Edubuntu Server will be a new addition to the Edubuntu project, expected to ship in its final form in 14.04 and will be supported for 5 years as part of the LTS.

This is the area I’ll be spending most of my Edubuntu time on as it’s going to be using a lot of technologies I’ve been involved with over the years to offer what I hope will be an amazing server experience.

Edubuntu Server will essentially let you manage a network of Edubuntu, Ubuntu or Windows clients by creating a full featured domain (using samba4).

From the same install DVD as Edubuntu Desktop, you'll be able to simply choose to install a new Edubuntu Server and create a new domain, or, if you already have an Edubuntu domain or even an Active Directory domain, you'll be able to join an extra server to it for extra scalability or high-availability.

On top of that core domain feature, you’ll be able to add extra roles to your Edubuntu Server, the initial list is:

  • Web hosting platform – Will let you deploy new web services using JuJu so schools in your district or individual teachers can easily get their own website.
  • File server – A standard samba3 file server so all your domain members can easily store and retrieve files.
  • Backup server – Will automatically backup the important data from your servers and if you wish, from your clients too.
  • Schooltool – A school management web service, taking care of all the day to day school administration.

LTSP will also be part of that system as part of Edubuntu Terminal Server which will let you, still from our single install media, install as many new terminal servers as you want, automatically joining the domain, using the centralized authentication, file storage and backup capabilities of your Edubuntu Server.

As I mentioned, the Edubuntu DVD will let you install Edubuntu Desktop, Edubuntu Server and Edubuntu Terminal Server. You’ll simply be asked at installation time whether you want to join an Edubuntu Server or Active Directory domain or if you want your machine to be standalone.

Once installed, Edubuntu Server will be managed through a web interface driving LXC behind the scene to deploy new services, upgrade individual services or deploy new web services using JuJu.
Our goal is to have Edubuntu Server offer an appliance-like experience, never requiring any command line access to the system and easily supporting upgrades from a version to another.

For those wondering what the installation process will look like, I have some notes of the changes available at: http://paste.ubuntu.com/1289041/
I’m expecting to have the installer changes implemented by the time we start building our first 13.04 images.

The rest of Edubuntu Server will be progressively landing during the 13.04 cycle with an early version of the system being released with Edubuntu 13.04, possibly with only a limited selection of roles and without initial support for multiple servers and Active Directory integration.

While initially Edubuntu branded, our hope is that this work will be re-usable by Ubuntu and may one day find its way into Ubuntu Server.
Doing this as part of Edubuntu will give us more time and more flexibility to get it right, build a community around it and get user feedback before we try to get the rest of the world to use it too.

Edubuntu Tablet

During the Edubuntu 12.10 development cycle, the Edubuntu Council approved the sponsorship of 5 tablets by Revolution Linux which were distributed to some of our developers.

We’ve been doing daily armhf builds of Edubuntu, refined our package selections to properly work on ARM and spent countless hours fighting to get our tablet to boot (a ZaTab from ZaReason).
Even though it’s been quite a painful experience so far, we’re still planning on offering a supported armhf tablet image for 14.04, running something very close to our standard Edubuntu Desktop and also featuring integration with Edubuntu Server.

With all the recent news about Ubuntu on the Nexus 7, we’ll certainly be re-discussing what our main supported platform will be during next week’s UDS but we’re certainly planning on releasing 13.04 with experimental tablet support.

LTS vs non-LTS

For those who read our release announcement or visited our website lately, you certainly noticed the emphasis on using the LTS releases.
We really think that most Edubuntu users want something that’s stable, very well tested with regular updates and a long support time, so we’re now always recommending the use of the latest LTS release.

That doesn’t mean we’ll stop doing non-LTS release like the Mythbuntu folks recently decided to do, pretty far from that. What it means however is that we’ll more freely experiment in non-LTS releases so we can easily iterate through our ideas and make sure we release something well polished and rock solid for our LTS releases.

Conclusion

I’m really really looking forward to Edubuntu 14.04. I think the changes we’re planning will help our users a lot and will make it easier than ever to get school districts and individual schools to switch to Edubuntu for both their backend infrastructure with Edubuntu Server and their clients with Edubuntu Desktop and Edubuntu Tablet.

Now all we need is your ideas and if you have some, your time to make it all happen. We usually hang out in #edubuntu on freenode and can also be contacted on the edubuntu-devel mailing-list.

For those of you going to UDS, we'll try to get an informational session on Edubuntu Server scheduled on top of our usual Edubuntu session. If you're there and want to know more or want to help, please feel free to grab Jonathan or me in the hallway, at the bar or at one of the evening activities.

Stéphane Graber

One of our top goals for LXC upstream work during the Ubuntu 12.10 development cycle was reworking the LXC library, turning it from a private library mostly used by the other lxc-* commands into something that's easy for developers to work with and is accessible from other languages with some bindings.

Although the current implementation isn’t complete enough to consider the API stable and some changes will still happen to it over the months to come, we have pushed the initial implementation to the LXC staging branch on github and put it into the lxc package of Ubuntu 12.10.

The initial version comes with a python3 binding packaged as python3-lxc, that’s what I’ll use now to give you an idea of what’s possible with the API. Note that as we don’t have full user namespaces support at the moment, any code using the LXC API needs to run as root.

First, let’s start with the basics, creating a container, starting it, getting its IP and stopping it:

#!/usr/bin/python3
import lxc
container = lxc.Container("my_container")
container.create("ubuntu", {"release": "precise", "architecture": "amd64"})
container.start()
print(container.get_ips(timeout=10))
container.shutdown(timeout=10)
container.destroy()

So, pretty simple.
It’s also possible to modify the container’s configuration using the .get_config_item(key) and .set_config_item(key, value) functions. For those keys supporting multiple values, a list will be returned and a list will be accepted as a value by .set_config_item.

Network configuration can be accessed through the .network property which is essentially a list of all network interfaces of the container, properties can be changed that way or through .set_config_item and saved to the config file with .save_config().

The API isn't terribly well documented at this point; help messages are present for all functions, but there's no generated html help yet.

To get a better idea of the functions exported by the API, you may want to look at the API test script. This script uses all the functions and properties exported by the python module so it should be a reasonable reference.

Stéphane Graber

A few months ago, I received two test SIM cards for Orange Poland’s new IPv6 network.

The interesting thing about this network is that it's running IPv6 in a fairly unusual configuration, and it was interesting to see how to get that working on Ubuntu.

This network uses two separate APNs, one for IPv4 (internet) and one for IPv6 (internetipv6).
Using two separate APNs is certainly easier on the carrier's infrastructure side as they can get IPv6 online without actually changing anything on the IPv4 equipment; however, that means that any client wanting to use both protocols at once needs to use multiple PDP contexts.

I’m now going to detail how to manually configure ppp to connect to such a network:
/etc/ppp/peers/orange

noauth
connect "/usr/sbin/chat -e -f /etc/ppp/peers/orange-connect"
/dev/ttyACM0
460800
+ipv6

/etc/ppp/peers/orange-connect

TIMEOUT 5
ABORT BUSY
ABORT 'NO CARRIER'
ABORT VOICE
ABORT 'NO DIALTONE'
ABORT 'NO ANSWER'
ABORT DELAYED
ABORT ERROR
'' \nAT
TIMEOUT 12
OK ATH
OK ATE1
OK 'AT+CGDCONT=1,"IP","internet"'
OK 'AT+CGDCONT=2,"IPV6","internetipv6"'
OK ATD*99#
CONNECT ""

Then all that’s needed is a good old:

pon orange

And a few seconds later, I’m getting the following on ppp0:

ppp0      Link encap:Point-to-Point Protocol  
          inet addr:87.96.119.169  P-t-P:10.6.6.6  Mask:255.255.255.255
          inet6 addr: 2a00:f40:2100:ac9:8c1e:da60:93e2:c234/64 Scope:Global
          inet6 addr: 2a00:f40:2100:ac9::1/64 Scope:Global
          inet6 addr: fe80::1/10 Scope:Link
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:3 
          RX bytes:354 (354.0 B)  TX bytes:767 (767.0 B)

This config should work for any mobile network using a similar setup (likely to become more and more popular as the various RIRs are running out of IPv4).

Sadly ModemManager/NetworkManager don't support multiple PDP contexts yet, though it's being discussed upstream, so we can hope to see something land soon.

Apparently multiple PDP contexts support is also dependent on hardware. In my case, I've been using an "old" Nokia E51 over USB as I didn't have any luck getting this to work with an Android phone. My Nokia N900 also worked but required a custom kernel to be installed first to properly handle IPv6.

Stéphane Graber

With the DNS changes in Ubuntu 12.04, most development machines running with libvirt and lxc end up running quite a few DNS servers.

These DNS servers work fine when queried from a system on their network, but aren't integrated with the main dnsmasq instance and so won't let you resolve your VMs and containers from outside of their respective networks.

One way to solve that is to install yet another DNS resolver and use it to redirect between the various dnsmasq instances. That can quickly become tricky to set up and doesn't integrate too well with resolvconf and NetworkManager.

Seeing a lot of people wondering how to solve that problem, I took a few minutes yesterday to come up with an ssh configuration that'd allow one to access their containers and VMs using their names.

The result is the following, to add to your ~/.ssh/config file:

Host *.lxc
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(host $(echo %h | sed "s/\.lxc//g") 10.0.3.1 | tail -1 | awk '{print $NF}') %p

Host *.libvirt
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(host $(echo %h | sed "s/\.libvirt//g") 192.168.122.1 | tail -1 | awk '{print $NF}') %p

After that, things like:

  • ssh user@myvm.libvirt
  • ssh ubuntu@mycontainer.lxc

Will just work.

For LXC, you may also want to add a “User ubuntu” line to that config as it’s the default user for LXC containers on Ubuntu.
If you configured your bridges with a non-default subnet, you’ll also need to update the IPs or add more sections to the config.
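
For example, a variant of the LXC stanza with the default user set and a bridge on 10.0.5.1 (an arbitrary example subnet) would look like:

Host *.lxc
  User ubuntu
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(host $(echo %h | sed "s/\.lxc//g") 10.0.5.1 | tail -1 | awk '{print $NF}') %p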

These also turn off StrictHostKeyChecking and UserKnownHostsFile as my VMs and containers are local to my machine (reducing risk of MITM attacks) and tend to exist only for a few hours, to then be replaced by a completely different one with a different SSH host key. Depending on your setup, you may want to remove these lines.

~apw

The Internet has been alive with doomsaying since the IPv4 global address pool was parcelled out.  Now I do not subscribe to the view that the Internet is going to end imminently, but I do feel that if the technical people out there do not start playing with IPv6 soon then what hope is there for the masses?

In the UK getting native IPv6 is not a trivial task, only one ISP I can find seems to offer it and of course it is not the one I am with.  So what options do I have?  Well there are a number of different types of IPv4 tunnelling techniques such as 6to4 but these seem to require the ability to handle the transition on your NAT router, not an option here.  The other is a proper 6in4 tunnel to a tunnel broker but this needs an end-point.

As I have a local server, that makes a sensible anchor for such a tunnel.  Talking round with those in the know I settled on getting a tunnel from Hurricane Electric (HE), a company which gives out tunnels to individuals for free and seems to have a local presence for their tunnel hosts.  HE even supplies you with tools to cope with your endpoint having a dynamic address, handy.  So with an HE tunnel configuration in hand I set about making my backup server into my IPv6 gateway.

First I had to ensure that protocol 41 (the tunnelling protocol) was being forwarded to the appropriate host.  This is a little tricky as this required me to talk to the configurator for my wireless router.  With that passed on to my server I was able to start configuring the tunnel.

Following the instructions on my HE tunnel broker page, a simple cut-n-paste into /etc/network/interfaces added the new tunnel network device; a quick ifup and my server started using IPv6.  Interestingly my apt-cacher-ng immediately switched the backhaul of its incoming IPv4 requests to IPv6, no configuration needed.
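
For reference, the snippet HE hands out is a "v4tunnel" stanza along these lines (all addresses below are placeholders, the real ones come from the tunnel broker page):

auto he-ipv6
iface he-ipv6 inet6 v4tunnel
        # IPv6 address of your side of the tunnel
        address 2001:db8:1f0a:1359::2
        netmask 64
        # IPv4 address of the HE tunnel server
        endpoint 203.0.113.1
        # Local IPv4 address behind the NAT router
        local 192.0.2.10
        gateway 2001:db8:1f0a:1359::1
        ttl 255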

Enabling IPv6 for the rest of the network was surprisingly easy.  I had to install and configure radvd with my assigned prefix.  It also passed out information on the HE DNS servers, prioritising IPv6 in DNS lookup results.  No changes were required for any of the client systems; well, other than enabling firewalls.  Win.
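
A minimal radvd.conf for that looks something like this (the prefix and DNS address are placeholders; the RDNSS block is what advertises the DNS servers to clients):

interface eth0
{
        AdvSendAdvert on;
        # The /64 prefix assigned by HE
        prefix 2001:db8:1f0a:1359::/64
        {
        };
        # The HE recursive DNS server
        RDNSS 2001:db8::2
        {
        };
};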

Overall IPv6 is still not simple as it is hard to obtain native IPv6 support, but if you can get it onto your network the client side is working very well indeed.

Stéphane Graber

Quite a few people have been asking for a status update of LXC in Ubuntu as of Ubuntu 12.04 LTS. This post is meant as an overview of the work we did over the past 6 months and pointers to more detailed blog posts for some of the new features.

What’s LXC?

LXC is a userspace tool controlling the kernel namespaces and cgroup features to create system or application containers.

To give you an idea:

  • Feels like somewhere between a chroot and a VM
  • Can run a full distro using the “host” kernel
  • Processes running in a container are visible from the outside
  • Doesn’t require any specific hardware, works on all supported architectures

A libvirt driver for LXC exists (libvirt-lxc), however it doesn’t use the “lxc” userspace tool even though it uses the same kernel features.

Making LXC easier

One of the main areas of focus for 12.04 LTS was to make LXC dead easy to use; to achieve this, we've been working on a few different fronts, fixing known bugs and improving LXC's default configuration.

Creating a basic container and starting it on Ubuntu 12.04 LTS is now down to:

sudo apt-get install lxc
sudo lxc-create -t ubuntu -n my-container
sudo lxc-start -n my-container

This will default to using the same version and architecture as your machine; additional options are obviously available ("--help" will list them). Login/Password are ubuntu/ubuntu.

Another thing we worked on to make LXC easier to work with is reducing the number of hacks required to turn a regular system into a container down to zero.
Starting with 12.04, we don’t do any modification to a standard Ubuntu system to get it running in a container.
It’s now even possible to take a raw VM image and have it boot in a container!

The ubuntu-cloud template also lets you get one of our EC2/cloud images and have it start as a container instead of a cloud instance:

sudo apt-get install lxc cloud-utils
sudo lxc-create -t ubuntu-cloud -n my-cloud-container
sudo lxc-start -n my-cloud-container

And finally, if you want to test the new cool stuff, you can also use juju with LXC:

[ ! -f ~/.ssh/id_rsa.pub ] && ssh-keygen -t rsa
sudo apt-get install juju apt-cacher-ng zookeeper lxc libvirt-bin --no-install-recommends
sudo adduser $USER libvirtd
juju bootstrap
sed -i "s/ec2/local/" ~/.juju/environments.yaml
echo " data-dir: /tmp/juju" >> ~/.juju/environments.yaml
juju bootstrap
juju deploy mysql
juju deploy wordpress
juju add-relation wordpress mysql
juju expose wordpress

# To tail the logs
juju debug-log

# To get the IPs and status
juju status

Making LXC safer

Another main focus for LXC in Ubuntu 12.04 was to make it safe. John Johansen did an amazing job of extending apparmor to let us implement per-container apparmor profiles and prevent most known dangerous behaviours from happening in a container.

NOTE: Until we have user namespaces implemented in the kernel and used by LXC, we will NOT say that LXC is root safe; however, the default apparmor profile as shipped in Ubuntu 12.04 LTS blocks any harmful action that we are aware of.

This mostly means that write access to /proc and /sys is heavily restricted, mounting filesystems is also restricted, only allowing known-safe filesystems to be mounted by default. Capabilities are also restricted in the default LXC profile to prevent a container from loading kernel modules or controlling apparmor.

More details on this are available here:

Other cool new stuff

Emulated architecture containers

It’s now possible to use qemu-user-static with LXC to run containers of non-native architectures, for example:

sudo apt-get install lxc qemu-user-static
sudo lxc-create -n my-armhf-container -t ubuntu -- -a armhf
sudo lxc-start -n my-armhf-container

Ephemeral containers

Quite a bit of work also went into lxc-start-ephemeral, the tool letting you start a copy of an existing container using an overlay filesystem, discarding any change you make on shutdown:

sudo apt-get install lxc
sudo lxc-create -n my-container -t ubuntu
sudo lxc-start-ephemeral -o my-container

Container nesting

You can now start a container inside a container!
For that to work, you first need to create a new apparmor profile as the default one doesn't allow this for security reasons.
I already did that for you, so the few commands below will download it and install it in /etc/apparmor.d/lxc/lxc-with-nesting. This profile (or something close to it) will ship in Ubuntu 12.10 as an example of an alternate apparmor profile for containers.

sudo apt-get install lxc
sudo lxc-create -t ubuntu -n my-host-container
sudo wget https://www.stgraber.org/download/lxc-with-nesting -O /etc/apparmor.d/lxc/lxc-with-nesting
sudo /etc/init.d/apparmor reload
sudo sed -i "s/#lxc.aa_profile = unconfined/lxc.aa_profile = lxc-container-with-nesting/" /var/lib/lxc/my-host-container/config
sudo lxc-start -n my-host-container
(in my-host-container) sudo apt-get install lxc
(in my-host-container) sudo stop lxc
(in my-host-container) sudo sed -i "s/10.0.3/10.0.4/g" /etc/default/lxc
(in my-host-container) sudo start lxc
(in my-host-container) sudo lxc-create -n my-sub-container -t ubuntu
(in my-host-container) sudo lxc-start -n my-sub-container

Documentation

Outside of the existing manpages and the blog posts I mentioned throughout this post, Serge Hallyn did a very good job of creating a whole section dedicated to LXC in the Ubuntu Server Guide.
You can read it here: https://help.ubuntu.com/12.04/serverguide/lxc.html

Next steps

Next week we have the Ubuntu Developer Summit in Oakland, CA. There we’ll be working on the plans for LXC in Ubuntu 12.10. We currently have two sessions scheduled:

If you want to make sure the changes you want will be in Ubuntu 12.10, please join these two sessions. It's possible to participate remotely in the Ubuntu Developer Summit, through IRC and audio streaming.

My personal hope for LXC in Ubuntu 12.10 is to have a clean liblxc library that can be used to create bindings and be used in languages like Python. Working towards that goal should make it easier to do automated testing of LXC and clean up our current tools.

I hope this post made you want to try LXC or for existing users, made you discover some of the new features that appeared in Ubuntu 12.04. We’re actively working on improving LXC both upstream and in Ubuntu, so do not hesitate to report bugs (preferably with “ubuntu-bug lxc”).

Read more
Stéphane Graber

One thing that we’ve been working on for LXC in 12.04 is getting rid of any remaining LXC specific hack in our templates. This means that you can now run a perfectly clean Ubuntu system in a container without any change.

To better illustrate that, here’s a guide on how to boot a standard Ubuntu VM in a container.

First, you’ll need an Ubuntu VM image in raw disk format. The next few steps also assume a default partitioning where the first primary partition is the root device. Make sure you have the lxc package installed and up to date and lxcbr0 enabled (the default with recent LXC).

Then run "kpartx -a vm.img". This will create loop devices in /dev/mapper for your VM partitions; in the following configuration I'm assuming /dev/mapper/loop0p1 is the root partition.
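
Adding -v makes kpartx print the mappings as it creates them, which is a handy way to double-check which device holds your root partition (the output below is illustrative):

sudo kpartx -av vm.img
# add map loop0p1 (252:0): 0 16775168 linear /dev/loop0 2048
ls /dev/mapper/
# control  loop0p1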

Now write a new LXC configuration file (myvm.conf in my case) containing:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.utsname = myvminlxc

lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /dev/mapper/loop0p1
lxc.arch = amd64
lxc.cap.drop = sys_module mac_admin

lxc.cgroup.devices.deny = a
# Allow any mknod (but not using the node)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
#lxc.cgroup.devices.allow = c 4:0 rwm
#lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm
#fuse
lxc.cgroup.devices.allow = c 10:229 rwm
#tun
lxc.cgroup.devices.allow = c 10:200 rwm
#full
lxc.cgroup.devices.allow = c 1:7 rwm
#hpet
lxc.cgroup.devices.allow = c 10:228 rwm
#kvm
lxc.cgroup.devices.allow = c 10:232 rwm

Some of these values, in particular lxc.network.link, lxc.rootfs and lxc.arch, may need updating if you're not using the same architecture, partition scheme or bridges as I am.

Then finally, run: lxc-start -n myvminlxc -f myvm.conf

And watch your VM boot in an LXC container.

I did this test with a desktop VM using Network Manager, so it didn't mind LXC's random MAC address; server VMs might get stuck for a minute at boot time because of it though.
In that case, either clean /etc/udev/rules.d/70-persistent-net.rules or set "lxc.network.hwaddr" to the same MAC address as your VM.
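
For example, adding a line like this to myvm.conf pins the MAC address (the value below is made up; reuse your VM's address):

lxc.network.hwaddr = 00:16:3e:aa:bb:cc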

Once done, run kpartx -d vm.img to remove the loop devices.

Read more
Stéphane Graber

DNS in Ubuntu 12.04

Anyone who’s been using 12.04 over the past month or so may have noticed some pretty significant changes in the way we do DNS resolving in Ubuntu.

This is the result of the implementation of: foundations-p-dns-resolving

Here is a description of the two big changes that happened:

Switch to resolvconf for /etc/resolv.conf management

resolvconf is a set of scripts and hooks managing DNS resolution. The most notable difference for the user is that any change made manually to /etc/resolv.conf will be lost, as it gets overwritten the next time something triggers resolvconf. Instead, resolvconf uses DHCP client hooks, a Network Manager plugin and /etc/network/interfaces to generate the list of nameservers and domains to put in /etc/resolv.conf.

For more details, I’d highly encourage you to read resolvconf’s manpage but here are a few answers to common questions:

  • I use static IP configuration, where should I put my DNS configuration?
    The DNS configuration for a static interface should go as "dns-nameservers", "dns-search" and "dns-domain" entries added to the interface in /etc/network/interfaces (see the example after this list)
  • How can I override resolvconf’s configuration or append some entries to it?
    Resolvconf has a /etc/resolvconf/resolv.conf.d/ directory that can contain "base", "head", "original" and "tail" files, all in resolv.conf format.
    • base: Used when no other data can be found
    • head: Used for the header of resolv.conf, can be used to ensure a DNS server is always the first one in the list
    • original: Just a backup of your resolv.conf at the time of resolvconf installation
    • tail: Any entry in tail is appended at the end of the resulting resolv.conf. In some cases, upgrading from a previous Ubuntu release will make tail a symlink to original (when we think you manually modified resolv.conf in the past)
  • I really don’t want resolvconf, how can I disable it?
    I certainly wouldn't recommend disabling resolvconf, but you can do it by making /etc/resolv.conf a regular file instead of a symlink.
    Please note that you may then get an inconsistent /etc/resolv.conf when multiple pieces of software fight to change it.
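
To make the first point concrete, here's what a static interface with its DNS configuration might look like in /etc/network/interfaces (addresses and domain are made up):

auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
    dns-search example.com

And a sketch of appending an entry through tail, then asking resolvconf to regenerate /etc/resolv.conf:

echo "nameserver 192.168.1.53" | sudo tee -a /etc/resolvconf/resolv.conf.d/tail
sudo resolvconf -u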

This change affects all Ubuntu installs except for Ubuntu core.

Using dnsmasq as local resolver by default on desktop installations

That's the second big change of this release. On a desktop install, your DNS server is going to be "127.0.0.1", which points to a NetworkManager-managed dnsmasq server.

This was done to better support split DNS for VPN users and to better handle DNS failures and fallbacks. This dnsmasq server isn't a caching server, for security reasons: it avoids risks related to local cache poisoning and users eavesdropping on each other's DNS queries on a multi-user system.

The big advantage is that if you connect to a VPN, instead of having all your DNS traffic be routed through the VPN like in the past, you’ll instead only send DNS queries related to the subnet and domains announced by that VPN. This is especially interesting for high latency VPN links where everything would be slowed down in the past.

As for dealing with DNS failures, dnsmasq often sends the DNS queries to more than one DNS server (if you received multiple when establishing your connection) and will detect bogus or dead ones and simply ignore them until they start returning sensible information again. Compare this with the libc's way of doing DNS resolving, where the state of the DNS servers can't be saved (as it's just a library), so every single application has to go through the same routine: trying the first DNS server, waiting for it to time out, then using the next one.

Now for the most common questions:

  • How to know what DNS servers I’m using (since I can’t just “cat /etc/resolv.conf”)?
    “nm-tool” can be used to get information about your existing connections in Network Manager. It’s roughly the same data you’d get in the GUI “connection information”.
    Alternatively, you can also read dnsmasq’s configuration from /run/nm-dns-dnsmasq.conf
  • I really don't want a local resolver, how can I turn it off?
    To turn off dnsmasq in Network Manager, you need to edit /etc/NetworkManager/NetworkManager.conf and comment out the "dns=dnsmasq" line (put a # in front of it), then do a "sudo restart network-manager", as sketched just below.
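
That boils down to something like this (a sketch, assuming the stock NetworkManager.conf):

sudo sed -i 's/^dns=dnsmasq/#dns=dnsmasq/' /etc/NetworkManager/NetworkManager.conf
sudo restart network-manager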

Bugs and feedback

Although these changes went in more than a month ago and we've been looking pretty closely at bug reports, there may be some issues we haven't found yet.

Issues related to resolvconf should be reported with:
ubuntu-bug resolvconf

Issues related to the dnsmasq configuration should be reported with:
ubuntu-bug network-manager

And finally, actual dnsmasq bugs and crashes should be reported with:
ubuntu-bug dnsmasq

In all cases, please try to include the following information:

  • How was your system installed (desktop, alternate, netinstall, …)?
  • Whether it’s a clean install or an upgrade?
  • Tarball of /etc/resolvconf and /run/resolvconf
  • Content of /run/nm-dns-dnsmasq.conf
  • Your /var/log/syslog
  • Your /etc/network/interfaces
  • And obviously a detailed description of your problem

Read more
Stéphane Graber

Every year, I try to set a few hours aside to work on one of my upstream projects, pastebinit.

This is one of those projects that mostly "just works", with quite a lot of users and quite a few of them sending merge proposals and fixes in bug reports.

I’m planning on uploading pastebinit 1.3 right before Feature Freeze, either on Wednesday or early Thursday, delaying the release as much as possible to get a few last translations in.

If you speak any language other than English, please go to:
https://translations.launchpad.net/pastebinit

Any help getting it as well translated as possible would be appreciated. Ubuntu users will have to live with it for the next 5 years, so it's kind of important :)

Now the changes, they’re pretty minimal but still will make some people happy I’m sure:

  • Finally merged pbget/pbput/pbputs from Dustin Kirkland. These 3 tools let you securely push and retrieve files using a pastebin, using a mix of base64, tar and gpg (plus some wget and parsing) to retrieve the data.
    These are nice scripts to use with pastebinit, though please don't send huge files to the pastebins, they really aren't meant for that ;)
  • Removed stikked.com from the supported pastebins as it’s apparently dead.
  • Now the new pastebins:
  • paste.debian.net should now work fine with the '-f' (format) option, thanks to their work on making their form pastebinit-friendly (there's a short usage sketch after this list).
  • pastebinit should now load pastebin definition files properly from multiple locations.
    Starting with /usr/share/pastebin.d, then going through /etc/pastebin.d, /usr/local/etc/pastebin.d, ~/.pastebin.d and finally <wherever pastebinit is>/.pastebin.d
  • A few other minor improvements and fixes merged by Rolf Leggewie over the last year or so, thanks again for taking care of these!
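
For the curious, here's roughly what the format option and the new pb* tools look like in use (the pastebin choice and file names are just examples):

# paste a script with syntax highlighting on paste.debian.net
pastebinit -b paste.debian.net -f python -i ~/test.py
# push a file (pbput prints the resulting URL), retrieve it elsewhere
pbput ~/mydata.tar.gz
pbget <url-printed-by-pbput>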

Testing of the current trunk before release would also be greatly appreciated; you can get the code with: bzr branch lp:pastebinit

Bug reports are welcome at: https://launchpad.net/pastebinit/+filebug

Read more
Stéphane Graber

It took a while to get some apt resolver bugs fixed, a few packages marked for multi-arch and some changes in the Ubuntu LXC template, but since yesterday, you can now run (using up to date Precise):

  • sudo apt-get install lxc qemu-user-static
  • sudo lxc-create -n armhf01 -t ubuntu -- -a armhf -r precise
  • sudo lxc-start -n armhf01
  • Then log in with "root" as both the login and the password

And enjoy an armhf system running on your good old x86 machine.

Now, obviously it’s pretty far from what you’d get on real ARM hardware.
It’s using qemu’s user space CPU emulation (qemu-user-static), so won’t be particularly fast, will likely use a lot of CPU and may give results pretty different from what you’d expect on real hardware.
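
Once logged in, a couple of quick checks should confirm the illusion (the exact uname output is an assumption; qemu fakes the machine field):

dpkg --print-architecture
# armhf
uname -m
# armv7l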

Also, because of limitations in qemu-user-static, a few packages from the “host” architecture are installed in the container. These are mostly anything that requires the use of ptrace (upstart) or the use of netlink (mountall, iproute and isc-dhcp-client).
This is the bare minimum I needed to install to get the rest of the container working with armhf binaries. I obviously didn't test everything and I'm sure quite a few other packages will fail in such an environment.

This feature should be used as an improvement on top of a regular armhf chroot using qemu-user-static and not as a replacement for actual ARM hardware (obviously), but it’s cool to have around and nice to show what LXC can do.

I confirmed it works for armhf and armel; powerpc should also work, though it failed to debootstrap when I tried it earlier today.

Enjoy!

Read more
Stéphane Graber

One of my focus areas for this cycle is getting Ubuntu's support for complex networking working in a predictable way. The idea was to review exactly what happens at boot time, build a list of scenarios commonly used on servers in corporate environments, and make sure these always work.

Bonding

Bonding basically means aggregating multiple physical links into one virtual link for high availability and load balancing. There are different ways of setting up such a link, though the industry standard is 802.3ad (LACP, the Link Aggregation Control Protocol). In that mode your server negotiates with your switch to establish an aggregate link, then sends monitoring packets to detect failures. LACP also does load balancing (MAC, IP and protocol based, depending on hardware support).
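
For reference, a minimal 802.3ad bond in /etc/network/interfaces might look like this (a sketch; interface names are assumptions, adjust to your hardware):

auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet dhcp
    bond-mode 802.3ad
    bond-miimon 100
    bond-slaves none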

One problem we've had since at least Ubuntu 10.04 LTS is that Ubuntu's boot sequence is event based, including bringing up network interfaces. The good old "ifup -a" is only done at the end of the boot sequence to try and fix anything that wasn't brought up through events.

Unfortunately, that meant that if your server took a long time to detect the hardware, your bond would be initialised before the network cards had been detected, giving you a bond0 without a MAC address, making DHCP queries fail in pretty weird ways and making bridging or tagging fail with "Operation not permitted".
As all of that depends on hardware detection timing, it was racy, giving you random results at boot time.

Thankfully, that should now all be fixed in 12.04: the new ifenslave-2.6 I uploaded a few weeks ago initialises the bond whenever the first slave appears. If no slave has appeared by the time we get to the catch-all "ifup -a", it'll simply wait for up to an additional minute for a slave to appear before giving up and continuing the boot sequence.
To avoid another common race condition, where a bridge is brought up with a bond as one of its members before the bond is ready, ifenslave will now detect that a bond is part of a bridge and add it only once it's ready.

Tagging

Another pretty common thing on corporate networks is the use of VLANs (802.1q), letting you create up to 4096 virtual networks on one link.
In the past, Ubuntu would rely on the catch-all "ifup -a" to create any required VLAN interface; once again, that's a problem when an interface that depends on that VLAN interface is initialised before the VLAN interface is created.

To fix that, Ubuntu 12.04's vlan package now ships with a udev rule that triggers the creation of the VLAN interface whenever its parent interface is created.
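
Defining a VLAN interface in /etc/network/interfaces is as simple as (VLAN id 42 is just an example):

auto bond0.42
iface bond0.42 inet manual
    vlan-raw-device bond0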

Bridging

Bridging on Linux can be seen as creating a virtual switch on your system (including STP support).

Bridges have been working pretty well for a while on Ubuntu as we’ve been shipping a udev rule similar to the one for vlans for a few releases already. Members are simply added to the bridge as they appear on the system. The changes to ifenslave and the vlan package make sure that even bond interfaces with VLANs get properly added to bridges.
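
Building on the VLAN example above, a bridge on top of that VLAN interface might look like this (the bridge name is an example):

auto br-dmz
iface br-dmz inet manual
    bridge_ports bond0.42
    bridge_stp off
    bridge_fd 0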

Complex network configuration example

My current test setup for networking on Ubuntu 12.04 is actually something I’ve been using on my network for years.

As you may know, I'm also working on LXC (Linux Containers), so my servers usually run somewhere between 15 and 80 containers. Each of these containers has a virtual ethernet interface that's bridged.
I have one bridge per network zone, each network zone being a separate VLAN. These VLANs are created on top of a bond of two gigabit links.

At boot time, the following happens (roughly):

  1. One of the two network interfaces appears
  2. The bond is initialised and the first interface is enslaved
  3. This triggers the creation of all the VLAN interfaces
  4. Creating the VLAN interfaces triggers the creation of all the bridges
  5. All the VLAN interfaces are added to their respective bridge
  6. The other network interface appears and gets added to the bond

My /etc/network/interfaces can be found here:
http://www.stgraber.org/download/complex-interfaces

This contains the very strict minimum needed for LACP to work. One thing worth noting is that the two physical interfaces are listed before bond0; this is to ensure that even if the events don't work and we have to rely on the fallback "ifup -a", the interfaces will be initialised in the right order, avoiding the 60-second delay.

Please note that this example will only reliably work with Ubuntu Precise (to become 12.04 LTS). It’s still a correct configuration for previous releases but race conditions may give you a random result.

I'll be trying to push these changes to Ubuntu 11.10 as they are pretty easy to backport there; however, it'd be much harder and very likely dangerous to backport them to even older releases.
For those, the only recommendation I can give is to add some "pre-up sleep 5" or similar to your bridges and VLAN interfaces to make sure whatever interface they depend on exists and is ready by the time the "ifup -a" call is reached.
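
Something along these lines (the 5 second value is arbitrary; tune it to your hardware):

auto eth0.42
iface eth0.42 inet manual
    pre-up sleep 5
    vlan-raw-device eth0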

IPv6

Another interesting topic for 12.04 is IPv6. As a release that'll be supported for 5 years on both servers and desktops, it's important to get IPv6 right.

Ubuntu 12.04 LTS will be the first Ubuntu release shipping with IPv6 privacy extensions turned on by default. Ubuntu 11.10 already brought most of what's needed for IPv6 support on the desktop and in the installer, supporting SLAAC (stateless autoconfiguration), stateless DHCPv6 and stateful DHCPv6.

Once we get a new ifupdown in Ubuntu Precise, we'll have full support for IPv6 also for people who aren't using Network Manager (mostly servers), which should at that point give us support for any IPv6 setup you may find.

The userspace has been working pretty well with IPv6 for years. I recently made my whole network dual-stack and now have all my servers and services defaulting to IPv6, for a total of 40% of my network traffic (roughly 1.5TB a month of IPv6 traffic). The only userspace-related problem I noticed is the lack of IPv6 support in Nagios' nrpe plugin, meaning I can't start converting servers to single-stack IPv6 as I'd lose monitoring…

I also wrote a Python script using pypcap that gives the percentage of IPv6 and IPv4 traffic going through a given interface; the script can be found here: http://www.stgraber.org/download/v6stats.py (start with: python v6stats.py eth0)

What now?

At this point, I think Ubuntu Precise is pretty much ready as far as networking is concerned. The only remaining change is the new ifupdown and the small installer change that comes with it for DHCPv6 support.

If you have a spare machine or VM, now is the time to grab a recent build of Ubuntu Precise and make sure whatever network configuration you use in production works reliably for you, and that any hack you needed in the past can now be dropped.
If your configuration doesn't work at all or fails randomly, please file a bug so we can have a look at your configuration and fix any bugs (or documentation).

Read more