Canonical Voices

Posts tagged with 'containers'

Stéphane Graber

LXD logo

USB devices in containers

It can be pretty useful to pass USB devices to a container. Be that some measurement equipment in a lab or maybe more commonly, an Android phone or some IoT device that you need to interact with.

Similar to what I wrote recently about GPUs, LXD supports passing USB devices into containers. Again, similarly to the GPU case, what’s actually passed into the container is a Unix character device, in this case, a /dev/bus/usb/ device node.

This restricts USB passthrough to those devices and software which use libusb to interact with them. For devices which use a kernel driver, the module should be installed and loaded on the host, and the resulting character or block device should then be passed to the container directly.

Note that for this to work, you’ll need LXD 2.5 or higher.
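If you’re not sure which version you’re running, you can check with:

lxd --version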

Example (Android debugging)

As an example which quite a lot of people should be able to relate to, let’s run a LXD container with the Android debugging tools installed, accessing a USB-connected phone.

This would for example allow you to have your app’s build system and CI run inside a container and interact with one or multiple devices connected over USB.

First, plug your phone over USB, make sure it’s unlocked and you have USB debugging enabled:

stgraber@dakara:~$ lsusb
Bus 002 Device 003: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 002: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 021: ID 17ef:6047 Lenovo 
Bus 001 Device 031: ID 046d:082d Logitech, Inc. HD Pro Webcam C920
Bus 001 Device 004: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 005: ID 046d:0a01 Logitech, Inc. USB Headset
Bus 001 Device 033: ID 0fce:51da Sony Ericsson Mobile Communications AB 
Bus 001 Device 003: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 002: ID 072f:90cc Advanced Card Systems, Ltd ACR38 SmartCard Reader
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Spot your phone in that list; in my case, that’d be the “Sony Ericsson Mobile” entry.

Now let’s create our container:

stgraber@dakara:~$ lxc launch ubuntu:16.04 c1
Creating c1
Starting c1

And install the Android debugging client:

stgraber@dakara:~$ lxc exec c1 -- apt install android-tools-adb
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following NEW packages will be installed:
 android-tools-adb
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 68.2 kB of archives.
After this operation, 198 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/universe amd64 android-tools-adb amd64 5.1.1r36+git20160322-0ubuntu3 [68.2 kB]
Fetched 68.2 kB in 0s (0 B/s) 
Selecting previously unselected package android-tools-adb.
(Reading database ... 25469 files and directories currently installed.)
Preparing to unpack .../android-tools-adb_5.1.1r36+git20160322-0ubuntu3_amd64.deb ...
Unpacking android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...

We can now attempt to list Android devices with:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached

Since we’ve not passed any USB device yet, the empty output is expected.

Now, let’s pass the specific device listed in “lsusb” above:

stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce productid=51da
Device sony added to c1

And try to list devices again:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

To get a shell, you can then use:

stgraber@dakara:~$ lxc exec c1 -- adb shell
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
E5823:/ $

LXD USB devices support hotplug by default, so unplugging the device and plugging it back in on the host will have it removed from and re-added to the container.

The “productid” property isn’t required, you can set only the “vendorid” so that any device from that vendor will be automatically attached to the container. This can be very convenient when interacting with a number of similar devices or devices which change productid depending on what mode they’re in.

stgraber@dakara:~$ lxc config device remove c1 sony
Device sony removed from c1
stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce
Device sony added to c1
stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

The optional “required” property turns off the hotplug behavior, requiring the device be present for the container to be allowed to start.
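For example, to require the phone to be present at container startup (which also turns off the hotplug behavior), you could instead have added the device with something like:

lxc config device add c1 sony usb vendorid=0fce productid=51da required=true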

More details on USB device properties can be found in the LXD documentation.

Conclusion

We are surrounded by a variety of odd USB devices, a good number of which come with possibly dodgy software, requiring a specific version of a specific Linux distribution to work. It’s sometimes hard to accommodate those requirements while keeping a clean and safe environment.

LXD USB device passthrough helps a lot in such cases, so long as the USB device uses a libusb based workflow and doesn’t require a specific kernel driver.

If you want to add a device which does use a kernel driver, locate the /dev node it creates, check if it’s a character or block device and pass that to LXD as a unix-char or unix-block type device.
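For example, for a hypothetical USB serial adapter that the host exposes as /dev/ttyUSB0, a sketch of that would be:

lxc config device add c1 serial unix-char path=/dev/ttyUSB0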

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

LXD logo

GPU inside a container

LXD supports GPU passthrough, but this is implemented in a very different way than what you would expect from a virtual machine. With containers, rather than passing a raw PCI device and having the container deal with it (which it can’t), we instead have the host set up with all the needed drivers and only pass the resulting device nodes to the container.

This post focuses on NVidia and the CUDA toolkit specifically, but LXD’s passthrough feature should work with all other GPUs too. NVidia is just what I happen to have around.

The test system used below is a virtual machine with two NVidia GT 730 cards attached to it. Those are very cheap, low-performance GPUs that have the advantage of coming as low-profile PCI cards which fit fine in one of my servers and don’t require extra power.
For production CUDA workloads, you’ll want something much better than this.

Note that for this to work, you’ll need LXD 2.5 or higher.

Host setup

Install the CUDA tools and drivers on the host:

wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo apt update
sudo apt install cuda

Then reboot the system to make sure everything is properly set up. After that, you should be able to confirm that your NVidia GPU is properly working with:

ubuntu@canonical-lxd:~$ nvidia-smi 
Tue Mar 21 21:28:34 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   26C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+

And can check that the CUDA tools work properly with:

ubuntu@canonical-lxd:~$ /usr/local/cuda-8.0/extras/demo_suite/bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GT 730
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3059.4

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3267.4

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			30805.1

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Container setup

First let’s just create a regular Ubuntu 16.04 container:

ubuntu@canonical-lxd:~$ lxc launch ubuntu:16.04 c1
Creating c1
Starting c1

Then install the CUDA demo tools in there:

lxc exec c1 -- wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
lxc exec c1 -- dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
lxc exec c1 -- apt update
lxc exec c1 -- apt install cuda-demo-suite-8-0 --no-install-recommends

At which point, you can run:

ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

Which is expected as LXD hasn’t been told to pass any GPU yet.

LXD GPU passthrough

LXD allows for pretty specific GPU passthrough; the details can be found in the LXD documentation.
First, let’s start with the most generic option and just allow access to all GPUs:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:47:54 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   27C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu
Device gpu removed from c1

Now just pass whichever is the first GPU:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu id=0
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:50:37 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu
Device gpu removed from c1

You can also specify the GPU by vendorid and productid:

ubuntu@canonical-lxd:~$ lspci -nnn | grep NVIDIA
02:06.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 730] [10de:1287] (rev a1)
02:07.0 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)
02:08.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 730] [10de:1287] (rev a1)
02:09.0 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)
ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu vendorid=10de productid=1287
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:52:40 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   27C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu
Device gpu removed from c1

This adds them both, as they are exactly the same model in my setup.

But for such cases, you can also select using the card’s PCI ID with:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu pci=0000:02:08.0
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:56:52 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   27C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu 
Device gpu removed from c1

And lastly, let’s confirm that we get the same result as on the host when running a CUDA workload:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- /usr/local/cuda-8.0/extras/demo_suite/bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GT 730
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3065.4

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3305.8

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			30825.7

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Conclusion

LXD makes it very easy to share one or multiple GPUs with your containers.
You can either dedicate specific GPUs to specific containers or just share them.

There is none of the overhead involved with the usual PCI-based passthrough, and only a single instance of the driver is running, with the containers acting just like normal host user processes would.

This does however require that your containers run a version of the CUDA tools which supports whatever version of the NVidia drivers is installed on the host.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

LXD logo

The LXD demo server

The LXD demo server is the service behind https://linuxcontainers.org/lxd/try-it.
We use it to showcase LXD by leading visitors through an interactive tour of LXD’s features.

Rather than use some javascript simulation of LXD and its client tool, we give our visitors a real root shell using a LXD container with nesting enabled. This environment uses all of LXD’s resource limits as well as a very strict firewall to prevent abuse and offer everyone a great experience.

This is done using lxd-demo-server which can be found at: https://github.com/lxc/lxd-demo-server
The lxd-demo-server is a daemon that offers a public REST API for use from a web browser.
It supports:

  • Creating containers from an existing container or from a LXD image
  • Choosing what command to execute in the containers on connection
  • Choosing specific profiles to apply to the containers
  • An API to record user feedback
  • An API to fetch usage statistics for reporting
  • A number of resource restrictions:
    • CPU
    • Disk quota (if using btrfs or zfs as the LXD storage backend)
    • Processes
    • Memory
    • Number of sessions per IP
    • Time limit for the session
    • Total number of concurrent sessions
  • Requiring the user to read and agree to terms of service
  • Recording all sessions in a sqlite3 database
  • A maintenance mode

All of it is configured through a simple yaml configuration file.

Setting up your own

The LXD demo server is now available as a snap package and interacts with the snap version of LXD. To install it on your own system, all you need to do is:

Make sure you don’t have the deb version of LXD installed

ubuntu@djanet:~$ sudo apt remove --purge lxd lxd-client
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following packages will be REMOVED:
 lxd* lxd-client*
0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
After this operation, 25.3 MB disk space will be freed.
Do you want to continue? [Y/n] 
(Reading database ... 59776 files and directories currently installed.)
Removing lxd (2.0.9-0ubuntu1~16.04.2) ...
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
Purging configuration files for lxd (2.0.9-0ubuntu1~16.04.2) ...
Removing lxd-client (2.0.9-0ubuntu1~16.04.2) ...
Processing triggers for man-db (2.7.5-1) ...

Install the LXD snap

ubuntu@djanet:~$ sudo snap install lxd
lxd 2.8 from 'canonical' installed

Then configure LXD

ubuntu@djanet:~$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=43]: 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Would you like stale cached images to be updated automatically (yes/no) [default=yes]? 
Would you like to create a new network bridge (yes/no) [default=yes]? 
What should the new bridge be called [default=lxdbr0]? 
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
LXD has been successfully configured.

And finally install lxd-demo-server itself

ubuntu@djanet:~$ sudo snap install lxd-demo-server
lxd-demo-server git from 'stgraber' installed
ubuntu@djanet:~$ sudo snap connect lxd-demo-server:lxd lxd:lxd

At that point, you can hit http://127.0.0.1:8080 and will be greeted by the demo server’s web interface.

To change the configuration, use:

ubuntu@djanet:~$ sudo lxd-demo-server.configure

And that’s it, you have your own instance of the demo server.

Security

As mentioned at the beginning, the demo server comes with a number of options to prevent users from using all the available resources themselves and bringing the whole thing down.

Those should be tweaked for your particular needs, and you should also adjust the total number of concurrent sessions so that you don’t end up over-committing on resources.

On the network side of things, the demo server itself doesn’t do any kind of firewalling or similar network restrictions. If you plan on offering sessions to anyone online, you should make sure that the network which LXD is using is severely restricted and that the host this is running on is also placed in a very restricted part of your network.
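As a rough sketch only (not a complete firewall), assuming the default “lxdbr0” bridge, you could for example block forwarding from the containers to the RFC1918 ranges of your internal network with:

sudo iptables -I FORWARD -i lxdbr0 -d 10.0.0.0/8 -j REJECT
sudo iptables -I FORWARD -i lxdbr0 -d 172.16.0.0/12 -j REJECT
sudo iptables -I FORWARD -i lxdbr0 -d 192.168.0.0/16 -j REJECT

The actual rules will obviously depend on your network layout.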

Containers handed to strangers should never be using “security.privileged” as that’d be a straight route to getting root privileges on the host. You should also stay away from bind-mounting any part of the host’s filesystem into those containers.

I would also very strongly recommend setting up very frequent security updates on your host and kernel live patching, or at least automatic reboot when a new kernel is installed. This should prevent a new kernel security issue from being immediately exploited in your environment.

Conclusion

The LXD demo server was initially written as a quick hack to expose a LXD instance to the Internet so we could let people try LXD online and also offer the upstream team a reliable environment we could have people attempt to reproduce their bugs in.

It’s since grown a bit with new features contributed by users and with improvements we’ve made to the original experience on our website.

We’ve now served over 36000 sessions to over 26000 unique visitors. This has been a great tool for people to try and experience LXD and I hope it will be similarly useful to other projects.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

This is the twelfth and last blog post in this series about LXD 2.0.

LXD logo

Introduction

This is finally it! The last blog post in this series of 12 that started almost a year ago.

If you followed the series from the beginning, you should have been using LXD for quite a bit of time now and be pretty familiar with its day to day operation and capabilities.

But what if something goes wrong? What can you do to track down the problem yourself? And if you can’t, what information should you record so that upstream can track down the problem?

And what if you want to fix issues yourself or help improve LXD by implementing the features you need? How do you build, test and contribute to the LXD code base?

Debugging LXD & filing bug reports

LXD log files

/var/log/lxd/lxd.log

This is the main LXD log file. To avoid filling up your disk very quickly, only log messages marked as INFO, WARNING or ERROR are recorded there by default. You can change that behavior by passing “--debug” to the LXD daemon.

/var/log/lxd/CONTAINER/lxc.conf

Whenever you start a container, this file is updated with the configuration that’s passed to LXC.
This shows exactly how the container will be configured, including all its devices, bind-mounts, …

/var/log/lxd/CONTAINER/forkexec.log

This file will contain errors coming from LXC when failing to execute a command.
It’s extremely rare for anything to end up in there as LXD usually handles errors well before that.

/var/log/lxd/CONTAINER/forkstart.log

This file will contain errors coming from LXC when starting the container.
It’s extremely rare for anything to end up in there as LXD usually handles errors well before that.

CRIU logs (for live migration)

If you are using CRIU for container live migration or live snapshotting there are additional log files recorded every time a CRIU dump is generated or a dump is restored.

Those logs can also be found in /var/log/lxd/CONTAINER/ and are timestamped so that you can find whichever matches your most recent attempt. They will contain a detailed record of everything that’s dumped and restored by CRIU and are far better for understanding a failure than the typical migration/snapshot error message.

LXD debug messages

As mentioned above, you can switch the daemon to doing debug logging with the --debug option.
An alternative to that is to connect to the daemon’s event interface which will show you all log entries, regardless of the configured log level (even works remotely).

An example for “lxc init ubuntu:16.04 xen” would be:
lxd.log:

INFO[02-24|18:14:09] Starting container action=start created=2017-02-24T23:11:45+0000 ephemeral=false name=xen stateful=false used=1970-01-01T00:00:00+0000
INFO[02-24|18:14:10] Started container action=start created=2017-02-24T23:11:45+0000 ephemeral=false name=xen stateful=false used=1970-01-01T00:00:00+0000

lxc monitor --type=logging:

metadata:
  context: {}
  level: dbug
  message: 'New events listener: 9b725741-ffe7-4bfc-8d3e-fe620fc6e00a'
timestamp: 2017-02-24T18:14:01.025989062-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.341283344-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StorageCoreInit
timestamp: 2017-02-24T18:14:09.341536477-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/containers/xen
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.347709394-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: PUT
    url: /1.0/containers/xen/state
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.357046302-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'New task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:09.358387853-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Started task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:09.358578599-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/operations/2e2cf904-c4c4-4693-881f-57897d602ad3/wait
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.366213106-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolInit
timestamp: 2017-02-24T18:14:09.369636451-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolCheck
timestamp: 2017-02-24T18:14:09.369771164-05:00
type: logging


metadata:
  context:
    container: xen
    driver: storage/zfs
  level: dbug
  message: ContainerMount
timestamp: 2017-02-24T18:14:09.424696767-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
    name: xen
  level: dbug
  message: ContainerUmount
timestamp: 2017-02-24T18:14:09.432723719-05:00
type: logging


metadata:
  context:
    container: xen
    driver: storage/zfs
  level: dbug
  message: ContainerMount
timestamp: 2017-02-24T18:14:09.721067917-05:00
type: logging


metadata:
  context:
    action: start
    created: 2017-02-24 23:11:45 +0000 UTC
    ephemeral: "false"
    name: xen
    stateful: "false"
    used: 1970-01-01 00:00:00 +0000 UTC
  level: info
  message: Starting container
timestamp: 2017-02-24T18:14:09.749808518-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.792551375-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StorageCoreInit
timestamp: 2017-02-24T18:14:09.792961032-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /internal/containers/23/onstart
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.800803501-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolInit
timestamp: 2017-02-24T18:14:09.803190248-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolCheck
timestamp: 2017-02-24T18:14:09.803251188-05:00
type: logging


metadata:
  context:
    container: xen
    driver: storage/zfs
  level: dbug
  message: ContainerMount
timestamp: 2017-02-24T18:14:09.803306055-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Scheduler: container xen started: re-balancing'
timestamp: 2017-02-24T18:14:09.965080432-05:00
type: logging


metadata:
  context:
    action: start
    created: 2017-02-24 23:11:45 +0000 UTC
    ephemeral: "false"
    name: xen
    stateful: "false"
    used: 1970-01-01 00:00:00 +0000 UTC
  level: info
  message: Started container
timestamp: 2017-02-24T18:14:10.162965059-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Success for task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:10.163072893-05:00
type: logging

The format from “lxc monitor” is a bit different from what you’d get in a log file, where each entry is condensed into a single line, but more importantly, you see all those “level: dbug” entries.

Where to report bugs

LXD bugs

The best place to report LXD bugs is upstream at https://github.com/lxc/lxd/issues.
Make sure to fill in everything in the bug reporting template as that information saves us a lot of back and forth to reproduce your environment.

Ubuntu bugs

If you find a problem with the Ubuntu package itself, such as a failure to install, upgrade or remove, or if you run into issues with the LXD init scripts, the best place to report such bugs is on Launchpad.

On an Ubuntu system, you can do so with: ubuntu-bug lxd
This will automatically include a number of log files and package information for us to look at.

CRIU bugs

Bugs that are related to CRIU which you can spot by the usually pretty visible CRIU error output should be reported on Launchpad with: ubuntu-bug criu

Do note that the use of CRIU through LXD is considered to be a beta feature and unless you are willing to pay for support through a support contract with Canonical, it may take a while before we get to look at your bug report.

Contributing to LXD

LXD is written in Go and hosted on Github.
We welcome external contributions of any size. There is no CLA or similar legal agreement to sign to contribute to LXD, just the usual Developer Certificate of Origin (Signed-off-by: line).

We have a number of potential features listed on our issue tracker that can make good starting points for new contributors. It’s usually best to first file an issue before starting to work on code, just so everyone knows that you’re doing that work and so we can give some early feedback.

Building LXD from source

Upstream maintains up to date instructions here: https://github.com/lxc/lxd#building-from-source

You’ll want to fork the upstream repository on Github and then push your changes to your branch. We recommend rebasing on upstream LXD daily as we do tend to merge changes pretty regularly.
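A typical workflow for that, assuming you’ve already forked the repository on Github (“<your-user>” and “my-feature” below are placeholders), could look like:

git clone https://github.com/<your-user>/lxd
cd lxd
git remote add upstream https://github.com/lxc/lxd
git checkout -b my-feature

# later on, rebase on the latest upstream changes
git fetch upstream
git rebase upstream/master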

Running the testsuite

LXD maintains two sets of tests. Unit tests and integration tests. You can run all of them with:

sudo -E make check

To run the unit tests only, use:

sudo -E go test ./...

To run the integration tests, use:

cd test
sudo -E ./main.sh

The latter supports quite a number of environment variables to test various storage backends, disable network tests, use a ramdisk or just tweak log output. Some of those are:

  • LXD_BACKEND: One of “btrfs”, “dir”, “lvm” or “zfs” (defaults to “dir”)
    Lets you run the whole testsuite with any of the LXD storage drivers.
  • LXD_CONCURRENT: “true” or “false” (defaults to “false”)
    This enables a few extra concurrency tests.
  • LXD_DEBUG: “true” or “false” (defaults to “false”)
    This will log all shell commands and run all LXD commands in debug mode.
  • LXD_INSPECT: “true” or “false” (defaults to “false”)
    This will cause the testsuite to hang on failure so you can inspect the environment.
  • LXD_LOGS: A directory to dump all LXD log files into (defaults to “”)
    The “logs” directory of all spawned LXD daemons will be copied over to this path.
  • LXD_OFFLINE: “true” or “false” (defaults to “false”)
    Disables any test which relies on outside network connectivity.
  • LXD_TEST_IMAGE: path to a LXD image in the unified format (defaults to “”)
    Lets you use a custom test image rather than the default minimal busybox image.
  • LXD_TMPFS: “true” or “false” (defaults to “false”)
    Runs the whole testsuite within a “tmpfs” mount, this can use quite a bit of memory but makes the testsuite significantly faster.
  • LXD_VERBOSE: “true” or “false” (defaults to “false”)
    A less extreme version of LXD_DEBUG. Shell commands are still logged but --debug isn’t passed to the LXC commands and the LXD daemon only runs with --verbose.
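
For example, a run using the btrfs backend inside a tmpfs (assuming you’re already in the “test” directory) would look something like:

LXD_BACKEND=btrfs LXD_TMPFS=true sudo -E ./main.sh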

The testsuite will alert you to any missing dependency before it actually runs. A test run on a reasonably fast machine can be done in under 10 minutes.

Sending your branch

Before sending a pull request, you’ll want to confirm that:

  • Your branch has been rebased on the upstream branch
  • All your commit messages include the “Signed-off-by: First Last <email>” line (see the example after this list)
  • You’ve removed any temporary debugging code you may have used
  • You’ve squashed related commits together to keep your branch easily reviewable
  • The unit and integration tests all pass
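
For the sign-off in particular, git can add the line for you at commit time, based on your git identity:

git commit -s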

Once that’s all done, open a pull request on Github. Our Jenkins will validate that the commits are all signed-off, a test build on MacOS and Windows will automatically be performed and if things look good, we’ll trigger a full Jenkins test run that will test your branch on all storage backends, 32bit and 64bit and all the Go versions we care about.

This typically takes less than an hour to happen, assuming one of us is around to trigger Jenkins.

Once all the tests are done and we’re happy with the code itself, your branch will be merged into master and your code will be in the next LXD feature release. If the changes are suitable for the LXD stable-2.0 branch, we’ll backport them for you.

Conclusion

I hope this series of blog posts has been helpful in understanding what LXD is and what it can do!

This series’ scope was limited to the LTS version of LXD (2.0.x) but we also do monthly feature releases for those who want the latest features. You can find a few other blog posts covering such features listed in the original LXD 2.0 series post.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

LXD logo

LXD on other operating systems?

While LXD and especially its API have been designed in a mostly OS-agnostic way, the only OS supported for the daemon right now is Linux (and a rather recent Linux at that).

However since all the communications between the client and daemon happen over a REST API, there is no reason why our default client wouldn’t work on other operating systems.

And it does. We in fact gate changes to the client on having it build and pass unit tests on Linux, Windows and MacOS.

This means that you can run one or more LXD daemons on Linux systems on your network and then interact with those remotely from any Linux, Windows or MacOS machine.

Setting up your LXD daemon

We’ll be connecting to the LXD daemon over the network, so you’ll need to make sure it’s listening and has a password configured so that new clients can add themselves to the trust store.

This can be done with:

lxc config set core.https_address "[::]:8443"
lxc config set core.trust_password "my-password"

In my case, that remote LXD can be reached with “djanet.maas.mtl.stgraber.net”, you’ll want to replace that with your LXD server’s FQDN or IP in the commands used below.

Windows client

Pre-built native binaries

Our Windows CI service builds a tarball for every commit. You can grab the latest one here:
https://ci.appveyor.com/project/lxc/lxd/branch/master/artifacts

Then unpack the archive and open a command prompt in the directory where you unpacked the lxc.exe binary.

Build from source

Alternatively, you can build it from source, by first installing Go using the latest MSI based installer from https://golang.org/dl/ and then Git from https://git-scm.com/downloads.

And then in a command prompt, run:

git config --global http.https://gopkg.in.followRedirects true
go get -v -x github.com/lxc/lxd/lxc

Use Ubuntu on Windows (“bash”)

For this, you need to use Windows 10 and have the Windows subsystem for Linux enabled.
With that done, start an Ubuntu shell by launching “bash”. And you’re done.
The LXD client is installed by default in the Ubuntu 16.04 image.

Interact with the remote server

Regardless of which method you picked, you’ve now got access to the “lxc” command and can add your remote server.

Using the native build does have a few restrictions to do with Windows terminal escape codes, breaking things like the arrow keys and password hiding. The Ubuntu on Windows way uses the Linux version of the LXD client and so doesn’t suffer from those limitations.
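Assuming the server configured at the beginning of this post (replace the FQDN with your own), adding the remote and launching a test container could look like:

lxc remote add djanet djanet.maas.mtl.stgraber.net
lxc launch ubuntu:16.04 djanet:c1
lxc exec djanet:c1 -- bash

The “lxc remote add” step will ask you to confirm the server’s certificate fingerprint and to enter the trust password set earlier.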

MacOS client

Even though we do have MacOS CI through Travis, they don’t host artifacts for us, so we don’t have prebuilt binaries for people to download.

Build from source

Similarly to the Windows instructions, you can build the LXD client from source, by first installing Go using the latest DMG based installer from https://golang.org/dl/ and then Git from https://git-scm.com/downloads.

Once that’s done, open a new Terminal window and run:

export GOPATH=~/go
go get -v -x github.com/lxc/lxd/lxc
sudo ln -s ~/go/bin/lxc /usr/local/bin/

At which point you can use the “lxc” command.

Conclusion

The LXD client can be built on all the main operating systems and on just about every architecture, which makes it very easy for anyone to interact with existing LXD servers, whether they’re themselves using a Linux machine or not.

Thanks to our pretty strict backward compatibility rules, the version of the client doesn’t really matter. Older clients can talk to newer servers and newer clients can talk to older servers. Obviously in both cases some features will not be available, but normal container workflow operations will work fine.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

LXD logo

What’s Ubuntu Core?

Ubuntu Core is a version of Ubuntu that’s fully transactional and entirely based on snap packages.

Most of the system is read-only. All installed applications come from snap packages and all updates are done using transactions, meaning that should anything go wrong at any point during a package or system update, the system will be able to revert to the previous state and report the failure.

The current release of Ubuntu Core is called series 16 and was released in November 2016.

Note that on Ubuntu Core systems, only snap packages using confinement can be installed (no “classic” snaps) and that a good number of snaps will not fully work in this environment or will require some manual intervention (creating user and groups, …). Ubuntu Core gets improved on a weekly basis as new releases of snapd and the “core” snap are put out.

Requirements

As far as LXD is concerned, Ubuntu Core is just another Linux distribution. That being said, snapd does require unprivileged FUSE mounts and AppArmor namespacing and stacking, so you will need the following:

  • An up to date Ubuntu system using the official Ubuntu kernel
  • An up to date version of LXD

Creating an Ubuntu Core container

The Ubuntu Core images are currently published on the community image server.
You can launch a new container with:

stgraber@dakara:~$ lxc launch images:ubuntu-core/16 ubuntu-core
Creating ubuntu-core
Starting ubuntu-core

The container will take a few seconds to start, first executing a first-stage loader that determines what read-only image to use and sets up the writable layers. You don’t want to interrupt the container in that stage, and “lxc exec” will likely just fail as pretty much nothing is available at that point.

Seconds later, “lxc list” will show the container IP address, indicating that it’s booted into Ubuntu Core:

stgraber@dakara:~$ lxc list
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+
|     NAME    |  STATE  |          IPV4        |                      IPV6                    |    TYPE    | SNAPSHOTS |
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+
| ubuntu-core | RUNNING | 10.90.151.104 (eth0) | 2001:470:b368:b2b5:216:3eff:fee1:296f (eth0) | PERSISTENT | 0         |
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+

You can then interact with that container the same way you would any other:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap list
Name       Version     Rev  Developer  Notes
core       16.04.1     394  canonical  -
pc         16.04-0.8   9    canonical  -
pc-kernel  4.4.0-45-4  37   canonical  -
root@ubuntu-core:~#

Updating the container

If you’ve been tracking the development of Ubuntu Core, you’ll know that those versions above are pretty old. That’s because the disk images that are used as the source for the Ubuntu Core LXD images are only refreshed every few months. Ubuntu Core systems will automatically update once a day and then automatically reboot to boot onto the new version (and revert if this fails).

If you want to immediately force an update, you can do it with:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap refresh
pc-kernel (stable) 4.4.0-53-1 from 'canonical' upgraded
core (stable) 16.04.1 from 'canonical' upgraded
root@ubuntu-core:~# snap version
snap 2.17
snapd 2.17
series 16
root@ubuntu-core:~#

And then reboot the system and check the snapd version again:

root@ubuntu-core:~# reboot
root@ubuntu-core:~# 

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap version
snap 2.21
snapd 2.21
series 16
root@ubuntu-core:~#

You can get a history of all snapd interactions with:

stgraber@dakara:~$ lxc exec ubuntu-core snap changes
ID  Status  Spawn                 Ready                 Summary
1   Done    2017-01-31T05:14:38Z  2017-01-31T05:14:44Z  Initialize system state
2   Done    2017-01-31T05:14:40Z  2017-01-31T05:14:45Z  Initialize device
3   Done    2017-01-31T05:21:30Z  2017-01-31T05:22:45Z  Refresh all snaps in the system

Installing some snaps

Let’s start with the simplest snaps of all, the good old Hello World:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install hello-world
hello-world 6.3 from 'canonical' installed
root@ubuntu-core:~# hello-world
Hello World!

And then move on to something a bit more useful:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install nextcloud
nextcloud 11.0.1snap2 from 'nextcloud' installed

Then hit your container over HTTP and you’ll get to your newly deployed Nextcloud instance.
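For example, using the container IP shown by “lxc list” earlier (substitute your own), a quick check from the host could be:

curl -sI http://10.90.151.104/ | head -n1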

If you feel like testing the latest LXD straight from git, you can do so with:

stgraber@dakara:~$ lxc config set ubuntu-core security.nesting true
stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install lxd --edge
lxd (edge) git-c6006fb from 'canonical' installed
root@ubuntu-core:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]: 

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]? 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Would you like stale cached images to be updated automatically (yes/no) [default=yes]? 
Would you like to create a new network bridge (yes/no) [default=yes]? 
What should the new bridge be called [default=lxdbr0]? 
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
LXD has been successfully configured.

And because container inception never gets old, let’s run Ubuntu Core 16 inside Ubuntu Core 16:

root@ubuntu-core:~# lxc launch images:ubuntu-core/16 nested-core
Creating nested-core
Starting nested-core 
root@ubuntu-core:~# lxc list
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
|    NAME     |  STATE  |         IPV4        |                       IPV6                    |    TYPE    | SNAPSHOTS |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| nested-core | RUNNING | 10.71.135.21 (eth0) | fd42:2861:5aad:3842:216:3eff:feaf:e6bd (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+

Conclusion

If you ever wanted to try Ubuntu Core, this is a great way to do it. It’s also a great tool for snap authors to make sure their snap is fully self-contained and will work in all environments.

Ubuntu Core is a great fit for environments where you want to ensure that your system is always up to date and is entirely reproducible. This does come with a number of constraints that may or may not work for you.

And lastly, a word of warning. Those images are considered good enough for testing, but they aren’t officially supported at this point. We are working towards getting fully supported Ubuntu Core LXD images on the official Ubuntu image server in the near future.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

LXD logo

Introduction

So far all my blog posts about LXD have been assuming an Ubuntu host with LXD installed from packages, as a snap or from source.

But LXD is perfectly happy to run on any Linux distribution which has the LXC library available (version 2.0.0 or higher), a recent kernel (3.13 or higher) and some standard system utilities available (rsync, dnsmasq, netcat, various filesystem tools, …).

In fact, you can find packages in the following Linux distributions (let me know if I missed one):

We have also had several reports of LXD being used on Centos and Fedora, where users built it from source using the distribution’s liblxc (or in the case of Centos, from an external repository).

One distribution we’ve seen a lot of requests for is Debian. A native Debian package has been in the works for a while now and the list of missing dependencies has been shrinking quite a lot lately.

But there is an easy alternative that will get you a working LXD on Debian today!
Use the same LXD snap package as I mentioned in a previous post, but on Debian!

Requirements

  • A Debian “testing” (stretch) system
  • The stock Debian kernel without apparmor support
  • If you want to use ZFS with LXD, then the “contrib” repository must be enabled and the “zfsutils-linux” package installed on the system

Installing snapd and LXD

Getting the latest stable LXD onto an up to date Debian testing system is just a matter of running:

apt install snapd
snap install lxd

If you never used snapd before, you’ll have to either logout and log back in to update your PATH, or just update your existing one with:

. /etc/profile.d/apps-bin-path.sh

And now it’s time to configure LXD with:

root@debian:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

And finally, you can start using LXD:

root@debian:~# lxc launch images:debian/stretch debian
Creating debian
Starting debian

root@debian:~# lxc launch ubuntu:16.04 ubuntu
Creating ubuntu
Starting ubuntu

root@debian:~# lxc launch images:centos/7 centos
Creating centos
Starting centos

root@debian:~# lxc launch images:archlinux archlinux
Creating archlinux
Starting archlinux

root@debian:~# lxc launch images:gentoo gentoo
Creating gentoo
Starting gentoo

And enjoy your fresh collection of Linux distributions:

root@debian:~# lxc list
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
|   NAME    |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| archlinux | RUNNING | 10.250.240.103 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe40:7b1b (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| centos    | RUNNING | 10.250.240.109 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe87:64ff (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| debian    | RUNNING | 10.250.240.111 (eth0) | fd42:46d0:3c40:cca7:216:3eff:feb4:e984 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| gentoo    | RUNNING | 10.250.240.164 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe27:10ca (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| ubuntu    | RUNNING | 10.250.240.80 (eth0)  | fd42:46d0:3c40:cca7:216:3eff:fedc:f0a6 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+

Conclusion

The availability of snapd on other Linux distributions makes it a great way to get the latest LXD running on your distribution of choice.

There are still a number of problems with the LXD snap which may or may not be a blocker for your own use. The main ones at this point are:

  • All containers are shutdown and restarted on upgrades
  • No support for bash completion

If you want non-root users to have access to the LXD daemon, simply make sure that a “lxd” group exists on your system, add whoever you want to manage LXD to that group, and then restart the LXD daemon.
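A sketch of that, assuming your user is called “ubuntu” and that the snap’s daemon runs as the “snap.lxd.daemon” systemd unit (the unit name may differ on your system), would be:

sudo groupadd --system lxd
sudo usermod -aG lxd ubuntu
sudo systemctl restart snap.lxd.daemon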

Extra information

The snapd website can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

LXD logo

Introduction

For those who haven’t heard of Kubernetes before, it’s defined by the upstream project as:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

It is important to note the “applications” part in there. Kubernetes deploys a set of single application containers and connects them together. Those containers will typically run a single process and so are very different from the full system containers that LXD itself provides.

This blog post will be very similar to one I published last year on running OpenStack inside a LXD container. Similarly to the OpenStack deployment, we’ll be using conjure-up to setup a number of LXD containers and eventually run the Docker containers that are used by Kubernetes.

Requirements

This post assumes you’ve got a working LXD setup, providing containers with network access and that you have at least 10GB of space for the containers to use and at least 4GB of RAM.

Outside of configuring LXD itself, you will also need to bump some kernel limits with the following commands:

sudo sysctl fs.inotify.max_user_instances=1048576  
sudo sysctl fs.inotify.max_queued_events=1048576  
sudo sysctl fs.inotify.max_user_watches=1048576  
sudo sysctl vm.max_map_count=262144
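
These values don't persist across reboots. If you want them to, you can drop them in a sysctl.d snippet (the file name below is arbitrary), for example:

sudo tee /etc/sysctl.d/99-lxd-kubernetes.conf > /dev/null << EOF
fs.inotify.max_user_instances=1048576
fs.inotify.max_queued_events=1048576
fs.inotify.max_user_watches=1048576
vm.max_map_count=262144
EOF
sudo sysctl --system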

Setting up the container

Similarly to OpenStack, the conjure-up deployed version of Kubernetes expects a lot more privileges and resource access than LXD would typically provide. As a result, we have to create a privileged container, with nesting enabled and with AppArmor disabled.

This means that very few of LXD’s security features will still be in effect for this container. Depending on how you feel about this, you may choose to run this on a different machine.

Note that all of this still remains better than instructions that would have you install everything directly on your host machine, if only by making it very easy to remove it all in the end.

lxc init ubuntu:16.04 kubernetes -c security.privileged=true -c security.nesting=true -c linux.kernel_modules=ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
printf "lxc.cap.drop=\nlxc.aa_profile=unconfined\n" | lxc config set kubernetes raw.lxc -
lxc start kubernetes

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get Kubernetes going.

lxc exec kubernetes -- apt update
lxc exec kubernetes -- apt dist-upgrade -y
lxc exec kubernetes -- apt install squashfuse -y
lxc exec kubernetes -- ln -s /bin/true /usr/local/bin/udevadm
lxc exec kubernetes -- snap install conjure-up --classic

And the last setup step is to configure LXD networking inside the container.
Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

lxc exec kubernetes -- lxd init

And that’s it for the container configuration itself, now we can deploy Kubernetes!

Deploying Kubernetes with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy Kubernetes.
This is a nice, user-friendly tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec kubernetes -- sudo -u ubuntu -i conjure-up
  • Select “Kubernetes Core”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy Kubernetes. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Interact with your new Kubernetes

We can ask Juju to deploy a new Kubernetes workload, in this case 5 instances of “microbot”:

root@kubernetes:~# sudo -u ubuntu -i
ubuntu@kubernetes:~$ juju run-action kubernetes-worker/0 microbot replicas=5
Action queued with id: 1d1e2997-5238-4b86-873c-ad79660db43f

You can then grab the service address from the Juju action output:

ubuntu@kubernetes:~$ juju show-action-output 1d1e2997-5238-4b86-873c-ad79660db43f
results:
 address: microbot.10.97.218.226.xip.io
status: completed
timing:
 completed: 2017-01-13 10:26:14 +0000 UTC
 enqueued: 2017-01-13 10:26:11 +0000 UTC
 started: 2017-01-13 10:26:12 +0000 UTC

Now actually using the Kubernetes tools, we can check the state of our new pods:

ubuntu@kubernetes:~$ kubectl.conjure-up-kubernetes-core-be8 get pods
NAME READY STATUS RESTARTS AGE
default-http-backend-w9nr3 1/1 Running 0 21m
microbot-1855935831-cn4bs 0/1 ContainerCreating 0 18s
microbot-1855935831-dh70k 0/1 ContainerCreating 0 18s
microbot-1855935831-fqwjp 0/1 ContainerCreating 0 18s
microbot-1855935831-ksmmp 0/1 ContainerCreating 0 18s
microbot-1855935831-mfvst 1/1 Running 0 18s
nginx-ingress-controller-bj5gh 1/1 Running 0 21m

After a little while, you’ll see everything’s running:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME READY STATUS RESTARTS AGE
default-http-backend-w9nr3 1/1 Running 0 23m
microbot-1855935831-cn4bs 1/1 Running 0 2m
microbot-1855935831-dh70k 1/1 Running 0 2m
microbot-1855935831-fqwjp 1/1 Running 0 2m
microbot-1855935831-ksmmp 1/1 Running 0 2m
microbot-1855935831-mfvst 1/1 Running 0 2m
nginx-ingress-controller-bj5gh 1/1 Running 0 23m

At which point, you can hit the service URL with:

ubuntu@kubernetes:~$ curl -s http://microbot.10.97.218.226.xip.io | grep hostname
 <p class="centered">Container hostname: microbot-1855935831-fqwjp</p>

Running this multiple times will show you different container hostnames as you get load balanced between one of those 5 new instances.
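
For example, a quick shell loop makes the load balancing visible (you should see a mix of the 5 hostnames):

ubuntu@kubernetes:~$ for i in $(seq 10); do curl -s http://microbot.10.97.218.226.xip.io | grep hostname; done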

Conclusion

As with OpenStack, conjure-up combined with LXD makes it very easy to deploy rather complex software in a self-contained way.

This isn’t the kind of setup you’d want to run in a production environment, but it’s great for developers, demos and whoever wants to try those technologies without investing into hardware.

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

LXD logo

Introduction

The LXD and AppArmor teams have been working to support loading AppArmor policies inside LXD containers for a while. This support, which finally landed in the latest Ubuntu kernels, now makes it possible to install snap packages inside LXD containers.

Snap packages are a new way of distributing software, directly from the upstream and with a number of security features wrapped around them so that these packages can’t interfere with each other or cause harm to your system.

Requirements

There are a lot of moving pieces to get all of this working. The initial enablement was done on Ubuntu 16.10 with Ubuntu 16.10 containers, but all the needed bits are now progressively being pushed as updates to Ubuntu 16.04 LTS.

The easiest way to get this to work is with:

  • Ubuntu 16.10 host
  • Stock Ubuntu kernel (4.8.0)
  • Stock LXD (2.4.1 or higher)
  • Ubuntu 16.10 container with “squashfuse” manually installed in it

Installing the nextcloud snap

First, let’s get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it.

lxc launch ubuntu:16.10 nextcloud
lxc exec nextcloud -- apt update
lxc exec nextcloud -- apt dist-upgrade -y
lxc exec nextcloud -- apt install squashfuse -y

And then, let’s install the “nextcloud” snap with:

lxc exec nextcloud -- snap install nextcloud

Finally, grab the container’s IP and access “http://<IP>” with your web browser:

stgraber@castiana:~$ lxc list nextcloud
+-----------+---------+----------------------+----------------------------------------------+
|    NAME   |  STATE  |         IPV4         |                     IPV6                     |
+-----------+---------+----------------------+----------------------------------------------+
| nextcloud | RUNNING | 10.148.195.47 (eth0) | fd42:ee2:5d34:25c6:216:3eff:fe86:4a49 (eth0) |
+-----------+---------+----------------------+----------------------------------------------+
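
Before switching to the browser, a quick way to confirm that the web server inside the container responds (using the IP from the listing above) is:

stgraber@castiana:~$ curl -sI http://10.148.195.47 | head -n1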

Nextcloud Login screen

Installing the LXD snap in a LXD container

First, let’s get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it.
This time with support for nested containers.

lxc launch ubuntu:16.10 lxd -c security.nesting=true
lxc exec lxd -- apt update
lxc exec lxd -- apt dist-upgrade -y
lxc exec lxd -- apt install squashfuse -y

Now let’s remove the LXD that came pre-installed in the container so we can replace it with the snap.

lxc exec lxd -- apt remove --purge lxd lxd-client -y

Because we already have a stable LXD on the host, we’ll make things a bit more interesting by installing the latest build from git master rather than the latest stable release:

lxc exec lxd -- snap install lxd --edge

The rest is business as usual for a LXD user:

stgraber@castiana:~$ lxc exec lxd bash
root@lxd:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]?
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

root@lxd:~# lxd.lxc launch images:archlinux arch
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

Creating arch
Starting arch

root@lxd:~# lxd.lxc list
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                      IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| arch | RUNNING | 10.106.137.64 (eth0) | fd42:2fcd:964b:eba8:216:3eff:fe8f:49ab (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+

And that’s it, you now have the latest LXD build installed inside a LXD container and running an archlinux container for you. That LXD build will update very frequently as we publish new builds to the edge channel several times a day.

Conclusion

It’s great to have snaps now install properly inside LXD containers. Production users can now setup hundreds of different containers, network them the way they want, setup their storage and resource limits through LXD and then install snap packages inside them to get the latest upstream releases of the software they want to run.

That’s not to say that everything is perfect yet. This is all built on some really recent kernel work, using unprivileged FUSE filesystem mounts and unprivileged AppArmor profile stacking and namespacing. There are very likely still some issues that need to get resolved in order to get most snaps to work identically to when they’re installed directly on the host.

If you notice discrepancies between a snap running directly on the host and a snap running inside a LXD container, you’ll want to look at the “dmesg” output, looking for any DENIED entry in there which would indicate AppArmor rejecting some request from the snap.
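
For example, running the following on the host will show any such denials:

dmesg | grep DENIED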

This typically indicates either a bug in AppArmor itself or in the way the AppArmor profiles are generated by snapd. If you find one of those issues, you can report it in #snappy on irc.freenode.net or file a bug at https://launchpad.net/snappy/+filebug so it can be investigated.

Extra information

More information on snap packages can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

LXD logo

Introduction

When LXD 2.0 shipped with Ubuntu 16.04, LXD networking was pretty simple. You could either use that “lxdbr0” bridge that “lxd init” would have you configure, provide your own or just use an existing physical interface for your containers.

While this certainly worked, it was a bit confusing because most of that bridge configuration happened outside of LXD in the Ubuntu packaging. Those scripts could only support a single bridge and none of this was exposed over the API, making remote configuration a bit of a pain.

That was all until LXD 2.3 when LXD finally grew its own network management API and command line tools to match. This post is an attempt at an overview of those new capabilities.

Basic networking

Right out of the box, LXD 2.3 comes with no network defined at all. “lxd init” will offer to set one up for you and attach it to all new containers by default, but let’s do it by hand to see what’s going on under the hood.

To create a new network with a random IPv4 and IPv6 subnet and NAT enabled, just run:

stgraber@castiana:~$ lxc network create testbr0
Network testbr0 created

You can then look at its config with:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
 ipv4.address: 10.150.19.1/24
 ipv4.nat: "true"
 ipv6.address: fd42:474b:622d:259d::1/64
 ipv6.nat: "true"
managed: true
type: bridge
usedby: []

If you don’t want those auto-configured subnets, you can go with:

stgraber@castiana:~$ lxc network create testbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
Network testbr0 created

Which will result in:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
 ipv4.address: 10.0.3.1/24
 ipv4.nat: "true"
 ipv6.address: none
managed: true
type: bridge
usedby: []

Having a network created and running won’t do you much good if your containers aren’t using it.
To have your newly created network attached to all containers, you can simply do:

stgraber@castiana:~$ lxc network attach-profile testbr0 default eth0

To attach a network to a single existing container, you can do:

stgraber@castiana:~$ lxc network attach testbr0 my-container default eth0

Now, let’s say you have openvswitch installed on that machine and want to convert that bridge to an OVS bridge; just change the driver property:

stgraber@castiana:~$ lxc network set testbr0 bridge.driver openvswitch

If you want to do a bunch of changes all at once, “lxc network edit” will let you edit the network configuration interactively in your text editor.

Static leases and port security

One of the nice things about having LXD manage the DHCP server for you is that it makes managing DHCP leases much simpler. All you need is a container-specific nic device with the right property set.

root@yak:~# lxc init ubuntu:16.04 c1
Creating c1
root@yak:~# lxc network attach testbr0 c1 eth0
root@yak:~# lxc config device set c1 eth0 ipv4.address 10.0.3.123
root@yak:~# lxc start c1
root@yak:~# lxc list c1
+------+---------+-------------------+------+------------+-----------+
| NAME |  STATE  |        IPV4       | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-------------------+------+------------+-----------+
|  c1  | RUNNING | 10.0.3.123 (eth0) |      | PERSISTENT | 0         |
+------+---------+-------------------+------+------------+-----------+

And the same goes for IPv6, but with the “ipv6.address” property instead.
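
For example, assuming the bridge had been created with an IPv6 subnet (unlike the second “testbr0” example above, which disabled IPv6), it would look something like this (the address below is made up):

root@yak:~# lxc config device set c1 eth0 ipv6.address fd42:474b:622d:259d::123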

Similarly, if you want to prevent your container from ever changing its MAC address or forwarding traffic for any other MAC address (as would be the case with nesting), you can enable port security with:

root@yak:~# lxc config device set c1 eth0 security.mac_filtering true

DNS

LXD runs a DNS server on the bridge. On top of letting you set the DNS domain for the bridge (“dns.domain” network property), it also supports 3 different operating modes (“dns.mode”):

  • “managed” will have one DNS record per container, matching its name and known IP addresses. The container cannot alter this record through DHCP.
  • “dynamic” allows the containers to self-register in the DNS through DHCP. So whatever hostname the container sends during the DHCP negotiation ends up in DNS.
  • “none” is for a simple recursive DNS server without any kind of local DNS records.

The default mode is “managed” and is typically the safest and most convenient as it provides DNS records for containers but doesn’t let them spoof each other’s records by sending fake hostnames over DHCP.
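
For example, to give the bridge a custom DNS domain and explicitly select the default mode (the domain below is just a placeholder):

stgraber@castiana:~$ lxc network set testbr0 dns.domain lxd.example.net
stgraber@castiana:~$ lxc network set testbr0 dns.mode managed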

Using tunnels

On top of all that, LXD also supports connecting to other hosts using GRE or VXLAN tunnels.

A LXD network can have any number of tunnels attached to it, making it easy to create networks spanning multiple hosts. This is mostly useful for development, test and demo uses, with production environments usually preferring VLANs for that kind of segmentation.

So say you want a basic “testbr0” network running with IPv4 and IPv6 on host “edfu” and want to spawn containers using it on host “djanet”. The easiest way to do that is by using a multicast VXLAN tunnel. This type of tunnel only works when both hosts are on the same physical segment.

root@edfu:~# lxc network create testbr0 tunnel.lan.protocol=vxlan
Network testbr0 created
root@edfu:~# lxc network attach-profile testbr0 default eth0

This defines a “testbr0” bridge on host “edfu” and sets up a multicast VXLAN tunnel on it for other hosts to join. In this setup, “edfu” will be the one acting as a router for that network, providing DHCP, DNS and so on, while the other hosts just forward traffic over the tunnel.

root@djanet:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.lan.protocol=vxlan
Network testbr0 created
root@djanet:~# lxc network attach-profile testbr0 default eth0

Now you can start containers on either host and see them getting IP from the same address pool and communicate directly with each other through the tunnel.

As mentioned earlier, this uses multicast, which usually won’t do you much good when crossing routers. For those cases, you can use VXLAN in unicast mode or a good old GRE tunnel.

To join another host using GRE, first configure the main host with:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol gre
root@edfu:~# lxc network set testbr0 tunnel.nuturo.local 172.17.16.2
root@edfu:~# lxc network set testbr0 tunnel.nuturo.remote 172.17.16.9

And then the “client” host with:

root@nuturo:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.edfu.protocol=gre tunnel.edfu.local=172.17.16.9 tunnel.edfu.remote=172.17.16.2
Network testbr0 created
root@nuturo:~# lxc network attach-profile testbr0 default eth0

If you’d rather use vxlan, just do:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.id 10
root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol vxlan

And:

root@nuturo:~# lxc network set testbr0 tunnel.edfu.id 10
root@nuturo:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

The tunnel id is required here to avoid conflicting with the already configured multicast vxlan tunnel.
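
To verify that the unicast tunnel works, you can then launch a container on the client host and check that it gets an address from the subnet managed by “edfu” (the container name here is arbitrary):

root@nuturo:~# lxc launch ubuntu:16.04 c2
root@nuturo:~# lxc list c2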

And that’s how you make cross-host networking easily with recent LXD!

Conclusion

LXD now makes it very easy to define anything from a simple single-host network to a very complex cross-host network for thousands of containers. It also makes it very simple to define a new network just for a few containers or add a second device to a container, connecting it to a separate private network.

While this post goes through most of the different features we support, there are quite a few more knobs that can be used to fine tune the LXD network experience.
A full list can be found here: https://github.com/lxc/lxd/blob/master/doc/configuration.md

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

This is the eleventh blog post in this series about LXD 2.0.

LXD logo

Introduction

First of all, sorry for the delay. It took quite a long time before I finally managed to get all of this going. My first attempts were using devstack, which ran into a number of issues that had to be resolved. Yet even after all that, I still wasn’t able to get networking going properly.

I finally gave up on devstack and tried “conjure-up” to deploy a full Ubuntu OpenStack using Juju in a pretty user friendly way. And it finally worked!

So below is how to run a full OpenStack, using LXD containers instead of VMs and running all of this inside a LXD container (nesting!).

Requirements

This post assumes you’ve got a working LXD setup, providing containers with network access and that you have a pretty beefy CPU, around 50GB of space for the container to use and at least 16GB of RAM.

Remember, we’re running a full OpenStack here, this thing isn’t exactly light!

Setting up the container

OpenStack is made of a lot of different components, doing a lot of different things. Some require additional privileges, so to make our lives easier, we’ll use a privileged container.

We’ll configure that container to support nesting, pre-load all the required kernel modules and allow it access to /dev/mem (as is apparently needed).

Please note that this means that most of the security benefits of LXD containers are effectively disabled for that container. However the containers that will be spawned by OpenStack itself will be unprivileged and use all the normal LXD security features.

lxc init ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch, nbd"
printf "lxc.cap.drop=\nlxc.aa_profile=unconfined\n" | lxc config set openstack raw.lxc -
lxc config device add openstack mem unix-char path=/dev/mem
lxc start openstack

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get OpenStack going.

lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install squashfuse -y
lxc exec openstack -- ln -s /bin/true /usr/local/bin/udevadm
lxc exec openstack -- snap install conjure-up --classic

And the last setup step is to configure LXD networking inside the container.
Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

lxc exec openstack -- lxd init

And that’s it for the container configuration itself, now we can deploy OpenStack!

Deploying OpenStack with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy OpenStack.
This is a nice, user-friendly tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec openstack -- sudo -u ubuntu -i conjure-up
  • Select “OpenStack with NovaLXD”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy OpenStack. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Conjure-Up deploying OpenStack

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Access the dashboard and spawn a container

The dashboard runs inside a container, so you can’t just hit it from your web browser.
The easiest way around this is to set up a NAT rule with:

lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>

Where “<IP>” is the dashboard IP address conjure-up gave you at the end of the installation.

You can now grab the IP address of the “openstack” container (from “lxc info openstack”) and point your web browser to: http://<container ip>/horizon

This can take a few minutes to load the first time around. Once the login screen is loaded, enter the default login and password (admin/openstack) and you’ll be greeted by the OpenStack dashboard!

oslxd-dashboard

You can now head to the “Project” tab on the left and the “Instances” page. To start a new instance using nova-lxd, click on “Launch instance”, select what image you want, network, … and your instance will get spawned.

Once it’s running, you can assign it a floating IP which will let you reach your instance from within your “openstack” container.

Conclusion

OpenStack is a pretty complex piece of software; it’s also not something you really want to run at home or on a single server. But it’s certainly interesting to be able to do it anyway, keeping everything contained to a single container on your machine.

Conjure-Up is a great tool to deploy such complex software, using Juju behind the scenes to drive the deployment, using LXD containers for every individual service and finally for the instances themselves.

It’s also one of the very few cases where multiple levels of container nesting actually make sense!

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

LXD logo

What are snaps?

Snaps were introduced a little while back as a cross-distro package format allowing upstreams to easily generate and distribute packages of their application in a very consistent way, with support for transactional upgrade and rollback as well as confinement through AppArmor and Seccomp profiles.

It’s a packaging format that’s designed to be upstream friendly. Snaps effectively shift the packaging and maintenance burden from the Linux distribution to the upstream, making the upstream responsible for updating their packages and taking action when a security issue affects any of the code in their package.

The upside is that upstream is now in complete control of what’s in the package and can distribute a build of the software that matches their test environment, within minutes of the upstream release.

Why distribute LXD as a snap?

We’ve always cared about making LXD available to everyone. It’s available for a number of Linux distributions already, with a few more actively working on packaging it.

For Ubuntu, we have it in the archive itself, push frequent stable updates, maintain official backports in the archive and also maintain a number of PPAs to make our releases available to all Ubuntu users.

Doing all that is a lot of work and it makes tracking down bugs that much harder, as we have to care about a whole lot of different setups and combinations of package versions.

Over the next few months, we hope to move away from PPAs and some of our backports in favor of using our snap package. This will allow a much shorter turnaround time for new releases and give us more control on the runtime environment of LXD, making our lives easier when dealing with bugs.

How to get the LXD snap?

Those instructions have only been tested on fully up to date Ubuntu 16.04 LTS or Ubuntu 16.10 with snapd installed. Please use a system that doesn’t already have LXD containers as the LXD snap will not be able to take over existing containers.

LXD snap example

  1. Make sure you don’t have a packaged version of LXD installed on your system.
    sudo apt remove --purge lxd lxd-client
  2. Create the “lxd” group and add yourself to it.
    sudo groupadd --system lxd
    sudo usermod -G lxd -a <username>
  3. Install LXD itself
    sudo snap install lxd

This will get the current version of LXD from the “stable” channel.
If your user wasn’t already part of the “lxd” group, you may now need to run:

newgrp lxd

Once installed, you can set it up and spawn your first container with:

  1. Configure the LXD daemon
    sudo lxd init
  2. Launch your first container
    lxd.lxc launch ubuntu:16.04 xenial

Channels and updates

The Ubuntu Snap store offers 4 different release “channels” for snaps:

  • stable
  • candidate
  • beta
  • edge

For LXD, we currently use “stable”, “candidate” and “edge”.

  • “stable” contains the latest stable release of LXD.
  • “candidate” is a testing area for “stable”.
    We’ll push new releases there a couple of days before releasing to “stable”.
  • “edge” is the current state of our development tree.
    This channel is entirely automated with uploads triggered after the upstream CI confirms that the development tree looks good.

You can switch between channels by using the “snap refresh” command:

snap refresh lxd --edge

This will cause your system to install the current version of LXD from the “edge” channel.

Be careful when hopping channels though as LXD may break when moving back to an earlier version (going from edge to stable), especially when database schema changes occurred in between.

Snaps automatically update, either on schedule (typically once a day) or through push notifications from the store. On top of that, you can force an update by running “snap refresh lxd”.
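
Recent versions of snapd also let you check which channels exist and which one you’re currently tracking with:

snap info lxd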

Known limitations

The known issues at this point are all pretty major usability problems and will likely be showstoppers for a lot of people; you can find them in the bug tracker linked below.
We’re actively working with the Snappy team to get those issues addressed as soon as possible and will keep maintaining all our existing packages until such time as those are resolved.

Extra information

More information on snap packages can be found at: http://snapcraft.io
Bug reports for the LXD snap: https://github.com/lxc/lxd-pkg-ubuntu/issues

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

PS: I have not forgotten about the remaining two posts in the LXD 2.0 series; the next post has been on hold for a while due to some issues with OpenStack/devstack.

Read more
Stéphane Graber

This is the tenth blog post in this series about LXD 2.0.

LXD logo

Introduction

Juju is Canonical’s service modeling and deployment tool. It supports a very wide range of cloud providers to make it easy for you to deploy any service you want on any cloud you want.

On top of that, Juju 2.0 also includes support for LXD, both for local deployments, which are ideal for development, and as a way to co-locate services on a cloud instance or physical machine.

This post will focus on the local use case, going through the experience of a LXD user without any pre-existing Juju experience.

Requirements

This post assumes that you already have LXD 2.0 installed and configured (see previous posts) and that you’re running it on Ubuntu 16.04 LTS.

Setting up Juju

The first thing to do is to install Juju 2.0. On Ubuntu 16.04, it’s as simple as:

stgraber@dakara:~$ sudo apt install juju
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 juju-2.0
Suggested packages:
 juju-core
The following NEW packages will be installed:
 juju juju-2.0
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 39.7 MB of archives.
After this operation, 269 MB of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 juju-2.0 amd64 2.0~beta7-0ubuntu1.16.04.1 [39.6 MB]
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 juju all 2.0~beta7-0ubuntu1.16.04.1 [9,556 B]
Fetched 39.7 MB in 0s (53.4 MB/s)
Selecting previously unselected package juju-2.0.
(Reading database ... 255132 files and directories currently installed.)
Preparing to unpack .../juju-2.0_2.0~beta7-0ubuntu1.16.04.1_amd64.deb ...
Unpacking juju-2.0 (2.0~beta7-0ubuntu1.16.04.1) ...
Selecting previously unselected package juju.
Preparing to unpack .../juju_2.0~beta7-0ubuntu1.16.04.1_all.deb ...
Unpacking juju (2.0~beta7-0ubuntu1.16.04.1) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up juju-2.0 (2.0~beta7-0ubuntu1.16.04.1) ...
Setting up juju (2.0~beta7-0ubuntu1.16.04.1) ...

Once that’s done, we can bootstrap a new “controller” using LXD. This means that Juju will not modify anything on your host; it will instead install its management service inside a LXD container.

Here, we’ll be creating a controller called “test” with:

stgraber@dakara:~$ juju bootstrap localhost test
Creating Juju controller "local.test" on localhost/localhost
Bootstrapping model "admin"
Starting new instance for initial controller
Launching instance
 - juju-745d1be3-e93d-41a2-80d4-fbe8714230dd-machine-0
Installing Juju agent on bootstrap instance
Preparing for Juju GUI 2.1.2 release installation
Waiting for address
Attempting to connect to 10.178.150.72:22
Logging to /var/log/cloud-init-output.log on remote host
Running apt-get update
Running apt-get upgrade
Installing package: curl
Installing package: cpu-checker
Installing package: bridge-utils
Installing package: cloud-utils
Installing package: cloud-image-utils
Installing package: tmux
Fetching tools: curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz <[https://streams.canonical.com/juju/tools/agent/2.0-beta7/juju-2.0-beta7-xenial-amd64.tgz]>
Bootstrapping Juju machine agent
Starting Juju machine agent (jujud-machine-0)
Bootstrap agent installed
Waiting for API to become available: upgrade in progress (upgrade in progress)
Waiting for API to become available: upgrade in progress (upgrade in progress)
Waiting for API to become available: upgrade in progress (upgrade in progress)
Bootstrap complete, local.test now available.

This should take about a minute, at which point you’ll see a new LXD container running:

stgraber@dakara:~$ lxc list juju-
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
|                         NAME                        |  STATE  |          IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-745d1be3-e93d-41a2-80d4-fbe8714230dd-machine-0 | RUNNING | 10.178.150.72 (eth0) |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+

On the Juju side of things, you can confirm that it’s responding and that nothing is running yet:

stgraber@dakara:~$ juju status
[Services] 
NAME STATUS EXPOSED CHARM 

[Units] 
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE 

[Machines] 
ID STATE DNS INS-ID SERIES AZ

You can also access the Juju GUI in your web browser with:

stgraber@dakara:~$ juju gui
Opening the Juju GUI in your browser.
If it does not open, open this URL:
https://10.178.150.72:17070/gui/97fa390d-96ad-44df-8b59-e15fdcfc636b/

Juju web UI

Though I prefer the command line so that’s what I’ll be using next.

Deploying a minecraft server

So let’s start with something very trivial: deploying a service that uses a single Juju unit in a single container.

stgraber@dakara:~$ juju deploy cs:trusty/minecraft
Added charm "cs:trusty/minecraft-3" to the model.
Deploying charm "cs:trusty/minecraft-3" with the charm series "trusty".

This should return pretty much immediately. It doesn’t, however, mean the service is already up and running. Instead, you’ll want to look at “juju status”:

stgraber@dakara:~$ juju status
[Services] 
NAME STATUS EXPOSED CHARM 
minecraft maintenance false cs:trusty/minecraft-3 

[Units] 
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE 
minecraft/1 maintenance executing 2.0-beta7 1 10.178.150.74 (install) Installing java 

[Machines] 
ID STATE DNS INS-ID SERIES AZ 
1 started 10.178.150.74 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-1 trusty 

Here we can see it’s currently busy installing java in the LXD container it just created.

stgraber@dakara:~$ lxc list juju-
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
|                         NAME                        |  STATE  |          IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-745d1be3-e93d-41a2-80d4-fbe8714230dd-machine-0 | RUNNING | 10.178.150.72 (eth0) |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-1 | RUNNING | 10.178.150.74 (eth0) |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+

After a little while, the service will be done deploying as can be seen here:

stgraber@dakara:~$ juju status
[Services] 
NAME STATUS EXPOSED CHARM 
minecraft active false cs:trusty/minecraft-3 

[Units] 
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE 
minecraft/1 active idle 2.0-beta7 1 25565/tcp 10.178.150.74 Ready 

[Machines] 
ID STATE DNS INS-ID SERIES AZ 
1 started 10.178.150.74 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-1 trusty

At which point you can fire up your Minecraft client, point it at 10.178.150.74 on port 25565 and play on your brand new Minecraft server!
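
If you just want to confirm the port is reachable before launching the game, a quick check from the host (assuming netcat is installed) would be:

stgraber@dakara:~$ nc -zv 10.178.150.74 25565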

When you want to get rid of it, just run:

stgraber@dakara:~$ juju destroy-service minecraft

Wait a few seconds and everything will be gone.

Deploying a more complex web application

Juju’s main focus is on modeling complex services and deploying them in a scalable way.

To better show that, let’s deploy a Juju “bundle”. This bundle is a basic web service, made of a website, an API endpoint, a database, a static web server and a reverse proxy.

So that’s going to expand to 4 inter-connected LXD containers.

stgraber@dakara:~$ juju deploy cs:~charmers/bundle/web-infrastructure-in-a-box
added charm cs:~hp-discover/trusty/node-app-1
service api deployed (charm cs:~hp-discover/trusty/node-app-1 with the series "trusty" defined by the bundle)
annotations set for service api
added charm cs:trusty/mongodb-3
service mongodb deployed (charm cs:trusty/mongodb-3 with the series "trusty" defined by the bundle)
annotations set for service mongodb
added charm cs:~hp-discover/trusty/nginx-4
service nginx deployed (charm cs:~hp-discover/trusty/nginx-4 with the series "trusty" defined by the bundle)
annotations set for service nginx
added charm cs:~hp-discover/trusty/nginx-proxy-3
service nginx-proxy deployed (charm cs:~hp-discover/trusty/nginx-proxy-3 with the series "trusty" defined by the bundle)
annotations set for service nginx-proxy
added charm cs:~hp-discover/trusty/website-3
service website deployed (charm cs:~hp-discover/trusty/website-3 with the series "trusty" defined by the bundle)
annotations set for service website
related mongodb:database and api:mongodb
related website:nginx-engine and nginx:web-engine
related api:website and nginx-proxy:website
related nginx-proxy:website and website:website
added api/0 unit to new machine
added mongodb/0 unit to new machine
added nginx/0 unit to new machine
added nginx-proxy/0 unit to new machine
deployment of bundle "cs:~charmers/bundle/web-infrastructure-in-a-box-10" completed

A few seconds later, you’ll see all the LXD containers running:

stgraber@dakara:~$ lxc list juju-
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
|                         NAME                        |  STATE  |           IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-745d1be3-e93d-41a2-80d4-fbe8714230dd-machine-0 | RUNNING | 10.178.150.72 (eth0)  |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-2 | RUNNING | 10.178.150.98 (eth0)  |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-3 | RUNNING | 10.178.150.29 (eth0)  |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-4 | RUNNING | 10.178.150.202 (eth0) |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-5 | RUNNING | 10.178.150.214 (eth0) |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+

After a couple of minutes, all the services should be deployed and running:

stgraber@dakara:~$ juju status
[Services] 
NAME STATUS EXPOSED CHARM 
api unknown false cs:~hp-discover/trusty/node-app-1 
mongodb unknown false cs:trusty/mongodb-3 
nginx unknown false cs:~hp-discover/trusty/nginx-4 
nginx-proxy unknown false cs:~hp-discover/trusty/nginx-proxy-3 
website false cs:~hp-discover/trusty/website-3 

[Relations] 
SERVICE1 SERVICE2 RELATION TYPE 
api mongodb database regular 
api nginx-proxy website regular 
mongodb mongodb replica-set peer 
nginx website nginx-engine subordinate 
nginx-proxy website website regular 

[Units] 
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE 
api/0 unknown idle 2.0-beta7 2 8000/tcp 10.178.150.98 
mongodb/0 unknown idle 2.0-beta7 3 27017/tcp,27019/tcp,27021/tcp,28017/tcp 10.178.150.29 
nginx-proxy/0 unknown idle 2.0-beta7 5 80/tcp 10.178.150.214 
nginx/0 unknown idle 2.0-beta7 4 10.178.150.202 
 website/0 unknown idle 2.0-beta7 10.178.150.202 

[Machines] 
ID STATE DNS INS-ID SERIES AZ 
2 started 10.178.150.98 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-2 trusty 
3 started 10.178.150.29 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-3 trusty 
4 started 10.178.150.202 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-4 trusty 
5 started 10.178.150.214 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-5 trusty

At which point, you can hit the reverse proxy on port 80 with http://10.178.150.214 and you’ll hit the Juju academy web service.
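
From the command line, a quick check that the reverse proxy answers would be something like:

stgraber@dakara:~$ curl -sI http://10.178.150.214 | head -n1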

Juju Academy web service

Cleaning everything up

If you want to get rid of all the containers Juju created and don’t mind having to bootstrap again next time, the easiest way to destroy everything is with:

stgraber@dakara:~$ juju destroy-controller test --destroy-all-models
WARNING! This command will destroy the "local.test" controller.
This includes all machines, services, data and other resources.

Continue [y/N]? y
Destroying controller
Waiting for hosted model resources to be reclaimed
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 2 machines
Waiting on 1 model
Waiting on 1 model
All hosted models reclaimed, cleaning up controller machines

And we can confirm that it’s all gone:

stgraber@dakara:~$ lxc list juju-
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

Conclusion

Juju 2.0’s built-in LXD support makes for a very clean way to test a whole variety of services.

There are quite a few pre-made “bundles” for you to deploy in the Juju charm store and even more “charms” that you can use to piece together the architecture you want.

Juju with LXD is the perfect solution for easily developing anything from a small web service to a big scale out infrastructure, all on your own machine, without creating a mess on your system!

Extra information

The Juju website can be found at: http://www.ubuntu.com/cloud/juju
The Juju charm store is available at: https://jujucharms.com

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

This is the ninth blog post in this series about LXD 2.0.

LXD logo

Introduction

One of the very exciting features of LXD 2.0, albeit experimental, is the support for container checkpoint and restore.

Simply put, checkpoint/restore means that the running container state can be serialized down to disk and then restored, either on the same host as a stateful snapshot of the container or on another host which equates to live migration.

Requirements

To have access to container live migration and stateful snapshots, you need the following:

  • A very recent Linux kernel, 4.4 or higher.
  • CRIU 2.0, possibly with some cherry-picked commits depending on your exact kernel configuration.
  • Run LXD directly on the host. It’s not possible to use those features with container nesting.
  • For migration, the target machine must at least implement the instruction set of the source, the target kernel must at least offer the same syscalls as the source and any kernel filesystem which was mounted on the source must also be mountable on the target.

All the needed dependencies are provided by Ubuntu 16.04 LTS, in which case, all you need to do is install CRIU itself:

apt install criu
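
Optionally, CRIU can verify that your kernel exposes the features it needs (run as root; this is just a sanity check):

criu check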

Using the thing

Stateful snapshots

A normal container snapshot looks like:

stgraber@dakara:~$ lxc snapshot c1 first
stgraber@dakara:~$ lxc info c1 | grep first
 first (taken at 2016/04/25 19:35 UTC) (stateless)

A stateful snapshot instead looks like:

stgraber@dakara:~$ lxc snapshot c1 second --stateful
stgraber@dakara:~$ lxc info c1 | grep second
 second (taken at 2016/04/25 19:36 UTC) (stateful)

This means that all the container runtime state was serialized to disk and included as part of the snapshot. Restoring one such snapshot is done as you would a stateless one:

stgraber@dakara:~$ lxc restore c1 second
stgraber@dakara:~$

Stateful stop/start

Say you want to reboot your server for a kernel update or similar maintenance. Rather than have to wait for all the containers to start from scratch after reboot, you can do:

stgraber@dakara:~$ lxc stop c1 --stateful

The container state will be written to disk and then picked up the next time you start it.

You can even look at what the state looks like:

root@dakara:~# tree /var/lib/lxd/containers/c1/rootfs/state/
/var/lib/lxd/containers/c1/rootfs/state/
├── cgroup.img
├── core-101.img
├── core-102.img
├── core-107.img
├── core-108.img
├── core-109.img
├── core-113.img
├── core-114.img
├── core-122.img
├── core-125.img
├── core-126.img
├── core-127.img
├── core-183.img
├── core-1.img
├── core-245.img
├── core-246.img
├── core-50.img
├── core-52.img
├── core-95.img
├── core-96.img
├── core-97.img
├── core-98.img
├── dump.log
├── eventfd.img
├── eventpoll.img
├── fdinfo-10.img
├── fdinfo-11.img
├── fdinfo-12.img
├── fdinfo-13.img
├── fdinfo-14.img
├── fdinfo-2.img
├── fdinfo-3.img
├── fdinfo-4.img
├── fdinfo-5.img
├── fdinfo-6.img
├── fdinfo-7.img
├── fdinfo-8.img
├── fdinfo-9.img
├── fifo-data.img
├── fifo.img
├── filelocks.img
├── fs-101.img
├── fs-113.img
├── fs-122.img
├── fs-183.img
├── fs-1.img
├── fs-245.img
├── fs-246.img
├── fs-50.img
├── fs-52.img
├── fs-95.img
├── fs-96.img
├── fs-97.img
├── fs-98.img
├── ids-101.img
├── ids-113.img
├── ids-122.img
├── ids-183.img
├── ids-1.img
├── ids-245.img
├── ids-246.img
├── ids-50.img
├── ids-52.img
├── ids-95.img
├── ids-96.img
├── ids-97.img
├── ids-98.img
├── ifaddr-9.img
├── inetsk.img
├── inotify.img
├── inventory.img
├── ip6tables-9.img
├── ipcns-var-10.img
├── iptables-9.img
├── mm-101.img
├── mm-113.img
├── mm-122.img
├── mm-183.img
├── mm-1.img
├── mm-245.img
├── mm-246.img
├── mm-50.img
├── mm-52.img
├── mm-95.img
├── mm-96.img
├── mm-97.img
├── mm-98.img
├── mountpoints-12.img
├── netdev-9.img
├── netlinksk.img
├── netns-9.img
├── netns-ct-9.img
├── netns-exp-9.img
├── packetsk.img
├── pagemap-101.img
├── pagemap-113.img
├── pagemap-122.img
├── pagemap-183.img
├── pagemap-1.img
├── pagemap-245.img
├── pagemap-246.img
├── pagemap-50.img
├── pagemap-52.img
├── pagemap-95.img
├── pagemap-96.img
├── pagemap-97.img
├── pagemap-98.img
├── pages-10.img
├── pages-11.img
├── pages-12.img
├── pages-13.img
├── pages-1.img
├── pages-2.img
├── pages-3.img
├── pages-4.img
├── pages-5.img
├── pages-6.img
├── pages-7.img
├── pages-8.img
├── pages-9.img
├── pipes-data.img
├── pipes.img
├── pstree.img
├── reg-files.img
├── remap-fpath.img
├── route6-9.img
├── route-9.img
├── rule-9.img
├── seccomp.img
├── sigacts-101.img
├── sigacts-113.img
├── sigacts-122.img
├── sigacts-183.img
├── sigacts-1.img
├── sigacts-245.img
├── sigacts-246.img
├── sigacts-50.img
├── sigacts-52.img
├── sigacts-95.img
├── sigacts-96.img
├── sigacts-97.img
├── sigacts-98.img
├── signalfd.img
├── stats-dump
├── timerfd.img
├── tmpfs-dev-104.tar.gz.img
├── tmpfs-dev-109.tar.gz.img
├── tmpfs-dev-110.tar.gz.img
├── tmpfs-dev-112.tar.gz.img
├── tmpfs-dev-114.tar.gz.img
├── tty.info
├── unixsk.img
├── userns-13.img
└── utsns-11.img

0 directories, 154 files

Restoring the container can be done with a simple:

stgraber@dakara:~$ lxc start c1

Live migration

Live migration is basically the same as the stateful stop/start above, except that the container’s directory and configuration are moved to another machine as well.

stgraber@dakara:~$ lxc list c1
+------+---------+-----------------------+----------------------------------------------+------------+-----------+
| NAME |  STATE  |          IPV4         |                     IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+-----------------------+----------------------------------------------+------------+-----------+
| c1   | RUNNING | 10.178.150.197 (eth0) | 2001:470:b368:4242:216:3eff:fe19:27b0 (eth0) | PERSISTENT | 2         |
+------+---------+-----------------------+----------------------------------------------+------------+-----------+

stgraber@dakara:~$ lxc list s-tollana:
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

stgraber@dakara:~$ lxc move c1 s-tollana:

stgraber@dakara:~$ lxc list c1
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

stgraber@dakara:~$ lxc list s-tollana:
+------+---------+-----------------------+----------------------------------------------+------------+-----------+
| NAME |  STATE  |          IPV4         |                     IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+-----------------------+----------------------------------------------+------------+-----------+
| c1   | RUNNING | 10.178.150.197 (eth0) | 2001:470:b368:4242:216:3eff:fe19:27b0 (eth0) | PERSISTENT | 2         |
+------+---------+-----------------------+----------------------------------------------+------------+-----------+

Limitations

As I said before, checkpoint/restore of containers is still pretty new and we’re still very much working on this feature, fixing issues as we are made aware of them. We do need more people trying this feature and sending us feedback; I would, however, not recommend using this in production just yet.

The current list of issues we’re tracking is available on Launchpad.

We expect a basic Ubuntu container with a few services to work properly with CRIU in Ubuntu 16.04. However, more complex containers using device passthrough, complex network services or special storage configurations are likely to fail.

Whenever possible, CRIU will fail at dump time, rather than at restore time. In such cases, the source container will keep running, the snapshot or migration will simply fail and a log file will be generated for debugging.

In rare cases, CRIU fails to restore the container, in which case the source container will still be around but will be stopped and will have to be manually restarted.

Sending bug reports

We’re tracking bugs related to checkpoint/restore against the CRIU Ubuntu package on Launchpad. Most of the work to fix those bugs will then happen upstream either on CRIU itself or the Linux kernel, but it’s easier for us to track things this way.

To file a new bug report, head here.

Please make sure to include:

  • The command you ran and the error message as displayed to you
  • Output of “lxc info” (*)
  • Output of “lxc info <container name>”
  • Output of “lxc config show --expanded <container name>”
  • Output of “dmesg” (*)
  • Output of “/proc/self/mountinfo” (*)
  • Output of “lxc exec <container name> -- cat /proc/self/mountinfo”
  • Output of “uname -a” (*)
  • The content of /var/log/lxd.log (*)
  • The content of /etc/default/lxd-bridge (*)
  • A tarball of /var/log/lxd/<container name>/ (*)

If reporting a migration bug as opposed to a stateful snapshot or stateful stop bug, please include the data for both the source and target for any of the above which has been marked with a (*).

Extra information

The CRIU website can be found at: https://criu.org

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

The next post in the LXD series is currently blocked on a pending kernel fix, so I figured I’d do an out-of-series post on how to use the LXD API directly.

LXD logo

Setting up the LXD daemon

The LXD REST API can be accessed over either a local Unix socket or over HTTPS. The protocol in both cases is identical, the only difference being that the Unix socket connection is plain text, relying on the filesystem for authentication.

To enable remote connections to your LXD daemon, run:

lxc config set core.https_address "[::]:8443"

This will have it bind all addresses on port 8443.
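
If you’d rather only listen on a specific address, pass that address instead of the wildcard (the IP below is just an example):

lxc config set core.https_address "192.168.1.100:8443"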

To set up a trust relationship with a new client, a password is required; you can set one with:

lxc config set core.trust_password <some random password>

Local or remote

curl over unix socket

As mentioned above, the Unix socket doesn’t need authentication, so with a recent version of curl, you can just do:

stgraber@castiana:~$ curl --unix-socket /var/lib/lxd/unix.socket s/
{"type":"sync","status":"Success","status_code":200,"metadata":["/1.0"]}

Not the most readable output. You can make it a lot more readable by using jq:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket s/ | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": [
  "/1.0"
 ]
}

curl over the network (and client authentication)

The REST API is authenticated by the use of client certificates. LXD generates one when you first use the command line client, so we’ll be using that one, but you could generate your own with openssl if you wanted to.
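
If you’d rather generate your own certificate than reuse the command line client’s, a minimal openssl sketch (the file names are just examples) would be:

openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes -keyout my-client.key -out my-client.crt -subj "/CN=my-client"

You’d then point curl’s --cert and --key options at those files instead of the ones under ~/.config/lxc/.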

First, let’s confirm that this particular certificate isn’t trusted:

curl -s -k --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/1.0 | jq .metadata.auth
"untrusted"

Now, let’s tell the server to add it by giving it the password that we set earlier:

stgraber@castiana:~$ curl -s -k --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/1.0/certificates -X POST -d '{"type": "client", "password": "some-password"}' | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {}
}

And now confirm that we are properly authenticated:

stgraber@castiana:~$ curl -s -k --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/1.0 | jq .metadata.auth
"trusted"

And confirm that things look the same as over the Unix socket:

stgraber@castiana:~$ curl -s -k --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/ | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": [
  "/1.0"
 ]
}

Walking through the API

To keep the commands short, all my examples will be using the local Unix socket; you can add the arguments shown above to run them over the HTTPS connection instead.

Note that in an untrusted environment (so anything but localhost), you should also verify the server’s certificate rather than blindly skipping verification with curl’s -k flag, so that you can confirm that you’re talking to the right machine and aren’t the target of a man-in-the-middle attack.
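
In curl terms, that means dropping -k and pointing --cacert at the expected certificate instead. A hedged sketch, assuming you’ve copied the server’s certificate (typically /var/lib/lxd/server.crt on the LXD host) to a local server.crt and that the address you connect to is listed in that certificate:

curl -s --cacert server.crt --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/1.0 | jq .metadata.auth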

Server information

You can get server runtime information with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0 | jq .
 {
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {
  "api_extensions": [],
  "api_status": "stable",
  "api_version": "1.0",
  "auth": "trusted",
  "config": {
   "core.https_address": "[::]:8443",
   "core.trust_password": true,
   "storage.zfs_pool_name": "encrypted/lxd"
  },
  "environment": {
   "addresses": [
    "192.168.54.140:8443",
    "10.212.54.1:8443",
    "[2001:470:b368:4242::1]:8443"
   ],
   "architectures": [
    "x86_64",
    "i686"
   ],
   "certificate": "BIG PEM BLOB",
   "driver": "lxc",
   "driver_version": "2.0.0",
   "kernel": "Linux",
   "kernel_architecture": "x86_64",
   "kernel_version": "4.4.0-18-generic",
   "server": "lxd",
   "server_pid": 26227,
   "server_version": "2.0.0",
   "storage": "zfs",
   "storage_version": "5"
  },
  "public": false
 }
}

Everything except the config section is read-only and so doesn’t need to be sent back when updating. Say we want to unset that trust password and have LXD stop listening over HTTPS; we can do that with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X PUT -d '{"config": {"storage.zfs_pool_name": "encrypted/lxd"}}' a/1.0 | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {}
}

Operations

For anything that could take more than a second, LXD will use a background operation. That’s to make it easier for the client to do multiple requests in parallel and to limit the number of connections to the server.

You can list all current operations with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {
  "running": [
   "/1.0/operations/008bc02e-21a0-4070-a28c-633b79a46517"
  ]
 }
}

And get more details on it with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations/008bc02e-21a0-4070-a28c-633b79a46517 | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {
  "id": "008bc02e-21a0-4070-a28c-633b79a46517",
  "class": "task",
  "created_at": "2016-04-18T22:24:54.469437937+01:00",
  "updated_at": "2016-04-18T22:25:22.42813972+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/blah"
   ]
  },
  "metadata": {
   "download_progress": "48%"
  },
  "may_cancel": false,
  "err": ""
 }
}

In this case, the operation is me creating a new container called “blah”, and its metadata is tracking the progress of the required image download, in this case the Ubuntu 14.04 image.

You can subscribe to all operation notifications by using the /1.0/events websocket, or if your client isn’t that smart, you can just block on the operation with:

curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations/b1f57056-c79b-4d3c-94bf-50b5c47a85ad/wait | jq .

Which will print a copy of the operation status (same as above) once the operation reaches a terminal state (success, failure or canceled).
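
If you just want to watch the event stream rather than write your own websocket client, the stock command line client can follow it for you (it should be reading from that same events API under the hood):

lxc monitor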

The other endpoints

The REST API currently has the following endpoints:

  • /
    • /1.0
      • /1.0/certificates
        • /1.0/certificates/<fingerprint>
      • /1.0/containers
        • /1.0/containers/<name>
          • /1.0/containers/<name>/exec
        • /1.0/containers/<name>/files
        • /1.0/containers/<name>/snapshots
        • /1.0/containers/<name>/snapshots/<name>
        • /1.0/containers/<name>/state
        • /1.0/containers/<name>/logs
        • /1.0/containers/<name>/logs/<logfile>
      • /1.0/events
      • /1.0/images
        • /1.0/images/<fingerprint>
          • /1.0/images/<fingerprint>/export
      • /1.0/images/aliases
        • /1.0/images/aliases/<name>
      • /1.0/networks
        • /1.0/networks/<name>
      • /1.0/operations
        • /1.0/operations/<uuid>
          • /1.0/operations/<uuid>/wait
        • /1.0/operations/<uuid>/websocket
      • /1.0/profiles
        • /1.0/profiles/<name>

Detailed documentation on the various actions for each of them can be found here.

Basic container life-cycle

Going through absolutely everything above would make this blog post enormous, so let’s just focus on the most basic things: creating a container, starting it, dealing with files a bit, creating a snapshot and deleting the whole thing.

Create

To create a container named “xenial” from an Ubuntu 16.04 image coming from https://cloud-images.ubuntu.com/daily (also known as ubuntu-daily:16.04 in the client), you need to run:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X POST -d '{"name": "xenial", "source": {"type": "image", "protocol": "simplestreams", "server": "https://cloud-images.ubuntu.com/daily", "alias": "16.04"}}' a/1.0/containers | jq .
{
 "type": "async",
 "status": "Operation created",
 "status_code": 100,
 "metadata": {
  "id": "e2714931-470e-452a-807c-c1be19cdff0d",
  "class": "task",
  "created_at": "2016-04-18T22:36:20.935649438+01:00",
  "updated_at": "2016-04-18T22:36:20.935649438+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": null,
  "may_cancel": false,
  "err": ""
 },
 "operation": "/1.0/operations/e2714931-470e-452a-807c-c1be19cdff0d"
}

This confirms that the container creation request was received. We can check for progress with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations/e2714931-470e-452a-807c-c1be19cdff0d | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {
  "id": "e2714931-470e-452a-807c-c1be19cdff0d",
  "class": "task",
  "created_at": "2016-04-18T22:36:20.935649438+01:00",
  "updated_at": "2016-04-18T22:36:31.135038483+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": {
  "download_progress": "19%"
 },
 "may_cancel": false,
 "err": ""
 }
}

And finally wait until it’s done with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations/e2714931-470e-452a-807c-c1be19cdff0d/wait | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {
  "id": "e2714931-470e-452a-807c-c1be19cdff0d",
  "class": "task",
  "created_at": "2016-04-18T22:36:20.935649438+01:00",
  "updated_at": "2016-04-18T22:38:01.385511623+01:00",
  "status": "Success",
  "status_code": 200,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": {
   "download_progress": "100%"
  },
  "may_cancel": false,
  "err": ""
 }
}

Start

Starting the container is done by modifying its running state:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X PUT -d '{"action": "start"}' a/1.0/containers/xenial/state | jq .
{
 "type": "async",
 "status": "Operation created",
 "status_code": 100,
 "metadata": {
  "id": "614ac9f0-f0fc-4351-9e6f-14710fd93542",
  "class": "task",
  "created_at": "2016-04-18T22:39:42.766830946+01:00",
  "updated_at": "2016-04-18T22:39:42.766830946+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": null,
  "may_cancel": false,
  "err": ""
 },
 "operation": "/1.0/operations/614ac9f0-f0fc-4351-9e6f-14710fd93542"
}

If you’re doing this by hand as I am right now, there’s no practical way to access that operation and wait for it to finish: it completes very quickly and data about past operations disappears 5 seconds after they’re done.
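
If you’re scripting this rather than typing it by hand, a small sketch (assuming jq is installed) is to capture the operation URL from the response and wait on it straight away:

OP=$(curl -s --unix-socket /var/lib/lxd/unix.socket -X PUT -d '{"action": "start"}' a/1.0/containers/xenial/state | jq -r .operation)
curl -s --unix-socket /var/lib/lxd/unix.socket "a${OP}/wait" | jq .metadata.status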

You can however check the container state:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/state | jq .metadata.status
"Running"

Or even get its IP address(es) with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/state | jq .metadata.network.eth0.addresses
[
 {
  "family": "inet",
  "address": "10.212.54.43",
  "netmask": "24",
  "scope": "global"
 },
 {
  "family": "inet6",
  "address": "2001:470:b368:4242:216:3eff:fe17:331c",
  "netmask": "64",
  "scope": "global"
 },
 {
  "family": "inet6",
  "address": "fe80::216:3eff:fe17:331c",
  "netmask": "64",
  "scope": "link"
 }
]

Read a file

Reading a file from the container is ridiculously easy:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/files?path=/etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Push a file

Pushing a file is only slightly more involved because you need to set the Content-Type to application/octet-stream:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X POST -H "Content-Type: application/octet-stream" -d 'abc' a/1.0/containers/xenial/files?path=/tmp/a
{"type":"sync","status":"Success","status_code":200,"metadata":{}}

We can then confirm it worked with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/files?path=/tmp/a
abc
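
To push a real file from disk rather than an inline string, curl’s --data-binary option does the job; the file API should also accept X-LXD-uid, X-LXD-gid and X-LXD-mode headers to control ownership and permissions. A sketch, assuming a local file called hello.txt:

curl -s --unix-socket /var/lib/lxd/unix.socket -X POST -H "Content-Type: application/octet-stream" -H "X-LXD-mode: 0644" --data-binary @hello.txt "a/1.0/containers/xenial/files?path=/tmp/hello.txt"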

Snapshot

To make a snapshot, just run:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X POST -d '{"name": "my-snapshot"}' a/1.0/containers/xenial/snapshots | jq .
{
 "type": "async",
 "status": "Operation created",
 "status_code": 100,
 "metadata": {
  "id": "d68141de-0c13-419c-a21c-13e30de29154",
  "class": "task",
  "created_at": "2016-04-18T22:54:04.148986484+01:00",
  "updated_at": "2016-04-18T22:54:04.148986484+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": null,
  "may_cancel": false,
  "err": ""
 },
 "operation": "/1.0/operations/d68141de-0c13-419c-a21c-13e30de29154"
}

And you can then get all the details about it:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/snapshots/my-snapshot | jq .
{
 "type": "sync",
 "status": "Success",
 "status_code": 200,
 "metadata": {
  "architecture": "x86_64",
  "config": {
   "volatile.base_image": "0b06c2858e2efde5464906c93eb9593a29bf46d069cf8d007ada81e5ab80613c",
   "volatile.eth0.hwaddr": "00:16:3e:17:33:1c",
   "volatile.last_state.idmap": "[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]"
  },
  "created_at": "2016-04-18T21:54:04Z",
  "devices": {
   "root": {
    "path": "/",
    "type": "disk"
   }
  },
  "ephemeral": false,
  "expanded_config": {
   "volatile.base_image": "0b06c2858e2efde5464906c93eb9593a29bf46d069cf8d007ada81e5ab80613c",
   "volatile.eth0.hwaddr": "00:16:3e:17:33:1c",
   "volatile.last_state.idmap": "[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]"
  },
  "expanded_devices": {
   "eth0": {
    "name": "eth0",
    "nictype": "bridged",
    "parent": "lxdbr0",
    "type": "nic"
   },
   "root": {
    "path": "/",
    "type": "disk"
   }
  },
  "name": "xenial/my-snapshot",
  "profiles": [
   "default"
  ],
  "stateful": false
 }
}

Delete

You can’t delete a running container, so first you must stop it with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X PUT -d '{"action": "stop", "force": true}' a/1.0/containers/xenial/state | jq .
{
 "type": "async",
 "status": "Operation created",
 "status_code": 100,
 "metadata": {
  "id": "97945ec9-f9b0-4fa8-aaba-06e41a9bc2a9",
  "class": "task",
  "created_at": "2016-04-18T22:56:18.28952729+01:00",
  "updated_at": "2016-04-18T22:56:18.28952729+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": null,
  "may_cancel": false,
  "err": ""
 },
 "operation": "/1.0/operations/97945ec9-f9b0-4fa8-aaba-06e41a9bc2a9"
}

Then you can delete it with:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X DELETE a/1.0/containers/xenial | jq .
{
 "type": "async",
 "status": "Operation created",
 "status_code": 100,
 "metadata": {
  "id": "439bf4a1-e056-4b76-86ad-bff06169fce1",
  "class": "task",
  "created_at": "2016-04-18T22:56:22.590239576+01:00",
  "updated_at": "2016-04-18T22:56:22.590239576+01:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
   "containers": [
    "/1.0/containers/xenial"
   ]
  },
  "metadata": null,
  "may_cancel": false,
  "err": ""
 },
 "operation": "/1.0/operations/439bf4a1-e056-4b76-86ad-bff06169fce1"
}

And confirm it’s gone:

stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/containers/xenial | jq .
{
 "error": "not found",
 "error_code": 404,
 "type": "error"
}

Conclusion

The LXD API has been designed to be simple yet powerful: it can easily be used by even the simplest of clients, yet it also supports advanced features that allow more complex clients to be very efficient.

Our REST API is stable, which means that any change we make to it will be fully backward compatible with the API as it was in LXD 2.0. We will only be making additions to it; there will be no removals or changes of behavior for the existing endpoints.

Support for new features can be detected by the client by looking at the “api_extensions” list from GET /1.0. We currently do not advertise any but will no doubt make use of this very soon.
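
Checking that list from a script is a one-liner using the same curl and jq approach as above:

curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0 | jq .metadata.api_extensions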

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Stéphane Graber

This is the eighth blog post in this series about LXD 2.0.

LXD logo

Introduction

In the previous post I covered how to run Docker inside LXD, which is a good way to get access to the portfolio of applications provided by Docker while running in the safety of the LXD environment.

One use case I mentioned was offering an LXD container to your users and then having them use their container to run Docker. Well, what if they themselves want to run other Linux distributions inside their container using LXD, or even give another group of people access to a Linux system by running a container for them?

Turns out, LXD makes it very simple to allow your users to run nested containers.

Nesting LXD

The simplest case can be shown using an Ubuntu 16.04 image. Ubuntu 16.04 cloud images come with LXD pre-installed. The daemon itself isn’t running since it’s socket-activated, so it doesn’t use any resources until you actually talk to it.

So let’s start an Ubuntu 16.04 container with nesting enabled:

lxc launch ubuntu-daily:16.04 c1 -c security.nesting=true

You can also set the security.nesting key on an existing container with:

lxc config set <container name> security.nesting true

Or for all containers using a particular profile with:

lxc profile set <profile name> security.nesting true

With that container started, you can now get a shell inside it, configure LXD and spawn a container:

stgraber@dakara:~$ lxc launch ubuntu-daily:16.04 c1 -c security.nesting=true
Creating c1
Starting c1

stgraber@dakara:~$ lxc exec c1 bash
root@c1:~# lxd init
Name of the storage backend to use (dir or zfs): dir

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no)? yes
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
LXD has been successfully configured.

root@c1:~# lxc launch ubuntu:14.04 trusty
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init

Creating trusty
Retrieving image: 100%
Starting trusty

root@c1:~# lxc list
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+
|  NAME  |  STATE  |         IPV4          |                     IPV6                     |    TYPE    | SNAPSHOTS |
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+
| trusty | RUNNING | 10.153.141.124 (eth0) | fd7:f15d:d1d6:da14:216:3eff:fef1:4002 (eth0) | PERSISTENT | 0         |
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+
root@c1:~#

It really is that simple!

The online demo server

As this post is pretty short, I figured I would spend a bit of time talking about the demo server we’re running. We also just reached the 10,000 sessions mark earlier today!

That server is basically just a normal LXD running inside a pretty beefy virtual machine with a tiny daemon implementing the REST API used by our website.

When you accept the terms of service, a new LXD container is created for you with security.nesting enabled as we saw above. You are then attached to that container as you would be when using “lxc exec”, except that we’re doing it using websockets and JavaScript.

The containers you then create inside this environment are all nested LXD containers.
You can then nest even further in there if you want to.

We use the whole range of LXD resource limits to prevent one user’s actions from impacting the others, and we monitor the server pretty closely for any sign of abuse.
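
Those limits are just regular LXD configuration keys applied to each session’s container. A sketch of the kind of keys involved, with a purely illustrative container name and values:

lxc config set c1 limits.memory 256MB
lxc config set c1 limits.cpu 1
lxc config set c1 limits.processes 200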

If you want to run your own similar server, you can grab the code for our website and the daemon with:

git clone https://github.com/lxc/linuxcontainers.org
git clone https://github.com/lxc/lxd-demo-server

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Stéphane Graber

This is the seventh blog post in this series about LXD 2.0.

LXD logo

Why run Docker inside LXD

As I briefly covered in the first post of this series, LXD’s focus is system containers. That is, we run a full unmodified Linux distribution inside our containers. LXD, for all intents and purposes, doesn’t care about the workload running in the container. It just sets up the container namespaces and security policies, then spawns /sbin/init and waits for the container to stop.

Application containers such as those implemented by Docker or Rkt are pretty different in that they are used to distribute applications, typically run a single main process and tend to be much more ephemeral than an LXD container.

Those two container types aren’t mutually exclusive and we certainly see the value of using Docker containers to distribute applications. That’s why we’ve been working hard over the past year to make it possible to run Docker inside LXD.

This means that with Ubuntu 16.04 and LXD 2.0, you can create containers for your users who will then be able to connect into them just like a normal Ubuntu system and then run Docker to install the services and applications they want.

Requirements

There are a lot of moving pieces needed to make all of this work, and we got them all included in Ubuntu 16.04:

  • A kernel with CGroup namespace support (4.4 Ubuntu or 4.6 mainline)
  • LXD 2.0 using LXC 2.0 and LXCFS 2.0
  • A custom version of Docker (or one built with all the patches that we submitted)
  • A Docker image which behaves when confined by user namespaces, or alternatively make the parent LXD container a privileged container (security.privileged=true)

Running a basic Docker workload

Enough talking, let’s run some Docker containers!

First of all, you need an Ubuntu 16.04 container which you can get with:

lxc launch ubuntu-daily:16.04 docker -p default -p docker

The “-p default -p docker” instructs LXD to apply both the “default” and “docker” profiles to the container. The default profile contains the basic network configuration while the docker profile tells LXD to load a few required kernel modules and set up some mounts for the container. The docker profile also enables container nesting.
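
If you’re curious about what that docker profile actually contains on your system, you can inspect it with:

lxc profile show docker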

Now let’s make sure the container is up to date and install Docker:

lxc exec docker -- apt update
lxc exec docker -- apt dist-upgrade -y
lxc exec docker -- apt install docker.io -y

And that’s it! You’ve got Docker installed and running in your container.
Now let’s start a basic web service made of two Docker containers:

stgraber@dakara:~$ lxc exec docker -- docker run --detach --name app carinamarina/hello-world-app
Unable to find image 'carinamarina/hello-world-app:latest' locally
latest: Pulling from carinamarina/hello-world-app
efd26ecc9548: Pull complete 
a3ed95caeb02: Pull complete 
d1784d73276e: Pull complete 
72e581645fc3: Pull complete 
9709ddcc4d24: Pull complete 
2d600f0ec235: Pull complete 
c4cf94f61cbd: Pull complete 
c40f2ab60404: Pull complete 
e87185df6de7: Pull complete 
62a11c66eb65: Pull complete 
4c5eea9f676d: Pull complete 
498df6a0d074: Pull complete 
Digest: sha256:6a159db50cb9c0fbe127fb038ed5a33bb5a443fcdd925ec74bf578142718f516
Status: Downloaded newer image for carinamarina/hello-world-app:latest
c8318f0401fb1e119e6c5bb23d1e706e8ca080f8e44b42613856ccd0bf8bfb0d

stgraber@dakara:~$ lxc exec docker -- docker run --detach --name web --link app:helloapp -p 80:5000 carinamarina/hello-world-web
Unable to find image 'carinamarina/hello-world-web:latest' locally
latest: Pulling from carinamarina/hello-world-web
efd26ecc9548: Already exists 
a3ed95caeb02: Already exists 
d1784d73276e: Already exists 
72e581645fc3: Already exists 
9709ddcc4d24: Already exists 
2d600f0ec235: Already exists 
c4cf94f61cbd: Already exists 
c40f2ab60404: Already exists 
e87185df6de7: Already exists 
f2d249ff479b: Pull complete 
97cb83fe7a9a: Pull complete 
d7ce7c58a919: Pull complete 
Digest: sha256:c31cf04b1ab6a0dac40d0c5e3e64864f4f2e0527a8ba602971dab5a977a74f20
Status: Downloaded newer image for carinamarina/hello-world-web:latest
d7b8963401482337329faf487d5274465536eebe76f5b33c89622b92477a670f

With those two Docker containers now running, we can then get the IP address of our LXD container and access the service!

stgraber@dakara:~$ lxc list
+--------+---------+----------------------+----------------------------------------------+------------+-----------+
|  NAME  |  STATE  |         IPV4         |                      IPV6                    |    TYPE    | SNAPSHOTS |
+--------+---------+----------------------+----------------------------------------------+------------+-----------+
| docker | RUNNING | 172.17.0.1 (docker0) | 2001:470:b368:4242:216:3eff:fe55:45f4 (eth0) | PERSISTENT | 0         |
|        |         | 10.178.150.73 (eth0) |                                              |            |           |
+--------+---------+----------------------+----------------------------------------------+------------+-----------+

stgraber@dakara:~$ curl http://10.178.150.73
The linked container said... "Hello World!"

Conclusion

That’s it! It’s really that simple to run Docker containers inside a LXD container.

Now as I mentioned earlier, not all Docker images will behave as well as my example; that’s typically because of the extra confinement that comes with LXD, specifically the user namespace.

Only the overlayfs storage driver of Docker works in this mode. That storage driver may come with its own set of limitations, which can further limit how many images will work in this environment.
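
To double-check which storage driver Docker ended up using inside the container, something along these lines should do:

lxc exec docker -- docker info | grep -i "storage driver"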

If your workload doesn’t work properly and you trust the user inside the LXD container, you can try:

lxc config set docker security.privileged true
lxc restart docker

That will deactivate the user namespace and run the container in privileged mode.
Note however that in this mode, root inside the container is the same uid as root on the host. There are a number of known ways for users to escape such containers and gain root privileges on the host, so you should only ever do that if you trust the user inside your LXD container with root privileges on the host.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Stéphane Graber

This is the sixth blog post in this series about LXD 2.0.

LXD logo

Remote protocols

LXD 2.0 supports two protocols:

  • LXD 1.0 API: That’s the REST API used between the clients and a LXD daemon as well as between LXD daemons when copying/moving images and containers.
  • Simplestreams: The Simplestreams protocol is a read-only, image-only protocol used by both the LXD client and daemon to get image information and import images from some public image servers (like the Ubuntu images).

Everything below will be using the first of those two.

Security

Authentication for the LXD API is done through client certificate authentication over TLS 1.2 using recent ciphers. When two LXD daemons must exchange information directly, a temporary token is generated by the source daemon and transferred through the client to the target daemon. This token may only be used to access a particular stream and is immediately revoked once used, so it cannot be re-used.

To avoid man-in-the-middle attacks, the client tool also sends the certificate of the source server to the target. That means that for a particular download operation, the target server is provided with the source server URL, a one-time access token for the resource it needs and the certificate that the server is supposed to be using. This prevents MITM attacks and only gives temporary access to the object of the transfer.

Network requirements

LXD 2.0 uses a model where the target of an operation (the receiving end) is connecting directly to the source to fetch the data.

This means that you must ensure that the target server can connect to the source directly, updating any firewall rules as needed along the way.
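
For example, if the source host uses ufw, opening the LXD port to the target could look like this (with 1.2.3.4 standing in for the target server’s address):

ufw allow from 1.2.3.4 to any port 8443 proto tcp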

We have a plan to allow this to be reversed and also to allow proxying through the client itself for those rare cases where draconian firewalls are preventing any communication between the two hosts.

Interacting with remote hosts

Rather than having our users always provide hostnames or IP addresses and then validate certificate information whenever they want to interact with a remote host, LXD uses the concept of “remotes”.

By default, the only real LXD remote configured is “local:” which also happens to be the default remote (so you don’t have to type its name). The local remote uses the LXD REST API to talk to the local daemon over a unix socket.

Adding a remote

Say you have two machines with LXD installed, your local machine and a remote host that we’ll call “foo”.

First you need to make sure that “foo” is listening on the network and has a password set, so get a remote shell on it and run:

lxc config set core.https_address [::]:8443
lxc config set core.trust_password something-secure

Now on your local LXD, we just need to make it visible to the network so we can transfer containers and images from it:

lxc config set core.https_address [::]:8443

Now that the daemon configuration is done on both ends, you can add “foo” to your local client with:

lxc remote add foo 1.2.3.4

(replacing 1.2.3.4 by your IP address or FQDN)

You’ll see something like this:

stgraber@dakara:~$ lxc remote add foo 2607:f2c0:f00f:2770:216:3eff:fee1:bd67
Certificate fingerprint: fdb06d909b77a5311d7437cabb6c203374462b907f3923cefc91dd5fce8d7b60
ok (y/n)? y
Admin password for foo: 
Client certificate stored at server: foo

You can then list your remotes and you’ll see “foo” listed there:

stgraber@dakara:~$ lxc remote list
+-----------------+-------------------------------------------------------+---------------+--------+--------+
|      NAME       |                         URL                           |   PROTOCOL    | PUBLIC | STATIC |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| foo             | https://[2607:f2c0:f00f:2770:216:3eff:fee1:bd67]:8443 | lxd           | NO     | NO     |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| images          | https://images.linuxcontainers.org:8443               | lxd           | YES    | NO     |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| local (default) | unix://                                               | lxd           | NO     | YES    |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases              | simplestreams | YES    | YES    |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily                 | simplestreams | YES    | YES    |
+-----------------+-------------------------------------------------------+---------------+--------+--------+

Interacting with it

Ok, so we have a remote server defined, what can we do with it now?

Well, just about everything you saw in the posts until now, the only difference being that you must tell LXD what host to run against.

For example:

lxc launch ubuntu:14.04 c1

Will run on the default remote (“lxc remote get-default”) which is your local host.

lxc launch ubuntu:14.04 foo:c1

Will instead run on foo.
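
If most of your work is going to happen on “foo”, you can also make it the default remote and switch back whenever you want:

lxc remote set-default foo
lxc remote set-default local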

Listing running containers on a remote host can be done with:

stgraber@dakara:~$ lxc list foo:
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4        |                     IPV6                      |    TYPE    | SNAPSHOTS |
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
| c1   | RUNNING | 10.245.81.95 (eth0) | 2607:f2c0:f00f:2770:216:3eff:fe43:7994 (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+-----------------------------------------------+------------+-----------+

One thing to keep in mind is that you have to specify the remote host for both images and containers. So if you have a local image called “my-image” on “foo” and want to create a container called “c2” from it, you have to run:

lxc launch foo:my-image foo:c2

Finally, getting a shell into a remote container works just as you would expect:

lxc exec foo:c1 bash

Copying containers

Copying containers between hosts is as easy as it sounds:

lxc copy foo:c1 c2

And you’ll have a new local container called “c2” created from a copy of the remote “c1” container. This requires “c1” to be stopped first, but you could just copy a snapshot instead and do it while the source container is running:

lxc snapshot foo:c1 current
lxc copy foo:c1/current c3

Moving containers

Unless you’re doing live migration (which will be covered in a later post), you have to stop the source container prior to moving it, after which everything works as you’d expect.

lxc stop foo:c1
lxc move foo:c1 local:

This example is functionally identical to:

lxc stop foo:c1
lxc move foo:c1 c1

How this all works

Interactions with remote containers work as you would expect: rather than using the REST API over a local Unix socket, LXD just uses the exact same API over a remote HTTPS transport.

Where it gets a bit trickier is when interaction between two daemons must occur, as is the case for copy and move.

In those cases the following happens:

  1. The user runs “lxc move foo:c1 c1”.
  2. The client contacts the local: remote to check for an existing “c1” container.
  3. The client fetches container information from “foo”.
  4. The client requests a migration token from the source “foo” daemon.
  5. The client sends that migration token as well as the source URL and “foo”’s certificate to the local LXD daemon alongside the container configuration and devices.
  6. The local LXD daemon then connects directly to “foo” using the provided token
    1. It connects to a first control websocket
    2. It negotiates the filesystem transfer protocol (zfs send/receive, btrfs send/receive or plain rsync)
    3. If available locally, it unpacks the image which was used to create the source container. This is to avoid needless data transfer.
    4. It then transfers the container and any of its snapshots as a delta.
  7. If successful, the client then instructs “foo” to delete the source container.

Try all this online

Don’t have two machines to try remote interactions and moving/copying containers?

That’s okay, you can test it all online using our demo service.
The included step-by-step walkthrough even covers it!

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net

Stéphane Graber

LXD logo

Introduction

And that’s it!
After a year and a half of intense work by the LXD team, LXD 2.0 has been released today!

LXD 2.0 is our first production-ready release and also a Long Term Support release, meaning that we will be supporting it with frequent bugfix releases until the 1st of June 2021.

This also completes our collection of 2.0 container tools with LXC 2.0, LXCFS 2.0 and now LXD 2.0 all having been released over the past couple of weeks.

Getting started with LXD

I have recently been writing a bit about LXD 2.0; those posts are a great starting point for understanding LXD’s goals and starting to use it for your own containers.

LXD 2.0 is now available in Ubuntu 16.04, Ubuntu 14.04 (through backports) and in the Ubuntu Core Store.

We expect other Linux distributions to pick it up over the next few weeks!

More information on how to install it can be found here.

Try it online

If you just want to see what LXD is all about without having to start a virtual machine or install it on your own machine, you can try it online straight from our website.

Just head to: https://linuxcontainers.org/lxd/try-it

Project information

Upstream website: https://linuxcontainers.org/lxd/
Release announcement: https://linuxcontainers.org/lxd/news/
Code: https://github.com/lxc/lxd
IRC channel: #lxcontainers on irc.freenode.net
Mailing-lists: https://lists.linuxcontainers.org

Stéphane Graber

LXD logo

Introduction

Today I’m very pleased to announce the release of LXC 2.0, our second Long Term Support Release! LXC 2.0 is the result of a year of work by the LXC community with over 700 commits done by over 90 contributors!

It joins LXCFS 2.0 which was released last week and will very soon be joined by LXD 2.0 to complete our collection of 2.0 container management tools!

What’s new?

The complete changelog is linked below but the main highlights for me are:

  • More consistent user experience between the various LXC tools.
  • Improved checkpoint/restore support.
  • Complete rework of our CGroup handling code, including support for the CGroup namespace.
  • Cleaned up storage backend subsystem, including the addition of a new Ceph RBD backend.
  • A massive amount of bugfixes.
  • And lastly, we managed to get all that done without breaking our API, so LXC 2.0 is fully API compatible with LXC 1.0.

The focus with this release was stability and maintaining support for all the environments in which LXC shines. We still support all kernels from 2.6.32 onwards, though the exact feature set obviously varies based on available kernel features. We also improved support for a bunch of architectures and fixed a lot of bugs and other rough edges.

This is the release you want to run in production for the next few years!

Support length

As mentioned, LXC 2.0 is a Long Term Support release.
This is the second time we have done such a release, the first being LXC 1.0.

Long Term Support releases come with a 5-year commitment from upstream to provide bugfixes and security updates, and to release new point releases when enough fixes have accumulated.

The end of life dates for the various LXC versions are as follows:

  • LXC 1.0, released February 2014 will EOL on the 1st of June 2019
  • LXC 1.1, released February 2015 will EOL on the 1st of September 2016
  • LXC 2.0, released April 2016 will EOL on the 1st of June 2021

We therefore very strongly recommend that LXC 1.1 users update to LXC 2.0, as we will not be supporting that release for much longer.

We also recommend production deployments stick to our Long Term Support release.

Project information

Upstream website: https://linuxcontainers.org/lxc/
Release announcement: https://linuxcontainers.org/lxc/news/
Code: https://github.com/lxc/lxc
IRC channel: #lxcontainers on irc.freenode.net
Mailing-lists: https://lists.linuxcontainers.org

Try it online

Want to see what a container with LXC 2.0 installed feels like?
You can get one online to play with here.
