Canonical Voices

Colin Ian King

The stress-ng logo
The latest release of stress-ng contains a mechanism to measure latencies via a cyclic latency test.  Essentially this is just a loop that cycles around performing high precision sleeps and measures the extra latency (overhead) taken to perform the sleep compared to the expected sleep time.  This loop runs with either the Round-Robin (rr) or the First-In-First-Out (fifo) real time scheduling policy.

The cyclic test can be configured to specify the sleep time (in nanoseconds), the scheduling type (rr or fifo),  the scheduling priority (1 to 100) and also the sleep method (explained later).

The first 10,000 latency measurements are used to compute various latency statistics:
  • mean latency (aka the 'average')
  • modal latency (the most 'popular' latency)
  • minimum latency
  • maximum latency
  • standard deviation
  • latency percentiles (25%, 50%, 75%, 90%, 95.40%, 99.0%, 99.5%, 99.9% and 99.99%)
  • latency distribution (enabled with the --cyclic-dist option)
The latency percentiles indicate the latency below which a given percentage of the samples fall.  For example, the 99% percentile for the 10,000 samples is the latency that 9,900 of the samples are equal to or below.

The latency distribution is shown when the --cyclic-dist option is used; one has to specify the distribution interval in nanoseconds and up to the first 100 values in the distribution are output.

For an idle machine, one can invoke just the cyclic measurements with stress-ng as follows:

 sudo stress-ng --cyclic 1 --cyclic-policy fifo \
--cyclic-prio 100 --cyclic-method clock_ns \
--cyclic-sleep 20000 --cyclic-dist 1000 -t 5
stress-ng: info: [27594] dispatching hogs: 1 cyclic
stress-ng: info: [27595] stress-ng-cyclic: sched SCHED_FIFO: 20000 ns delay, 10000 samples
stress-ng: info: [27595] stress-ng-cyclic: mean: 5242.86 ns, mode: 4880 ns
stress-ng: info: [27595] stress-ng-cyclic: min: 3050 ns, max: 44818 ns, std.dev. 1142.92
stress-ng: info: [27595] stress-ng-cyclic: latency percentiles:
stress-ng: info: [27595] stress-ng-cyclic: 25.00%: 4881 ns
stress-ng: info: [27595] stress-ng-cyclic: 50.00%: 5191 ns
stress-ng: info: [27595] stress-ng-cyclic: 75.00%: 5261 ns
stress-ng: info: [27595] stress-ng-cyclic: 90.00%: 5368 ns
stress-ng: info: [27595] stress-ng-cyclic: 95.40%: 6857 ns
stress-ng: info: [27595] stress-ng-cyclic: 99.00%: 8942 ns
stress-ng: info: [27595] stress-ng-cyclic: 99.50%: 9821 ns
stress-ng: info: [27595] stress-ng-cyclic: 99.90%: 22210 ns
stress-ng: info: [27595] stress-ng-cyclic: 99.99%: 36074 ns
stress-ng: info: [27595] stress-ng-cyclic: latency distribution (1000 ns intervals):
stress-ng: info: [27595] stress-ng-cyclic: latency (ns) frequency
stress-ng: info: [27595] stress-ng-cyclic: 0 0
stress-ng: info: [27595] stress-ng-cyclic: 1000 0
stress-ng: info: [27595] stress-ng-cyclic: 2000 0
stress-ng: info: [27595] stress-ng-cyclic: 3000 82
stress-ng: info: [27595] stress-ng-cyclic: 4000 3342
stress-ng: info: [27595] stress-ng-cyclic: 5000 5974
stress-ng: info: [27595] stress-ng-cyclic: 6000 197
stress-ng: info: [27595] stress-ng-cyclic: 7000 209
stress-ng: info: [27595] stress-ng-cyclic: 8000 100
stress-ng: info: [27595] stress-ng-cyclic: 9000 50
stress-ng: info: [27595] stress-ng-cyclic: 10000 10
stress-ng: info: [27595] stress-ng-cyclic: 11000 9
stress-ng: info: [27595] stress-ng-cyclic: 12000 2
stress-ng: info: [27595] stress-ng-cyclic: 13000 2
stress-ng: info: [27595] stress-ng-cyclic: 14000 1
stress-ng: info: [27595] stress-ng-cyclic: 15000 9
stress-ng: info: [27595] stress-ng-cyclic: 16000 1
stress-ng: info: [27595] stress-ng-cyclic: 17000 1
stress-ng: info: [27595] stress-ng-cyclic: 18000 0
stress-ng: info: [27595] stress-ng-cyclic: 19000 0
stress-ng: info: [27595] stress-ng-cyclic: 20000 0
stress-ng: info: [27595] stress-ng-cyclic: 21000 1
stress-ng: info: [27595] stress-ng-cyclic: 22000 1
stress-ng: info: [27595] stress-ng-cyclic: 23000 0
stress-ng: info: [27595] stress-ng-cyclic: 24000 1
stress-ng: info: [27595] stress-ng-cyclic: 25000 2
stress-ng: info: [27595] stress-ng-cyclic: 26000 0
stress-ng: info: [27595] stress-ng-cyclic: 27000 1
stress-ng: info: [27595] stress-ng-cyclic: 28000 1
stress-ng: info: [27595] stress-ng-cyclic: 29000 2
stress-ng: info: [27595] stress-ng-cyclic: 30000 0
stress-ng: info: [27595] stress-ng-cyclic: 31000 0
stress-ng: info: [27595] stress-ng-cyclic: 32000 0
stress-ng: info: [27595] stress-ng-cyclic: 33000 0
stress-ng: info: [27595] stress-ng-cyclic: 34000 0
stress-ng: info: [27595] stress-ng-cyclic: 35000 0
stress-ng: info: [27595] stress-ng-cyclic: 36000 1
stress-ng: info: [27595] stress-ng-cyclic: 37000 0
stress-ng: info: [27595] stress-ng-cyclic: 38000 0
stress-ng: info: [27595] stress-ng-cyclic: 39000 0
stress-ng: info: [27595] stress-ng-cyclic: 40000 0
stress-ng: info: [27595] stress-ng-cyclic: 41000 0
stress-ng: info: [27595] stress-ng-cyclic: 42000 0
stress-ng: info: [27595] stress-ng-cyclic: 43000 0
stress-ng: info: [27595] stress-ng-cyclic: 44000 1
stress-ng: info: [27594] successful run completed in 5.00s


Note that stress-ng needs to be invoked using sudo to enable the Real Time FIFO scheduling for the cyclic measurements.

The above example uses the following options:

  • --cyclic 1
    • starts one instance of the cyclic measurements (1 is always recommended)
  • --cyclic-policy fifo 
    • use the real time First-In-First-Out scheduling for the cyclic measurements
  • --cyclic-prio 100 
    • use the maximum scheduling priority  
  • --cyclic-method clock_ns
    • use the clock_nanosleep(2) system call to perform the high precision duration sleep
  • --cyclic-sleep 20000 
    • sleep for 20000 nanoseconds per cyclic iteration
  • --cyclic-dist 1000 
    • enable latency distribution statistics with an interval of 1000 nanoseconds between each data point.
  • -t 5
    • run for just 5 seconds
From the run above, we can see that 99.5% of the latencies were less than 9821 nanoseconds and that most samples clustered around the 4880 nanosecond modal point. The distribution data shows some clustering around the 5000 nanosecond point, with the samples tailing off in a fairly long tail.

Now for the interesting part. Since stress-ng is packed with many different stressors we can run these while performing the cyclic measurements, for example, we can tell stress-ng to run *all* the virtual memory related stress tests and see how this affects the latency distribution using the following:

 sudo stress-ng --cyclic 1 --cyclic-policy fifo \  
--cyclic-prio 100 --cyclic-method clock_ns \
--cyclic-sleep 20000 --cyclic-dist 1000 \
--class vm --all 1 -t 60s

…the above invokes all the vm class of stressors, running them all at the same time (with just one instance of each stressor) for 60 seconds.
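To see exactly which stressors make up the vm class, recent versions of stress-ng can list the members of a class (a quick sketch; the exact list varies between stress-ng versions):

 stress-ng --class vm?

Alternatively, specific stressors can be mixed in by hand rather than using --all, for example pairing the cyclic measurement with a couple of memory hogs:

 sudo stress-ng --cyclic 1 --cyclic-policy fifo \
--cyclic-prio 100 --cyclic-method clock_ns \
--cyclic-sleep 20000 --cyclic-dist 1000 \
--vm 2 --vm-bytes 1G -t 60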

The --cyclic-method option specifies the delay mechanism used on each of the 10,000 cyclic iterations.  The default (and recommended) method is clock_ns, using the high precision sleep.  The available cyclic delay methods are:
  • clock_ns (use the clock_nanosleep() sleep)
  • posix_ns (use the POSIX nanosleep() sleep)
  • itimer (use a high precision clock timer and pause to wait for a signal to measure latency)
  • poll (busy spin-wait on clock_gettime() to eat cycles for a delay)
All the delay mechanisms use the CLOCK_REALTIME system clock for timing.
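As a hedged example using only the options described above, the same five second measurement can be repeated with the round-robin policy and the poll method to compare the busy-wait delay against clock_ns:

 sudo stress-ng --cyclic 1 --cyclic-policy rr \
--cyclic-prio 100 --cyclic-method poll \
--cyclic-sleep 20000 --cyclic-dist 1000 -t 5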

I hope this provides enough cyclic measurement functionality to gather some useful latency benchmarks against various kernel components when using some or a mix of the stress-ng stressors.  Let me know if I am missing any other cyclic measurement options and I will see if I can add them.

Keep stressing and measuring those systems!

Read more
Dustin Kirkland

Thank you to Oracle Cloud for inviting me to speak at this month's CloudAustin Meetup hosted by Rackspace.

I very much enjoyed deploying Canonical Kubernetes on Ubuntu in the Oracle Cloud, and then exploring Kubernetes a bit, how it works, the architecture, and a simple workload within.  I'm happy to share my slides below, and you can download a PDF here:


If you're interested in learning more, check out:
It was a great audience, with plenty of good questions, pizza, and networking!

I'm pleased to share my slide deck here.

Cheers,
Dustin

Read more
admin

The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS and bug fixes in released MAAS versions.

MAAS Sprint

The Canonical MAAS team sprinted at Canonical’s London offices this week. The purpose was to review the previous development cycle & release (MAAS 2.2), as well as discuss and finalize the plans and goals for the next development release cycle (MAAS 2.3).

MAAS 2.3 (current development release)

The team has been working on the following features and improvements:

  • New Feature – support for ‘upstream’ proxy (API only) – Support for upstream proxies has landed in trunk. This iteration contains API-only support. The team continues to work on the matching UI support for this feature.
  • Codebase transition from bzr to git – This week the team focused efforts on updating all processes for the upcoming transition to Git. The progress so far is:
    • Prepared the MAAS CI infrastructure to fully support Git once the transition is complete.
    • Started working on creating new processes for auto-testing and landing of PRs.
  • Django 1.11 transition – The team continues to work through the Django 1.11 transition; we’re down to 130 unittest failures!
  • Network Beaconing & better network discovery – Prototype beacons have now been sent and received! The next steps will be to work on the full protocol implementation, followed by making use of beaconing to enhance rack registration. This will provide a better out-of-the-box experience for MAAS; interfaces which share network connectivity will no longer be assumed to be on separate fabrics.
  • Started the removal of ‘tgt’ as a dependency – This simplifies the boot process by not loading ephemeral images from tgt, but rather having the initrd download and load the ephemeral environment.
  • UI Improvements
    • Performance Improvements – Improved the loading of elements in the Device Discovery, Node listing and Events pages, which greatly improves UI performance.
    • LP #1695312 – The button to edit dynamic range says ‘Edit’ while it should say ‘Edit reserved range’
    • Remove auto-save on blur for the Fabric details summary row. Applied static content when not in edit mode.

Bug Fixes

The following issues have been fixed and backported to the MAAS 2.2 branch. These fixes will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

  • LP: #1678339 – allow physical (and bond) interfaces to be placed on VLANs with a known 802.1q tag.
  • LP: #1652298 – Improve loading of elements in the device discovery page

Read more
Sciri

Note: Community TFTP documentation is on the Ubuntu Wiki but this short guide adds extra steps to help secure and safeguard your TFTP server.

Every Data Centre Engineer should have a TFTP server somewhere on their network, whether it is running on a production host or on their own notebook for disaster recovery. And since TFTP is lightweight and has no user authentication, care should be taken to prevent access to, or overwriting of, critical files.

The following example is similar to the configuration I run on my personal Ubuntu notebook and home Ubuntu servers. This allows me to do switch firmware upgrades and backup configuration files regardless of environment since my notebook is always with me.

Step 1: Install TFTP and TFTP server

$ sudo apt update; sudo apt install tftp-hpa tftpd-hpa

Step 2: Configure TFTP server

The default configuration below allows switches and other devices to download files, but if you have predictable filenames then anyone can download those files once you run a TFTP server on your notebook. This can lead to dissemination of copyrighted firmware images or config files that may contain passwords and other sensitive information.

# /etc/default/tftpd-hpa

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"

Instead of keeping any files directly in the /var/lib/tftpboot base directory I’ll use mktemp to create incoming and outgoing directories with hard-to-guess names. This prevents guessing common filenames.

First create an outgoing directory owned by root mode 755. Files in this directory should be owned by root to prevent unauthorized or accidental overwriting. You wouldn’t want your expensive Cisco IOS firmware image accidentally or maliciously overwritten.

$ cd /var/lib/tftpboot
$ sudo chmod 755 $(sudo mktemp -d XXXXXXXXXX --suffix=-outgoing)

Next create an incoming directory owned by tftp, mode 700. This allows tftpd-hpa to create files in this directory if configured to do so.

$ sudo chown tftp:tftp $(sudo mktemp -d XXXXXXXXXX --suffix=-incoming)
$ ls -1
ocSZiwPCkH-outgoing
UHiI443eTG-incoming

Configure tftpd-hpa to allow creation of new files. Simply add --create to TFTP_OPTIONS in /etc/default/tftpd-hpa.

# /etc/default/tftpd-hpa

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure --create"

And lastly restart tftpd-hpa.

$ sudo /etc/init.d/tftpd-hpa restart
[ ok ] Restarting tftpd-hpa (via systemctl): tftpd-hpa.service.

Step 3: Firewall rules

If you have a software firewall enabled you’ll need to allow access to port 69/udp. Either add this rule to your firewall scripts if you manually configure iptables or run the following UFW command:

$ sudo ufw allow tftp
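If you manage iptables by hand instead, a minimal pair of rules along these lines should work (a sketch; with a default-deny policy you may also want the TFTP connection tracking helper so that the data transfer back from the server's ephemeral port is matched):

$ sudo modprobe nf_conntrack_tftp
$ sudo iptables -A INPUT -p udp --dport 69 -j ACCEPT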

Step 4: Transfer files

Before doing a firmware upgrade or other possibly destructive maintenance I always backup my switch config and firmware.

cisco-switch#copy running-config tftp://192.168.0.1/UHiI443eTG-incoming/config-cisco-switch
Address or name of remote host [192.168.0.1]? 
Destination filename [UHiI443eTG-incoming/config-cisco-switch]? 
 
 !!
3554 bytes copied in 0.388 secs (9160 bytes/sec)
cisco-switch#copy flash:?
flash:c1900-universalk9-mz.SPA.156-3.M2.bin flash:ccpexp flash:cpconfig-19xx.cfg flash:home.shtml
flash:vlan.dat

cisco-switch#copy flash:c1900-universalk9-mz.SPA.156-3.M2.bin tftp://192.168.0.1/UHiI443eTG-incoming/c1900-universalk9-mz.SPA.156-3.M2.bin 
Address or name of remote host [192.168.0.1]? 
Destination filename [UHiI443eTG-incoming/c1900-universalk9-mz.SPA.156-3.M2.bin]? 
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
85258084 bytes copied in 172.692 secs (493700 bytes/sec)

Files in incoming will be owned by tftp mode 666 (world writable) by default. Remember to move those files to your outgoing directory and change ownership to root mode 644 for safe keeping.
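For example, using the directory names generated earlier in this guide (substitute your own mktemp output), moving a backed-up config into the outgoing directory and locking it down looks like this:

$ cd /var/lib/tftpboot
$ sudo mv UHiI443eTG-incoming/config-cisco-switch ocSZiwPCkH-outgoing/
$ sudo chown root:root ocSZiwPCkH-outgoing/config-cisco-switch
$ sudo chmod 644 ocSZiwPCkH-outgoing/config-cisco-switch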

Once you’re sure your switch config and firmware is safely backed up it’s safe to copy new firmware to flash or do any other required destructive maintenance.

Step 5: Prevent TFTP access

It’s good practice on a notebook to deny services when not actively in-use. Assuming you have a software firewall be sure to deny access to your TFTP server when on the road or when connected to hostile networks.

$ sudo ufw deny tftp
Rule updated
Rule updated (v6)
$ sudo ufw status
Status: active

To Action From
-- ------ ----
CUPS ALLOW Anywhere 
OpenSSH DENY Anywhere 
69/udp DENY Anywhere 
CUPS (v6) ALLOW Anywhere (v6) 
OpenSSH (v6) DENY Anywhere (v6) 
69/udp (v6) DENY Anywhere (v6)

Read more
facundo


A couple of weeks ago the filing with the Inspección General de Justicia finally came back with nothing to review or amend... the Asociación Civil Python Argentina is now legally constituted!

The statute, fully stamped

We are now working flat out to obtain the CUIT from AFIP, which will let us open a bank account. That way the folks organizing PyCon will be able to give sponsors the green light to put in money.

Beyond helping with the organization of PyCon and other events, there are four things we want to push during the first couple of years:

  • Travel grants: because we believe there is a lot of value in people getting to know each other, we will try to help people travel to events organized around the country
  • Interpreting at events: if notable speakers who do not speak Spanish come to visit, do whatever we can so that most attendees can understand them
  • Course discounts: we are weighing up a couple of approaches
  • The PyAr website and other infrastructure: we need to take a step up in seriousness when it comes to maintaining the various services the group provides

For that (and for operating costs) we are basically going to need money :) The Asociación will fund itself in two main ways...

One is member contributions. The idea is that members, who benefit directly and indirectly from the Asociación Civil, chip in a small amount each month to help get things done.

The other mechanism is direct contributions from companies (from which we expect a somewhat larger amount, probably annual).

We will let you know the details of the mechanisms, amounts and so on. Stay tuned!

Read more
Alan Griffiths

Mir release 0.26.3

Mir 0.26.3 for all!

By itself Mir 0.26.3 isn’t a very interesting release, just a few minor bugfixes: [https://launchpad.net/mir/0.26/0.26.3]

The significant thing with Mir 0.26.3 is that we are making this version available across the latest releases of Ubuntu as well as 17.10 (Artful Aardvark). That is: Ubuntu 17.04 (Zesty Zapus), Ubuntu 16.10 (Yakkety Yak) and, last but not least, Ubuntu 16.04LTS (Xenial Xerus).

This is important to those developing Mir based snaps. Having Mir 0.26 in the 16.04LTS archive removes the need to build Mir based snaps using the “stable-phone-overlay” PPA.

Read more
Stéphane Graber

LXD logo

Introduction

As you may know, LXD uses unprivileged containers by default.
The difference between an unprivileged container and a privileged one is whether the root user in the container is the “real” root user (uid 0 at the kernel level).

The way unprivileged containers are created is by taking a set of normal UIDs and GIDs from the host, usually at least 65536 of each (to be POSIX compliant) and mapping those into the container.

The most common example and what most LXD users will end up with by default is a map of 65536 UIDs and GIDs, with a host base id of 100000. This means that root in the container (uid 0) will be mapped to the host uid 100000 and uid 65535 in the container will be mapped to uid 165535 on the host. UID/GID 65536 and higher in the container aren’t mapped and will return an error if you attempt to use them.

From a security point of view, that means that anything which is not owned by the users and groups mapped into the container will be inaccessible. Any such resource will show up as being owned by uid/gid “-1” (rendered as 65534 or nobody/nogroup in userspace). It also means that should there be a way to escape the container, even root in the container would find itself with no more privileges on the host than the nobody user.
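A quick way to see this mapping in practice is to create a file as root inside a container and then look at its ownership from the host. The sketch below assumes the default map described above, an existing container named test and the dir storage backend (so the container’s rootfs is visible under /var/lib/lxd); with other storage backends the path will differ:

$ lxc exec test -- touch /root/mapped-file
$ sudo ls -ln /var/lib/lxd/containers/test/rootfs/root/mapped-file

With the default map, ls should report the file as owned by uid/gid 100000 on the host rather than by root.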

LXD does offer a number of options related to unprivileged configuration:

  • Increasing the size of the default uid/gid map
  • Setting up per-container maps
  • Punching holes into the map to expose host users and groups

Increasing the size of the default map

As mentioned above, in most cases, LXD will have a default map that’s made of 65536 uids/gids.

In most cases you won’t have to change that. There are however a few cases where you may have to:

  • You need access to uid/gid higher than 65535.
    This is most common when using network authentication inside of your containers.
  • You want to use per-container maps.
    In which case you’ll need 65536 available uid/gid per container.
  • You want to punch some holes in your container’s map and need access to host uids/gids.

The default map is usually controlled by the “shadow” set of utilities and files. On systems where that’s the case, the “/etc/subuid” and “/etc/subgid” files are used to configure those maps.

On systems that do not have a recent enough version of the “shadow” package, LXD will assume that it doesn’t have to share uid/gid ranges with anything else and will therefore assume control of a billion uids and gids, starting at the host uid/gid 100000.

But the common case is a system with a recent version of shadow.
An example of what the configuration may look like is:

stgraber@castiana:~$ cat /etc/subuid
lxd:100000:65536
root:100000:65536

stgraber@castiana:~$ cat /etc/subgid
lxd:100000:65536
root:100000:65536

The maps for “lxd” and “root” should always be kept in sync. LXD itself is restricted by the “root” allocation. The “lxd” entry is used to track what needs to be removed if LXD is uninstalled.

Now, if you want to increase the size of the map available to LXD, simply edit both of the files and bump the last value from 65536 to whatever size you need. I tend to bump it to a billion just so I don’t ever have to think about it again:

stgraber@castiana:~$ cat /etc/subuid
lxd:100000:1000000000
root:100000:1000000000

stgraber@castiana:~$ cat /etc/subgid
lxd:100000:1000000000
root:100000:1000000000

After altering those files, you need to restart LXD to have it detect the new map:

root@vorash:~# systemctl restart lxd
root@vorash:~# cat /var/log/lxd/lxd.log
lvl=info msg="LXD 2.14 is starting in normal mode" path=/var/lib/lxd t=2017-06-14T21:21:13+0000
lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-06-14T21:21:13+0000
lvl=info msg="Kernel uid/gid map:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - u 0 0 4294967295" t=2017-06-14T21:21:13+0000
lvl=info msg=" - g 0 0 4294967295" t=2017-06-14T21:21:13+0000
lvl=info msg="Configured LXD uid/gid map:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - u 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
lvl=info msg=" - g 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
lvl=info msg="Connecting to a remote simplestreams server" t=2017-06-14T21:21:13+0000
lvl=info msg="Expiring log files" t=2017-06-14T21:21:13+0000
lvl=info msg="Done expiring log files" t=2017-06-14T21:21:13+0000
lvl=info msg="Starting /dev/lxd handler" t=2017-06-14T21:21:13+0000
lvl=info msg="LXD is socket activated" t=2017-06-14T21:21:13+0000
lvl=info msg="REST API daemon:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket t=2017-06-14T21:21:13+0000
lvl=info msg=" - binding TCP socket" socket=[::]:8443 t=2017-06-14T21:21:13+0000
lvl=info msg="Pruning expired images" t=2017-06-14T21:21:13+0000
lvl=info msg="Updating images" t=2017-06-14T21:21:13+0000
lvl=info msg="Done pruning expired images" t=2017-06-14T21:21:13+0000
lvl=info msg="Done updating images" t=2017-06-14T21:21:13+0000
root@vorash:~#

As you can see, the configured map is logged at LXD startup and can be used to confirm that the reconfiguration worked as expected.

You’ll then need to restart your containers to have them start using your newly expanded map.

Per container maps

Provided that you have a sufficient amount of uid/gid allocated to LXD, you can configure your containers to use their own, non-overlapping allocation of uids and gids.

This can be useful for two reasons:

  1. You are running software which alters kernel resource ulimits.
    Those user-specific limits are tied to a kernel uid and will cross container boundaries leading to hard to debug issues where one container can perform an action but all others are then unable to do the same.
  2. You want to know that should there be a way for someone in one of your containers to somehow get access to the host that they still won’t be able to access or interact with any of the other containers.

The main downsides to using this feature are:

  • It’s somewhat wasteful, using 65536 uids and gids per container.
    That being said, you’d still be able to run over 60000 isolated containers before running out of system uids and gids.
  • It’s effectively impossible to share storage between two isolated containers as everything written by one will be seen as -1 by the other. There is ongoing work around virtual filesystems in the kernel that will eventually let us get rid of that limitation.

To have a container use its own distinct map, simply run:

stgraber@castiana:~$ lxc config set test security.idmap.isolated true
stgraber@castiana:~$ lxc restart test
stgraber@castiana:~$ lxc config get test volatile.last_state.idmap
[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":65536}]

The restart step is needed to have LXD remap the entire filesystem of the container to its new map.
Note that this step will take a varying amount of time depending on the number of files in the container and the speed of your storage.

As can be seen above, after restart, the container is shown to have its own map of 65536 uids/gids.

If you want LXD to allocate more than the default 65536 uids/gids to an isolated container, you can bump the size of the allocation with:

stgraber@castiana:~$ lxc config set test security.idmap.size 200000
stgraber@castiana:~$ lxc restart test
stgraber@castiana:~$ lxc config get test volatile.last_state.idmap
[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":200000},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":200000}]

If you’re trying to allocate more uids/gids than are left in LXD’s allocation, LXD will let you know:

stgraber@castiana:~$ lxc config set test security.idmap.size 2000000000
error: Not enough uid/gid available for the container.

Direct user/group mapping

The fact that all uids/gids in an unprivileged container are mapped to a normally unused range on the host means that sharing of data between host and container is effectively impossible.

Now, what if you want to share your user’s home directory with a container?

The obvious answer to that is to define a new “disk” entry in LXD which passes your home directory to the container:

stgraber@castiana:~$ lxc config device add test home disk source=/home/stgraber path=/home/ubuntu
Device home added to test

So that was pretty easy, but did it work?

stgraber@castiana:~$ lxc exec test -- bash
root@test:~# ls -lh /home/
total 529K
drwx--x--x 45 nobody nogroup 84 Jun 14 20:06 ubuntu

No. The mount is clearly there, but it’s completely inaccessible to the container.
To fix that, we need to take a few extra steps:

  • Allow LXD’s use of our user uid and gid
  • Restart LXD to have it load the new map
  • Set a custom map for our container
  • Restart the container to have the new map apply
stgraber@castiana:~$ printf "lxd:$(id -u):1\nroot:$(id -u):1\n" | sudo tee -a /etc/subuid
lxd:201105:1
root:201105:1

stgraber@castiana:~$ printf "lxd:$(id -g):1\nroot:$(id -g):1\n" | sudo tee -a /etc/subgid
lxd:200512:1
root:200512:1

stgraber@castiana:~$ sudo systemctl restart lxd

stgraber@castiana:~$ printf "uid $(id -u) 1000\ngid $(id -g) 1000" | lxc config set test raw.idmap -

stgraber@castiana:~$ lxc restart test

At which point, things should be working in the container:

stgraber@castiana:~$ lxc exec test -- su ubuntu -l
ubuntu@test:~$ ls -lh
total 119K
drwxr-xr-x 5  ubuntu ubuntu 8 Feb 18 2016 data
drwxr-x--- 4  ubuntu ubuntu 6 Jun 13 17:05 Desktop
drwxr-xr-x 3  ubuntu ubuntu 28 Jun 13 20:09 Downloads
drwx------ 84 ubuntu ubuntu 84 Sep 14 2016 Maildir
drwxr-xr-x 4  ubuntu ubuntu 4 May 20 15:38 snap
ubuntu@test:~$ 

Conclusion

User namespaces, the kernel feature that makes those uid/gid mappings possible, is a very powerful tool which finally made containers on Linux safe by design. It is however not the easiest thing to wrap your head around and all of that uid/gid map math can quickly become a major issue.

In LXD we’ve tried to expose just enough of those underlying features to be useful to our users while doing the actual mapping math internally. This makes things like the direct user/group mapping above significantly easier than it otherwise would be.

Going forward, we’re very interested in some of the work around uid/gid remapping at the filesystem level; this would let us decouple the on-disk user/group map from that used for processes, making it possible to share data between differently mapped containers and alter the various maps without needing to also remap the entire filesystem.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Discussion forum: https://discuss.linuxcontainers.org
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
admin

Thursday June 8th, 2017

The MAAS team is happy to announce the introduction of development summaries. We hope this helps to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS, and bugs fixed in released MAAS versions.

Announcements

With the MAAS 2.2 release out of the door, we are happy to announce that:

  • MAAS 2.3 is now open for development.
  • MAAS is moving to Git in Launchpad – In the coming weeks, MAAS source will be hosted in a Git repository in Launchpad, once we complete the work of updating all our internal processes (e.g. CI, landers, etc.).

MAAS 2.3 (current development release)

With efforts now focused on the new development release, MAAS 2.3, the team has been working on the following features and improvements:

  • Started adding support for Django 1.11 – MAAS will continue to be backward compatible with Django 1.8.
  • Adding support for ‘upstream’ proxy – MAAS deployed machines will continue to use MAAS’ internal proxy, while allowing MAAS’ proxy to communicate with an upstream proxy.
  • Started adding network beaconing – New feature to support better network (subnets, VLANs) discovery and allow fabric deduplication.
    • Officially registered IPv4 and IPv6 multicast groups for MAAS beaconing (224.0.0.118 and ff02::15a, respectively).
    • Implemented a mechanism to provide authenticated encryption using the MAAS shared secret.
    • Prototyped initial beaconing multicast join mechanism and receive path.

Libmaas (python-libmaas)

With the continuous improvement of the new MAAS Python Library (python-libmaas), we have focused our efforts on the following improvements over the past week:

  • Add support for providing nested objects and object sets.
  • Add support for updating any object accessible via the library.
  • Add the ability to read interfaces (nested) under Machines, Devices, Rack Controllers and Region Controllers.
  • Add the ability to read VLANs (nested) under Fabrics.

Bug Fixes

The following issues have been fixed and backported to the MAAS 2.2 branch. These fixes will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

  • Bug #1694767: RSD composition not setting local disk tags
  • Bug #1694759: RSD Pod refresh shows ComposedNodeState is “Failed”
  • Bug #1695083: Improve NTP IP address selection for MAAS DHCP clients.

Questions?

IRC – Find us on #maas @ freenode.

ML – https://lists.ubuntu.com/mailman/listinfo/maas-devel

Read more
UbuntuTouch

[Original] A Baidu Cloud snap application

The Baidu Cloud client makes it very convenient to manage what we keep in the cloud. The source code of the application is at:

https://github.com/LiuLang/bcloud

The source for the snap version is at:

https://github.com/liu-xiao-guo/bcloud-snap

You can install it from the store with the following command:

sudo snap install bcloud --devmode --beta

Author: UbuntuTouch, posted 2017/3/21 12:41:35 (original link)

Read more
UbuntuTouch

[Original] The Chinese calendar finally has a snap version

After some effort, the Chinese calendar application can finally be installed as a snap. For various reasons we have to wait for the maintainer to upload the application to the store. The source code of the project is at:

https://launchpad.net/chinese-calendar

The source for the snap packaging is at:

https://github.com/liu-xiao-guo/chinese-calendar

If you are interested in this application, you can grab my source code, install the snapcraft development tools, and simply run the snapcraft command in the root directory of the project to build the package. After that you can install it locally.

You can install it on a 16.04+ Ubuntu Desktop with the following command:

$ sudo snap install chinese-cal

Author: UbuntuTouch, posted 2017/3/20 10:08:34 (original link)

Read more
UbuntuTouch

In a previous article, “Using the platform interface provided by ubuntu-app-platform to reduce the size of Qt applications”, we saw how a platform interface can shrink Qt applications. The underlying mechanism is content sharing. In today's tutorial we use a Python interpreter snap, built by another developer, to achieve the same thing. If you want to use the latest Python version, or want many Python applications to share a single Python installation instead of bundling a Python environment into each individual snap, the approach shown today can be used.

The full source of this Python interpreter snap is at:

https://github.com/jhenstridge/python-snap-pkg

We can get it as follows:

$ git clone https://github.com/jhenstridge/python-snap-pkg


The layout of the project is as follows:

$ tree -L 3
.
├── examples
│   └── hello-world
│       ├── hello.py
│       ├── hello.sh
│       └── snap
├── README.md
├── snap
│   └── snapcraft.yaml
└── src
    └── sitecustomize.py

The snap directory above describes how Python is shared out through the content sharing interface so that other developers can use it. The developer has already uploaded the built snap to the store. We can install it as follows:

$  snap install --edge python36-jamesh

We can then go to the examples/hello-world directory and run the following command:

$ snapcraft 
Preparing to pull hello-world 
Pulling hello-world 
Preparing to build hello-world 
Building hello-world 
Staging hello-world 
Priming hello-world 
Snapping 'hello-world' |                                                             
Snapped hello-world_0.1_all.snap

We can see the generated .snap file. We can use the following command:

$ sudo snap install --dangerous hello-world_0.1_all.snap

to install the application, and then use the following commands to connect the interface and run it:

$ snap connect hello-world:python3 python36-jamesh:python3
$ hello-world
Hello world!

We can see that our hello-world application runs successfully.

hello.py

print("Hello world!")

We can check the size of the final hello-world_0.1_all.snap file:

$ ls -alh
total 28K
drwxrwxr-x 3 liuxg liuxg 4.0K 3月   1 09:45 .
drwxrwxr-x 3 liuxg liuxg 4.0K 3月   1 09:19 ..
-rw-rw-r-- 1 liuxg liuxg   28 3月   1 09:19 .gitignore
-rw-rw-r-- 1 liuxg liuxg   22 3月   1 09:19 hello.py
-rwxrwxr-x 1 liuxg liuxg   60 3月   1 09:19 hello.sh
-rw-r--r-- 1 liuxg liuxg 4.0K 3月   1 09:39 hello-world_0.1_all.snap
drwxrwxr-x 2 liuxg liuxg 4.0K 3月   1 09:23 snap

The whole .snap file is only a tiny 4K. Compared with the earlier approach, this content sharing method clearly reduces the size of our Python applications dramatically. Of course, the shared Python package can also be used by other Python applications.

You can even install your favourite pip packages like this:

$ python36-jamesh.pip3 install --user django

The django package will be installed into the $SNAP_USER_COMMON directory of the python36-jamesh snap.





Author: UbuntuTouch, posted 2017/3/1 9:48:19 (original link)

Read more
UbuntuTouch

[Original] A Xiami Radio snap application

This is a Xiami Radio application based on an open source project. Its source code is at:

https://github.com/timxx/xmradio

The source for its snap version is at:

https://github.com/liu-xiao-guo/xmradio

The application's interface looks like this:

You can install it on a 16.04 or later desktop system with the following command:

$ sudo snap install xmradio --edge --devmode

After some improvements, you can now install it directly from the stable channel:

$ sudo snap install xmradio

Author: UbuntuTouch, posted 2017/3/16 10:05:08 (original link)

Read more
UbuntuTouch

[Original] Building your snap packages in the cloud

If your application has been developed on one architecture (x86, arm) and you would like to build it for another architecture, but you have no corresponding hardware platform to build on, what do you do? Or perhaps you keep your source code on GitHub and want a way to build it automatically and publish it to the Ubuntu Store. In today's tutorial we look at a few ways of building in the cloud.

Building on Launchpad

We can put our code on https://launchpad.net/. For example, I have already created a project of my own:

Opening the project page, near the bottom we can find a link called “Create snap package”:

Through this link we can build our project directly. We can select the architectures we need and end up with the .snap files we want.

Building with build.snapcraft.io

On the http://build.snapcraft.io/ website we can select our own repo.

After configuring it and selecting our repo, the site will build armhf and amd64 snap packages for us. It can also publish them to the Ubuntu Store.

If you are interested in this, give it a try :)

Author: UbuntuTouch, posted 2017/3/3 7:56:15 (original link)

Read more
UbuntuTouch

[Original] A GoldenDict dictionary snap application

GoldenDict is a very nice dictionary application that runs on Linux. Its source code is at:

https://github.com/goldendict/goldendict

The snap source is at:

https://github.com/liu-xiao-guo/goldendict

You can install it on a 16.04 or later system with:

sudo snap install goldendictionary

Author: UbuntuTouch, posted 2017/3/22 9:18:35 (original link)

Read more
UbuntuTouch

If we look inside the core snap of our Ubuntu installation, we find a program called xdg-open in its install directory:

/snap/core/current/usr/local/bin$ ls
apt  apt-cache  apt-get  no-apt  xdg-open

More about xdg-open can be found at https://linux.die.net/man/1/xdg-open. We can use it to open a file or URL of our choice. Now let's use it to launch something, for example a website. To do this, our snapcraft.yaml is as follows:


snapcraft.yaml


name: google
version: "1"
summary: this is a test program for launching a website using browser
description: |
     Launch google website using xdg-open
grade: stable
confinement: strict
architectures: [amd64]

apps:
   google:
     command: run.sh "http://www.google.com"
     plugs: [network, network-bind, x11, home, unity7, gsettings]

parts:
   files:
    plugin: dump
    source: scripts
    organize:
     run.sh: bin/run.sh

   integration:
    plugin: nil
    after: [desktop-gtk2]


scripts/run.sh

#!/bin/sh

PATH="$PATH:/usr/local/bin"

xdg-open $1

On my Ubuntu Desktop I install a Debian package:

$ sudo apt install snapd-xdg-open

After building and installing our snap, run:

$ google


We can see that the Google website is launched successfully.


Author: UbuntuTouch, posted 2017/3/6 13:36:38 (original link)

Read more
UbuntuTouch

[Original] A NetEase Cloud Music snap

For users who love music, a snap version of the player can now be installed on 16.04. I just gave it a try and it works quite well. The source is at:

https://github.com/liu-xiao-guo/netease-music

You can install it on a 16.04 or later system with the following command:

sudo snap install netease-music --devmode --beta

Author: UbuntuTouch, posted 2017/3/14 14:43:15 (original link)

Read more
UbuntuTouch

[Original] A SimpleScreenRecorder snap application

SimpleScreenRecorder is a utility that records your screen. The project's source code is at:

https://github.com/MaartenBaert/ssr

The snap version is at:

https://github.com/liu-xiao-guo/simplescreenrecorder

You can install it with the following command:

$ sudo snap install simplescreenrecorder

Author: UbuntuTouch, posted 2017/3/24 10:12:14 (original link)

Read more
UbuntuTouch

[Original] A Kuwo Music Box snap application

Kuwo Music Box is a music player with a very rich catalogue of music. Its source code is at:

https://github.com/LiuLang/kwplayer

Although it is no longer maintained, it is still a decent application. Its snap project is at:

https://github.com/liu-xiao-guo/kwplayer

You can install this application on a 16.04 or later system:

$ sudo snap install kwplayer --beta --devmode

Author: UbuntuTouch, posted 2017/3/17 8:39:26 (original link)

Read more
UbuntuTouch

[Original] The moonplayer snap video player

This is a video player based on an open source project. It can play videos from Youku and Tudou, and the quality is quite good. I have packaged it as a snap for your reference. The advantage of a snap package is that you do not need to install any other dependencies; installing the one package is enough. Many Debian applications out there need a lot of dependency packages installed before they run correctly, and incompatibilities between those packages can cause the installation to fail.

The application's source code is at:

https://github.com/coslyk/moonplayer

The snap packaging code is at:

https://github.com/liu-xiao-guo/moonplayer

If you have a 16.04 or later machine, you can install it directly with the following command:

sudo snap install moonplayer

Author: UbuntuTouch, posted 2017/3/15 10:27:43 (original link)

Read more