# Canonical Voices

K. Tsakalozos

## Kubernetes pre-stable releases now available with MicroK8s

If you take a look at MicroK8s’ channel information with snap info microk8s, you will see all the available Kubernetes releases:

channels:
stable: v1.14.1 2019-04-18 (522) 214MB classic
candidate: v1.14.1 2019-04-15 (522) 214MB classic
beta: v1.14.1 2019-04-15 (522) 214MB classic
edge: v1.14.1 2019-05-10 (587) 217MB classic
1.15/stable: –
1.15/candidate: –
1.15/beta: –
1.15/edge: v1.15.0-alpha.3 2019-05-08 (578) 215MB classic
1.14/stable: v1.14.1 2019-04-18 (521) 214MB classic
1.14/candidate: v1.14.1 2019-04-15 (521) 214MB classic
1.14/beta: v1.14.1 2019-04-15 (521) 214MB classic
1.14/edge: v1.14.1 2019-05-11 (590) 217MB classic
1.13/stable: v1.13.5 2019-04-22 (526) 237MB classic
1.13/candidate: v1.13.6 2019-05-09 (581) 237MB classic
1.13/beta: v1.13.6 2019-05-09 (581) 237MB classic
1.13/edge: v1.13.6 2019-05-08 (581) 237MB classic
1.12/stable: v1.12.8 2019-05-02 (547) 259MB classic
1.12/candidate: v1.12.8 2019-05-01 (547) 259MB classic
1.12/beta: v1.12.8 2019-05-01 (547) 259MB classic
1.12/edge: v1.12.8 2019-04-24 (547) 259MB classic
1.11/stable: v1.11.10 2019-05-10 (557) 258MB classic
1.11/candidate: v1.11.10 2019-05-02 (557) 258MB classic
1.11/beta: v1.11.10 2019-05-02 (557) 258MB classic
1.11/edge: v1.11.10 2019-05-01 (557) 258MB classic
1.10/stable: v1.10.13 2019-04-22 (546) 222MB classic
1.10/candidate: v1.10.13 2019-04-22 (546) 222MB classic
1.10/beta: v1.10.13 2019-04-22 (546) 222MB classic
1.10/edge: v1.10.13 2019-04-22 (546) 222MB classic
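Because snap info emits plain text, the listing above is easy to script against. Here is a small sketch; the snap_info here-string is an abbreviated copy of the output above, so the snippet runs even on a machine without snap installed:

```shell
# List the per-track stable channels from 'snap info'-style output.
# 'snap_info' is an abbreviated stand-in for the real listing above.
snap_info='stable: v1.14.1 2019-04-18 (522) 214MB classic
1.15/stable: -
1.14/stable: v1.14.1 2019-04-18 (521) 214MB classic
1.13/stable: v1.13.5 2019-04-22 (526) 237MB classic'

# Match lines whose channel name ends in '/stable' and print that name.
printf '%s\n' "$snap_info" | awk -F: '/\/stable:/ {print $1}'
```

For the sample input this prints 1.15/stable, 1.14/stable and 1.13/stable; piping the real snap info output through the same awk filter works the same way.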

If you want to follow the v1.14 Kubernetes releases you would:

sudo snap install microk8s --classic --channel=1.14/stable

Whereas if you always want to be on the latest stable release you would:

sudo snap install microk8s --classic

What is new in the channels list above are the pre-stable releases found under the 1.15 track (at the time of this writing the latest stable release is v1.14).

#### Following the pre-stable releases

We are committed to shipping MicroK8s with pre-stable releases under the following scheme.

• The edge channel (eg 1.15/edge) holds the alpha upstream releases.
• The beta channel (eg 1.15/beta) holds the beta upstream releases.
• The candidate channel (eg 1.15/candidate) holds the release candidate of upstream releases.

Pre-stable releases will be available the same day they are released upstream.

If you want to test your work against the alpha 1.15 release simply do:

sudo snap install microk8s --classic --channel=1.15/edge

However, be aware that pre-stable releases may change before the stable release. Be sure to test any work against the stable release once it becomes available.

#### Tracks with stable releases

Tracks are meant to serve specific Kubernetes releases. For example the 1.15 track with its four channels, 1.15/edge, 1.15/beta, 1.15/candidate, 1.15/stable, serves the v1.15 K8s release. As soon as a new K8s stable release is made, all channels of the corresponding track are updated. In our example, as soon as v1.15 stable is released the corresponding track channels are updated in the following way:

• The 1.15/edge channel is updated on every commit merged on the MicroK8s repository paired with the v1.15 stable K8s release.
• The 1.15/beta and 1.15/candidate channels are updated on every upstream patch release. They hold whatever the 1.15/edge channel has at the time of the patch release.
• The 1.15/stable channel gets updated with what 1.15/candidate holds a week after a new revision is put into 1.15/candidate.
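That last point can be sketched with simple date arithmetic. This is illustrative only — the exact promotion timing is up to the release process — and GNU date syntax is assumed:

```shell
# If a revision reaches 1.15/candidate on 2019-05-09, it is expected in
# 1.15/stable roughly one week later (GNU 'date -d' syntax assumed).
candidate_date="2019-05-09"
date -u -d "$candidate_date + 7 days" +%Y-%m-%d
# → 2019-05-16
```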

#### I am confused. Which channel is right for me?

The single question you need to answer is what to put in the channel argument below:

sudo snap install microk8s --classic --channel=<What_to_use_here?>

Here are some suggestions for the channel to use based on your needs:

• I want to always be on the latest stable Kubernetes.
Use --channel=latest
• I want to always be on the latest release in a specific upstream K8s release.
Use --channel=<release>/stable eg --channel=1.14/stable.
• I want to test-drive a pre-stable release.
Use --channel=<next_release>/edge for alpha releases
Use --channel=<next_release>/beta for beta releases
Use --channel=<next_release>/candidate for candidate releases
• I am waiting for a bug fix on MicroK8s:
Use --channel=<release>/edge
• I am waiting for a bug fix on upstream Kubernetes:
Use --channel=<release>/candidate
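All of the suggestions above follow the snap convention of naming channels as track/risk. The helper below is a hypothetical sketch of my own (pick_channel is not a MicroK8s or snap command); it just composes that string:

```shell
# Hypothetical helper: compose a snap channel string from a track and a
# risk level, following the <track>/<risk> convention used above.
pick_channel() {
    track="$1"; risk="$2"
    if [ "$track" = "latest" ]; then
        # 'latest' alone is enough to follow the newest stable release
        echo "latest"
    else
        echo "${track}/${risk}"
    fi
}

pick_channel 1.14 stable   # → 1.14/stable
pick_channel latest stable # → latest
```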

#### Developing K8s core services with MicroK8s

One of the purposes of pre-stable releases is to assist K8s core service developers in their task. Let’s see how we can hook a local build of kubelet to a MicroK8s deployment.

Following the build instructions for Kubernetes we:

git clone https://github.com/kubernetes/kubernetes
cd kubernetes
build/run.sh make kubelet

The kubelet binary should be available under:

_output/dockerized/bin/linux/amd64/kubelet

Let’s grab a MicroK8s deployment:

sudo snap install microk8s --classic --channel=1.15/edge

To see what arguments the kubelet is running with we:

> ps -ef | grep kubelet
root 24184 1 2 17:28 ? 00:00:54 /snap/microk8s/578/kubelet
--kubeconfig=/snap/microk8s/578/configs/kubelet.config
--cert-dir=/var/snap/microk8s/578/certs
--client-ca-file=/var/snap/microk8s/578/certs/ca.crt
--anonymous-auth=false
--network-plugin=kubenet
--root-dir=/var/snap/microk8s/common/var/lib/kubelet
--fail-swap-on=false
--pod-cidr=10.1.1.0/24
--cni-bin-dir=/snap/microk8s/578/opt/cni/bin/
--feature-gates=DevicePlugins=true
--eviction-hard=memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi
--container-runtime=remote
--container-runtime-endpoint=/var/snap/microk8s/common/run/containerd.sock
--node-labels=microk8s.io/cluster=true
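Those flags are handy to have in reusable form when starting a replacement kubelet by hand. A small sketch — flags_of is a hypothetical helper, and sample is an abbreviated stand-in for the real ps output above:

```shell
# Hypothetical helper: split a captured command line into one '--flag'
# per line, so the flags can be copied into a manual kubelet invocation.
flags_of() {
    printf '%s\n' "$1" | tr ' ' '\n' | grep -- '^--'
}

# Abbreviated stand-in for the ps output shown above.
sample='/snap/microk8s/578/kubelet --anonymous-auth=false --fail-swap-on=false'
flags_of "$sample"
# → --anonymous-auth=false
#   --fail-swap-on=false
```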

We now need to stop the kubelet that comes with MicroK8s and start our own build:

sudo systemctl stop snap.microk8s.daemon-kubelet.service
sudo _output/dockerized/bin/linux/amd64/kubelet
--kubeconfig=/snap/microk8s/578/configs/kubelet.config
--cert-dir=/var/snap/microk8s/578/certs
--client-ca-file=/var/snap/microk8s/578/certs/ca.crt
--anonymous-auth=false --network-plugin=kubenet
--root-dir=/var/snap/microk8s/common/var/lib/kubelet
--fail-swap-on=false --pod-cidr=10.1.1.0/24
--container-runtime=remote
--container-runtime-endpoint=/var/snap/microk8s/common/run/containerd.sock
--node-labels=microk8s.io/cluster=true --eviction-hard='memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi'

That’s it! Your kubelet now runs in place of the one in MicroK8s! You have to admit it is as simple as it gets.

What you should be aware of is that some microk8s commands will restart services through systemd. For example, microk8s.enable dns will initiate a restart of services, including the kubelet shipped with MicroK8s.

Happy coding!

Kubernetes pre-stable releases now available with MicroK8s was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

jdstrand

## Cloud images, qemu, cloud-init and snapd spread tests

For testing, it is useful to work with official cloud images as local VMs. Eg, when I work on snapd, I like to have different images available to work with its spread tests.

The autopkgtest package makes working with Ubuntu images quite easy:

$ sudo apt-get install qemu-kvm autopkgtest
$ autopkgtest-buildvm-ubuntu-cloud -r bionic # -a i386
# and to integrate into spread
$ mkdir -p ~/.spread/qemu
$ mv ./autopkgtest-bionic-amd64.img ~/.spread/qemu/ubuntu-18.04-64.img
# now can run any test from 'spread -list' starting with
# 'qemu:ubuntu-18.04-64:'

I found myself wanting an official Debian unstable cloud image so I could use it in spread while testing snapd. I learned it is easy enough to create the images yourself, but then I found that Debian started providing raw and qcow2 cloud images for use in OpenStack, so I started exploring how to use them and how to generalize the approach to arbitrary cloud images.

### General procedure

The basic steps are:

1. obtain a cloud image
2. make a copy of the cloud image for safekeeping
3. resize the copy
4. create a cloud-init seed file
5. boot with networking and the seed file
6. disable cloud-init
7. cleanly shutdown
8. use normally (ie, without the seed file)

In this case, I grabbed the ‘debian-testing-openstack-amd64.qcow2’ image from http://cdimage.debian.org/cdimage/openstack/testing/ and verified it. Since this is based on Debian ‘testing’ (current stable images are also available), when I copied it I named it accordingly. Eg, I knew for spread it needed to be ‘debian-sid-64.img’ so I did:

$ cp ./debian-testing-openstack-amd64.qcow2 ./debian-sid-64.img

I then resized it. I picked 20G since I recalled that is what autopkgtest uses:

$ qemu-img resize debian-sid-64.img 20G

These images are already set up for cloud-init, so I created a cloud-init data file (note, the ‘#cloud-config’ comment at the top is important):

$ cat ./debian-data
#cloud-config
password: debian
chpasswd: { expire: false }
ssh_pwauth: true

and a cloud-init meta-data file:

$ cat ./debian-meta-data
instance-id: i-debian-sid-64
local-hostname: debian-sid-64

and fed that into cloud-localds to create a seed file:

$ cloud-localds -v ./debian-seed.img ./debian-data ./debian-meta-data

Then start the image with:

$ kvm -M pc -m 1024 -smp 1 -monitor pty -nographic -hda ./debian-sid-64.img -drive "file=./debian-seed.img,if=virtio,format=raw" -net nic -net user,hostfwd=tcp:127.0.0.1:59355-:22

(I’m using the invocation that is reminiscent of how spread invokes it; feel free to use a virtio invocation as described by Scott Moser if that better suits your environment.)

Here, the “59355” can be any unused high port. The idea is after the image boots, you can login with ssh using:

$ ssh -p 59355 debian@127.0.0.1

Once logged in, perform any updates, etc that you want in place when tests are run, then disable cloud-init for the next boot and cleanly shutdown with:

$ sudo touch /etc/cloud/cloud-init.disabled
$ sudo shutdown -h now

The above is the generalized procedure, which can hopefully be adapted for other distros that provide cloud images, etc. For integrating into spread, just copy the image to ‘~/.spread/qemu’, naming it how spread expects. spread will use ‘-snapshot’ with the VM as part of its tests, so if you want to update the images later (since they might be out of date), omit the seed file (and optionally ‘-net nic -net user,hostfwd=tcp:127.0.0.1:59355-:22’ if you don’t need port forwarding), and use:

$ kvm -M pc -m 1024 -smp 1 -monitor pty -nographic -hda ./debian-sid-64.img
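One step of the procedure that is worth sanity-checking in automation is the user-data header: as noted above, the ‘#cloud-config’ comment at the top is important. A small sketch that recreates the debian-data file from above and checks it before building the seed:

```shell
# Recreate the user-data file used above, then verify its first line is
# exactly '#cloud-config' before feeding it to cloud-localds.
printf '%s\n' '#cloud-config' \
              'password: debian' \
              'chpasswd: { expire: false }' \
              'ssh_pwauth: true' > debian-data

if head -n 1 debian-data | grep -qx '#cloud-config'; then
    echo "user-data ok"
else
    echo "user-data missing #cloud-config header" >&2
fi
```

For the file as written, this prints “user-data ok”.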

UPDATE 2019-04-23: the above is confirmed to work with Fedora 28 and 29 (though, if using the resulting image to test snapd, be sure to configure the password as ‘fedora’ and then be sure to ‘yum update ; yum install kernel-modules nc strace’ in the image).

UPDATE 2019-04-22: the above is confirmed to work with CentOS 7 (though, if using the resulting image to test snapd, be sure to configure the password as ‘centos’ and then be sure to ‘yum update ; yum install epel-release ; yum install golang nc strace’ in the image).
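For those Fedora and CentOS runs, the only change to the general procedure’s user-data is the password value. A sketch of the adapted seed data (my adaptation, not from the original post):

```shell
# Same user-data as the Debian example above, but with the password the
# snapd tests expect on Fedora images (use 'centos' for CentOS images).
cat > fedora-data <<'EOF'
#cloud-config
password: fedora
chpasswd: { expire: false }
ssh_pwauth: true
EOF

grep '^password:' fedora-data
# → password: fedora
```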

### Extra steps for Debian cloud images without default e1000 networking

Unfortunately, for the Debian cloud images there were additional steps, because spread doesn’t use virtio but instead the default e1000 driver, and the Debian cloud kernel doesn’t include it:

$ grep E1000 /boot/config-4.19.0-4-cloud-amd64
# CONFIG_E1000 is not set
# CONFIG_E1000E is not set

So… when the machine booted, there was no networking. To adjust for this, I blew away the image, copied from the safely kept downloaded image, resized it, then started it with:

$ kvm -M pc -m 1024 -smp 1 -monitor pty -nographic -hda $HOME/.spread/qemu/debian-sid-64.img -drive "file=$HOME/.spread/qemu/debian-seed.img,if=virtio,format=raw" -device virtio-net-pci,netdev=eth0 -netdev type=user,id=eth0

This allowed the VM to start with networking, at which point I adjusted /etc/apt/sources.list to refer to ‘sid’ instead of ‘buster’, then ran apt-get update and apt-get dist-upgrade to upgrade to sid. I then installed the Debian distro kernel with:

$ sudo apt-get install linux-image-amd64

Then uninstalled the currently running kernel with:

$ sudo apt-get remove --purge linux-image-cloud-amd64 linux-image-4.19.0-4-cloud-amd64

(I used ‘dpkg -l | grep linux-image’ to see the cloud kernels I wanted to remove). Removing the package that provides the currently running kernel is a dangerous operation for most systems, so there is a scary message to abort the operation. In our case, it isn’t so scary (we can just try again ;) and this is exactly what we want to do.

Next I cleanly shutdown the VM with:

$ sudo shutdown -h now

and tried to start it again as in the ‘general procedure’ above (I’m keeping the seed file here because I want cloud-init to be re-run with the e1000 driver):

$ kvm -M pc -m 1024 -smp 1 -monitor pty -nographic -hda ./debian-sid-64.img -drive "file=./debian-seed.img,if=virtio,format=raw" -net nic -net user,hostfwd=tcp:127.0.0.1:59355-:22

Now I try to login via ssh:
$ ssh -p 59355 debian@127.0.0.1
...
debian@127.0.0.1's password:
...
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Apr 16 16:13:15 2019
debian@debian:~$ sudo touch /etc/cloud/cloud-init.disabled
debian@debian:~$ sudo shutdown -h now
Connection to 127.0.0.1 closed.

While this VM is no longer the official cloud image, it is still using the Debian distro kernel and Debian archive, which is good enough for my purposes, and at this point I’m ready to use this VM in my testing (eg, for me, copy ‘debian-sid-64.img’ to ‘~/.spread/qemu’).

abeato

## Porting Ubuntu Core 18 to nvidia Jetson TX1 Developer Kit

Ubuntu Core (UC) is Canonical’s take on the IoT space. There are pre-built images for officially supported devices, like the Raspberry Pi or Intel NUCs, but if we have something else and there is no community port, we need to create the UC image ourselves. High-level instructions on how to do this are found in the official docs. The process is straightforward once we have two critical components: the kernel snap and the gadget snap.

Creating these snaps is not necessarily complex, but there can be bumps in the road if you are new to the task. In this post I explain how I created them for the Jetson TX1 developer kit board, and how they were used to create a UC image for said device, hoping this will provide new tricks to hackers working on ports for other devices. All the sources for the snaps and the build scripts are available in github:

https://github.com/alfonsosanchezbeato/jetson-kernel-snap
https://github.com/alfonsosanchezbeato/jetson-gadget-snap
https://github.com/alfonsosanchezbeato/jetson-ubuntu-core

So, let’s start with…

## The kernel snap

The Linux kernel that we will use needs some kernel configuration options to be activated, and it is also especially important that it has a modern version of apparmor so snaps can be properly confined. The official Jetson kernel is the 4.4 release, which is quite old, but fortunately Canonical has a reference 4.4 kernel with all the needed patches for snaps backported. Knowing this, we are a git format-patch command away from obtaining the patches we will use on top of the nvidia kernel.
The patches also include files with the configuration options that we need for snaps, plus some changes so the snap could be successfully compiled on an Ubuntu 18.04 desktop.

Once we have the sources, we need, of course, to create a snapcraft.yaml file that describes how to build the kernel snap. We will walk through it, highlighting the parts more specific to the Jetson device.

Starting with the kernel part, it turns out that we cannot easily use the kernel plugin, due to the special way in which the kernel needs to be built: nvidia distributes part of the needed drivers in repositories separate from the one used by the main kernel tree. Therefore, I resorted to using the nil plugin so I could hand-write the commands to do the build. The pull stage that resulted is

override-pull: |
  snapcraftctl pull
  # Get kernel sources, which are distributed across different repos
  ./source_sync.sh -k tegra-l4t-r28.2.1
  # Apply canonical patches - apparmor stuff essentially
  cd sources/kernel/display
  git am ../../../patch-display/*
  cd -
  cd sources/kernel/kernel-4.4
  git am ../../../patch/*

which runs a script to retrieve the sources (I pulled this script from nvidia’s Linux for Tegra -L4T- distribution) and applies the Canonical patches.

The build stage is a few more lines, so I decided to use an external script to implement it. We will now analyze parts of it. For the kernel configuration we add all the necessary Ubuntu bits:

make "$JETSON_KERNEL_CONFIG" \
    snappy/containers.config \
    snappy/generic.config \
    snappy/security.config \
    snappy/snappy.config \
    snappy/systemd.config

Then, to do the build we run

make -j"$num_cpu" Image modules dtbs

An interesting catch here is that zImage files are not supported due to the lack of a decompressor implementation in the arm64 kernel, so we have to build an uncompressed Image instead.

After some code that stages the built files so they are included in the snap later, we retrieve the initramfs from the core snap. This step is usually hidden from us by the kernel plugin, but this time we have to code it ourselves:

# Get initramfs from core snap, which we need to download
core_url=$(curl -s -H "X-Ubuntu-Series: 16" -H "X-Ubuntu-Architecture: arm64" \
        "https://search.apps.ubuntu.com/api/v1/snaps/details/core?channel=stable" \
    | jq -r ".anon_download_url")
curl -L "$core_url" > core.snap
# Glob so we get both link and regular file
unsquashfs core.snap "boot/initrd.img-core*"
cp squashfs-root/boot/initrd.img-core "$SNAPCRAFT_PART_INSTALL"/initrd.img
ln "$SNAPCRAFT_PART_INSTALL"/initrd.img "$SNAPCRAFT_PART_INSTALL"/initrd-"$KERNEL_RELEASE".img

Moving back to the snapcraft recipe, we also have an initramfs part, which takes care of making some changes to the default initramfs shipped by UC:

initramfs:
  after: [ kernel ]
  plugin: nil
  source: ../initramfs
  override-build: |
    find . | cpio --quiet -o -H newc | lzma >> "$SNAPCRAFT_STAGE"/initrd.img

Here we are taking advantage of the fact that the initramfs can be built as a concatenation of compressed cpio archives. When the kernel decompresses it, files included in the later archives overwrite files from the earlier ones, which lets us easily modify files in the initramfs without having to change the one shipped with core. The change that we make here is a modification to the resize script that allows UC to claim all the free space on the disk on first boot. The modification makes sure this also happens in the case where the partition already takes all the available space but the filesystem does not. We can remove this modification once these changes reach the core snap, which will happen eventually.

The last part of this snap is the firmware part:

firmware:
  plugin: nil
  override-build: |
    set -xe
    wget https://developer.nvidia.com/embedded/dlc/l4t-jetson-tx1-driver-package-28-2-ga -O Tegra210_Linux_R28.2.0_aarch64.tbz2
    tar xf Tegra210_Linux_R28.2.0_aarch64.tbz2 Linux_for_Tegra/nv_tegra/nvidia_drivers.tbz2
    tar xf Linux_for_Tegra/nv_tegra/nvidia_drivers.tbz2 lib/firmware/
    cd lib; cp -r firmware/ "$SNAPCRAFT_PART_INSTALL"
    mkdir -p "$SNAPCRAFT_PART_INSTALL"/firmware/gm20b
    cd "$SNAPCRAFT_PART_INSTALL"/firmware/gm20b
    ln -sf "../tegra21x/acr_ucode.bin" "acr_ucode.bin"
    ln -sf "../tegra21x/gpmu_ucode.bin" "gpmu_ucode.bin"
    ln -sf "../tegra21x/gpmu_ucode_desc.bin" "gpmu_ucode_desc.bin"
    ln -sf "../tegra21x/gpmu_ucode_image.bin" "gpmu_ucode_image.bin"
    ln -sf "../tegra21x/gpu2cde.bin" "gpu2cde.bin"
    ln -sf "../tegra21x/NETB_img.bin" "NETB_img.bin"
    ln -sf "../tegra21x/fecs_sig.bin" "fecs_sig.bin"
    ln -sf "../tegra21x/pmu_sig.bin" "pmu_sig.bin"
    ln -sf "../tegra21x/pmu_bl.bin" "pmu_bl.bin"
    ln -sf "../tegra21x/fecs.bin" "fecs.bin"
    ln -sf "../tegra21x/gpccs.bin" "gpccs.bin"

Here we download some files so we can add firmware blobs to the snap. These files come separately from the nvidia kernel sources.

So this is it for the kernel snap; now you just need to follow the instructions to get it built.

## The gadget snap

Time now to take a look at the gadget snap. First, I recommend starting with ogra’s great post on gadget snaps for devices with the u-boot bootloader before going through this section. Now, same as for the kernel snap, we will go through the different parts that are defined in the snapcraft.yaml file. The first one builds the u-boot binary:

uboot:
  plugin: nil
  source: git://nv-tegra.nvidia.com/3rdparty/u-boot.git
  source-type: git
  source-tag: tegra-l4t-r28.2
  override-pull: |
    snapcraftctl pull
    # Apply UC patches + bug fixes
    git am ../../../uboot-patch/*.patch
  override-build: |
    export ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
    make p2371-2180_defconfig
    nice make -j$(nproc)
    cp "$SNAPCRAFT_PART_BUILD"/u-boot.bin "$SNAPCRAFT_PART_INSTALL"/

We decided again to use the nil plugin as we need to do some special quirks. The sources are pulled from nvidia’s u-boot repository, but we apply some patches on top. These patches, along with the uboot environment, provide

• Support for the revert functionality in case a core or kernel snap installation goes wrong
• Bug fixes for u-boot’s ext4 subsystem – required because the just mentioned revert functionality needs to call u-boot’s command saveenv, which happened to be broken for ext4 filesystems in tegra’s u-boot

More information on the specifics of u-boot patches for UC can be found in this great blog post.

The only other part that the snap has is uboot-env:

uboot-env:
  plugin: nil
  source: uboot-env
  override-build: |
    mkenvimage -r -s 131072 -o uboot.env uboot.env.in
    cp "$SNAPCRAFT_PART_BUILD"/uboot.env "$SNAPCRAFT_PART_INSTALL"/
    # Link needed for ubuntu-image to work properly
    cd "$SNAPCRAFT_PART_INSTALL"/; ln -s uboot.env uboot.conf
  build-packages:
    - u-boot-tools

This simply encodes the uboot.env.in file into a format readable by u-boot. The resulting file, uboot.env, is included in the snap. This environment is where most of the support for UC is encoded. I will not delve too much into the details, but I just want to mention the variables that usually need to be edited for new devices:

• devnum, partition, and devtype to set the system boot partition, from which we load the kernel and initramfs
• fdtfile, fdt_addr_r, and fdt_high to determine the name of the device tree and where in memory it should be loaded
• ramdisk_addr_r and initrd_high to set the loading location for the initramfs
• kernel_addr_r to set where the kernel needs to be loaded
• args, which contains the kernel arguments and needs to be adapted to the device specifics
• Finally, for this device, snappy_boot was changed so it uses booti instead of bootz, as we could not use a compressed kernel as explained above

Besides the snapcraft recipe, the other mandatory file when defining a gadget snap is the gadget.yaml file. This file defines, among other things, the image partitioning layout.
There is more to it, but in this case, partitioning is the only thing we have defined:

volumes:
  jetson:
    bootloader: u-boot
    schema: gpt
    structure:
      - name: system-boot
        role: system-boot
        type: 0FC63DAF-8483-4772-8E79-3D69D8477DE4
        filesystem: ext4
        filesystem-label: system-boot
        offset: 17408
        size: 67108864
      - name: TBC
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 2097152
      - name: EBT
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 4194304
      - name: BPF
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 2097152
      - name: WB0
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 6291456
      - name: RP1
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 4194304
      - name: TOS
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 6291456
      - name: EKS
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 2097152
      - name: FX
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 2097152
      - name: BMP
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 134217728
      - name: SOS
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 20971520
      - name: EXI
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 67108864
      - name: LNX
        type: 0FC63DAF-8483-4772-8E79-3D69D8477DE4
        size: 67108864
        content:
          - image: u-boot.bin
      - name: DTB
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 4194304
      - name: NXT
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 2097152
      - name: MXB
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 6291456
      - name: MXP
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 6291456
      - name: USP
        type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
        size: 2097152

The Jetson TX1 has a complex partitioning layout, with many partitions allocated for the first stage bootloader, and many others that are undocumented. So, to minimize the risk of touching a critical partition, I preferred to keep most of them untouched and make only the minimal changes needed to fit UC onto the device. Therefore, the gadget.yaml volumes entry mainly describes the TX1 defaults, with the main differences compared to the original being:

1. The APP partition is renamed to system-boot and reduced to only 64MB. It will contain the uboot environment file plus the kernel and initramfs, as usual in UC systems with the u-boot bootloader.
2. The LNX partition will contain our u-boot binary.
3. If a partition with role: system-data is not defined explicitly (which is the case here), a partition with such a role and with label “writable” is implicitly defined at the end of the volume. This will take all the available space freed by the reduction of the APP partition, and will contain the UC root filesystem. It replaces the UDA partition that is the last one in nvidia’s partitioning scheme.

Now, it is time to build the gadget snap by following the repository instructions.

## Building & flashing the image

Now that we have the snaps, it is time to build the image. There is not much to it: you just need an Ubuntu One account and to follow the instructions to create a key to be able to sign a model assertion. With that, just follow the README.md file in the jetson-ubuntu-core repository. You can also download the latest tarball from the repository if you prefer.

The build script will generate not only a full image file, but also a tarball that contains separate files for each partition that needs to be flashed on the device. This is needed because unfortunately there is no way to fully flash the Jetson device with a GPT image; instead, we can flash only individual partitions with the tools nvidia provides.

Once the build finishes, we can take the resulting tarball and follow the instructions to get the necessary partitions flashed. As can be read there, we have to download the nvidia L4T package. Also, note that to be able to change the partition sizes and files to flash, a couple of patches have to be applied on top of the L4T scripts.

## Summary

After this, you should have a working Ubuntu Core 18 device.
You can use the serial port or an external monitor to configure it with your launchpad account so you can ssh into it. Enjoy!

K. Tsakalozos

## MicroK8s in the Wild

As the popularity of MicroK8s grows, I would like to take the time to mention some projects that use this micro Kubernetes distribution. But before that, let me do some introductions. For those unfamiliar with it, Kubernetes is an open source container orchestrator: you describe how your application should be deployed, upgraded and provisioned, and Kubernetes makes it happen. This is one of the rare occasions where all the major players (Google, Microsoft, IBM, Amazon etc) have flocked around a single framework, making it an unofficial standard.

MicroK8s is a distribution of Kubernetes. It is a snap package that sets up a Kubernetes cluster on your machine. You can have a Kubernetes cluster for local development, CI/CD, or just for getting to know Kubernetes with just a:

sudo snap install microk8s --classic

If you are on a Mac or Windows you will need a Linux VM. In what follows you will find some examples of how people are using MicroK8s. Note that this is not a complete list of MicroK8s usages; it is just some efforts I happen to be aware of.

#### Spring Cloud Kubernetes

This project uses CircleCI for CI/CD. MicroK8s provides a local Kubernetes cluster where integration tests are run. The addons enabled are dns, the docker registry and Istio. The integration tests need to plug into the Kubernetes cluster using the kubeconfig file and the socket to dockerd. This work was introduced in this Pull Request (thanks George) and it gave us the incentive to add a microk8s.status command that would wait for the cluster to come online. For example, we can wait up to 5 minutes for MicroK8s to come up with:

microk8s.status --wait-ready --timeout=300

#### OpenFaaS on MicroK8s

It was at this year’s Config Management Camp that I met Joe McCobe, the author of “Deploy OpenFaaS with MicroK8s”. I will just repeat his words: “was blown away by the speed and ease with which I could get a basic lab environment up and running”.

#### What about Kubeless?

It seems the ease of deploying MicroK8s goes well with the ease of software development of serverless frameworks. Users of Kubeless are also kicking the tires on MicroK8s. Have a look at “Files upload from Kubeless on MicroK8s to Minio” and “Serverless MicroK8s Kubernetes.”

#### SUSE Cloud Application Platform (CAP) on MicroK8s

In his blog post, Dimitris describes in detail all the configuration he had to do to get the software from SUSE to run on MicroK8s. The most interesting part is the motivation behind this effort. As he says, “… MicroK8s… use your machine’s resources without you having to decide on a VM size beforehand.” As he explained to me, his application puts significant memory pressure only during bootstrap. MicroK8s enabled him to reclaim the unused memory after the initialization phase.

#### Kubeflow

Kubeflow is the missing link between Kubernetes and AI/ML. Canonical is actively involved in this, so… you should definitely check it out. Sure, I am biased, but let me tell you a true story. I have a friend who was given three machines to deploy Tensorflow and run some experiments. She did not have any prior experience at the time, so… none of the three node clusters were set up in exactly the same way. There was always something off. This head-scratching situation is just one reason to use Kubeflow.

#### Transcrobes

Transcrobes comes from an active member of the MicroK8s community. It serves as a language learning aid. “The system knows what you know, so can give you just the right amount of help to be able to understand the words you don’t know but gets out of the way for the stuff you do know.” Here MicroK8s is used for quick prototyping. We wish you all the best, Anton, good luck!
#### Summing Up We have seen a number of interesting use cases that include CI/CD, Serverless programming, lab setup, rapid prototyping and application development. If you have a MicroK8s use case do let us know. Come and say hi at #microk8s on the Kubernetes slack and/or issue a Pull Request against our MicroK8s In The Wild page. #### References MicroK8s in the Wild was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story. Read more jdstrand ## Monitoring your snaps for security updates Some time ago we started alerting publishers when their stage-packages received a security update since the last time they built a snap. We wanted to create the right balance for the alerts and so the service currently will only alert you when there are new security updates against your stage-packages. In this manner, you can choose not to rebuild your snap (eg, since it doesn’t use the affected functionality of the vulnerable package) and not be nagged every day that you are out of date. As nice as that is, sometimes you want to check these things yourself or perhaps hook the alerts into some form of automation or tool. While the review-tools had all of the pieces so you could do this, it wasn’t as straightforward as it could be. Now with the latest stable revision of the review-tools, this is easy:$ sudo snap install review-tools
$ review-tools.check-notices \
  ~/snap/review-tools/common/review-tools_656.snap
{'review-tools': {'656': {'libapt-inst2.0': ['3863-1'],
                          'libapt-pkg5.0': ['3863-1'],
                          'libssl1.0.0': ['3840-1'],
                          'openssl': ['3840-1'],
                          'python3-lxml': ['3841-1']}}}

The review-tools are a strict mode snap, and while it plugs the home interface, that is only for convenience, so I typically disconnect the interface and put things in its SNAP_USER_COMMON directory, like I did above.

Since it is now super easy to check a snap on disk, with a little scripting and a cron job you can generate a machine-readable report whenever you want. E.g., you can do something like the following:

$ cat ~/bin/check-snaps
#!/bin/sh
set -e

snaps="review-tools/stable rsync-jdstrand/edge"

tmpdir=$(mktemp -d -p "$HOME/snap/review-tools/common")
cleanup() {
    rm -fr "$tmpdir"
}
trap cleanup EXIT HUP INT QUIT TERM

cd "$tmpdir" || exit 1
for i in $snaps ; do
    snap=$(echo "$i" | cut -d '/' -f 1)
    channel=$(echo "$i" | cut -d '/' -f 2)
    snap download "$snap" "--$channel" >/dev/null
done
cd - >/dev/null || exit 1

/snap/bin/review-tools.check-notices "$tmpdir"/*.snap

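The report that check-notices prints (shown earlier) is plain Python/JSON-style data, so it is easy to post-process in automation. Below is a minimal sketch — not part of the review-tools themselves — that flattens such a report into the set of USNs affecting each snap; the sample data and the `needs_rebuild` helper name are taken from / invented for this illustration:

```python
# Sketch: summarise a review-tools.check-notices report, which maps
# snap -> revision -> {stage-package: [USN ids]}, into a flat list of
# USNs per snap. Sample data copied from the output shown above.
report = {
    "review-tools": {
        "656": {
            "libapt-inst2.0": ["3863-1"],
            "libapt-pkg5.0": ["3863-1"],
            "libssl1.0.0": ["3840-1"],
            "openssl": ["3840-1"],
            "python3-lxml": ["3841-1"],
        }
    }
}

def needs_rebuild(report):
    """Return {snap: sorted USN ids} for snaps with pending security updates."""
    out = {}
    for snap, revisions in report.items():
        usns = {usn
                for pkgs in revisions.values()
                for ids in pkgs.values()
                for usn in ids}
        if usns:
            out[snap] = sorted(usns)
    return out

print(needs_rebuild(report))
# {'review-tools': ['3840-1', '3841-1', '3863-1']}
```

A cron job could feed the output of check-notices into something like this and only mail you when the result is non-empty.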
or, if you already have the snaps on disk somewhere, just do:

$ /snap/bin/review-tools.check-notices /path/to/snaps/*.snap

Now you can add the above to cron or some automation tool as a reminder of what needs updates. Enjoy!

Read more

K. Tsakalozos

## MicroK8s on MacOS

MicroK8s is a local deployment of Kubernetes. Let’s skip all the technical details and just accept that Kubernetes does not run natively on MacOS or Windows. You may be thinking, “I have seen Kubernetes running on a MacOS laptop, what kind of sorcery was that?” It’s simple: Kubernetes was running inside a VM. You might not see the VM, and it might not even be a full-blown virtual system, but some level of virtualisation is there. This is exactly what we will show here. We will set up a VM and install MicroK8s inside it. After the installation we will discuss how to use the in-VM Kubernetes.

### A multipass VM on MacOS

Arguably the easiest way to get an Ubuntu VM on MacOS is with multipass. Head to the releases page and grab the latest package. Installing it is as simple as double-clicking on the .pkg file. To start a VM with MicroK8s we:

multipass launch --name microk8s-vm --mem 4G --disk 40G
multipass exec microk8s-vm -- sudo snap install microk8s --classic
multipass exec microk8s-vm -- sudo iptables -P FORWARD ACCEPT

Make sure you reserve enough resources to host your deployments; above, we got 4GB of RAM and 40GB of hard disk. We also make sure packets to/from the pod network interface can be forwarded to/from the default interface. Our VM has an IP that you can check with:

> multipass list
Name          State      IPv4             Release
microk8s-vm   RUNNING    10.72.145.216    Ubuntu 18.04 LTS

Take a note of this IP since our services will become available there.
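If you want to script against the VM, you can scrape that IP straight out of the `multipass list` table. A minimal Python sketch (the column order — Name, State, IPv4, Release — is taken from the output above; the `vm_ip` helper is invented for this illustration, and in a real script you would capture the listing with `subprocess`):

```python
# Sketch: pull a VM's IPv4 address out of `multipass list` output.
# Sample output copied from the article.
listing = """\
Name          State      IPv4             Release
microk8s-vm   RUNNING    10.72.145.216    Ubuntu 18.04 LTS
"""

def vm_ip(listing, name):
    """Return the IPv4 column for the named VM, or raise LookupError."""
    for line in listing.splitlines()[1:]:      # skip the header row
        fields = line.split()
        if fields and fields[0] == name:
            return fields[2]                   # Name, State, IPv4, Release
    raise LookupError(f"no VM named {name}")

print(vm_ip(listing, "microk8s-vm"))  # 10.72.145.216
```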
Other multipass commands you may find handy:

• Get a shell inside the VM: multipass shell microk8s-vm
• Shut down the VM: multipass stop microk8s-vm
• Delete and clean up the VM: multipass delete microk8s-vm; multipass purge

### Using MicroK8s

To run a command in the VM we can get a multipass shell with:

multipass shell microk8s-vm

To execute a command without getting a shell we can use multipass exec, like so:

multipass exec microk8s-vm -- /snap/bin/microk8s.status

A third way to interact with MicroK8s is via the Kubernetes API server listening on port 8080 of the VM. We can use MicroK8s’ kubeconfig file with a local installation of kubectl to access the in-VM Kubernetes. Here is how:

multipass exec microk8s-vm -- /snap/bin/microk8s.config > kubeconfig

Install kubectl on the host machine and then use the kubeconfig:

kubectl --kubeconfig=kubeconfig get all --all-namespaces

#### Accessing in-VM services — Enabling addons

Let’s first enable dns and the dashboard. In the rest of this blog we will be showing different methods of accessing Grafana:

multipass exec microk8s-vm -- /snap/bin/microk8s.enable dns dashboard

We check the deployment progress with:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get all --all-namespaces

After all services are running we can proceed to look at how to access the dashboard.

#### Accessing in-VM services — Use the Kubernetes API proxy

The API server is on port 8080 of our VM. Let’s see what the proxy path looks like:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl cluster-info
...
Grafana is running at http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
...
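Since cluster-info prints the proxy path relative to 127.0.0.1 inside the VM, reaching the service from the MacOS host is just a matter of substituting the VM's IP into that URL. A small sketch using the addresses from this article:

```python
# Sketch: turn the in-VM proxy URL printed by `cluster-info` into one
# reachable from the host. URL and IP copied from the example above.
grafana_url = ("http://127.0.0.1:8080/api/v1/namespaces/kube-system"
               "/services/monitoring-grafana/proxy")
vm_ip = "10.72.145.216"  # from `multipass list`

external_url = grafana_url.replace("127.0.0.1", vm_ip)
print(external_url)
# http://10.72.145.216:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
```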
By replacing 127.0.0.1 with the VM’s IP, 10.72.145.216 in this case, we can reach our service at:

http://10.72.145.216:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy

#### Accessing in-VM services — Setup a proxy

In a very similar fashion to what we just did above, we can ask Kubernetes to create a proxy for us. We need the proxy to be available on all interfaces and to accept connections from everywhere, so that the host can reach it:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl proxy --address='0.0.0.0' --accept-hosts='.*'
Starting to serve on [::]:8001

Leave the terminal with the proxy open. Again, replacing 127.0.0.1 with the VM’s IP, we reach the dashboard through:

http://10.72.145.216:8001/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy

Make sure you go through the official docs on constructing the proxy paths.

#### Accessing in-VM services — Use a NodePort service

We can expose our service on a port of the VM and access it from there. This approach uses the NodePort service type. We start by spotting the deployment we want to expose:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get deployment -n kube-system | grep grafana
monitoring-influxdb-grafana-v4   1   1   1   1   22h

Then we create the NodePort service:

multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl expose deployment.apps/monitoring-influxdb-grafana-v4 -n kube-system --type=NodePort

We now have a port for the Grafana service:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get services -n kube-system | grep NodePort
monitoring-influxdb-grafana-v4   NodePort   10.152.183.188   <none>   8083:32580/TCP,8086:32152/TCP,3000:32720/TCP   13m

Grafana is on port 3000, mapped here to 32720. This port is randomly selected, so it may vary for you. In our case, the service is available on 10.72.145.216:32720.

### Conclusions

MicroK8s on MacOS (or Windows) will need a VM to run.
This is no different than any other local Kubernetes solution, and it comes with some nice benefits. The VM gives you an extra layer of isolation. Instead of using your host and potentially exposing the Kubernetes services to the outside world, you have full control of what others can see. Of course, this isolation comes with some extra administrative overhead that may be unnecessary for a dev environment. Give it a try and tell us what you think!

### Links

CanonicalLtd/multipass

MicroK8s on MacOS was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

K. Tsakalozos

## How to Inspect the Configuration of a Kubernetes Node

Looking at the configuration of a Kubernetes node sounds like a simple thing, yet it is not so obvious. The arguments kubelet takes come either as command line parameters or from a configuration file you pass with --config. It seems straightforward to do a ps -ef | grep kubelet and look in the file you see after the --config parameter. Simple, right? But… are you sure you got all the arguments right? What if Kubernetes defaulted to a value you did not want? What if you do not have shell access to a node? There is a way to query the Kubernetes API for the configuration a node is running with: api/v1/nodes/<node_name>/proxy/configz. Let’s see this in a real deployment.

#### Deploy a Kubernetes Cluster

I am using the Canonical Distribution of Kubernetes (CDK) on AWS here, but you can use whichever cloud and Kubernetes installation method you like.

juju bootstrap aws
juju deploy canonical-kubernetes

…and wait for the deployment to finish:

watch juju status

#### Change a Configuration

CDK allows for configuring both the command line arguments and the extra arguments of the config file.
Here we add arguments to the config file:

juju config kubernetes-worker kubelet-extra-config='{imageGCHighThresholdPercent: 60, imageGCLowThresholdPercent: 39}'

A great question is how we got the imageGCHighThresholdPercent literal. At the time of this writing, the official upstream docs point you to the type definitions; a rather ugly approach. There is an EvictionHard property in the type definitions; however, if you look at the example in the upstream docs, you see the same property written in lowercase.

#### Check the Configuration

We will need two shells. On the first one we will start the API proxy, and on the second we will query the API. On the first shell:

juju ssh kubernetes-master/0
kubectl proxy

Now that we have the proxy at 127.0.0.1:8001 on the kubernetes-master, use a second shell to get a node name and query the API:

juju ssh kubernetes-master/0
kubectl get no
curl -sSL "http://localhost:8001/api/v1/nodes/<node_name>/proxy/configz" | python3 -m json.tool

Here is a full run:

juju ssh kubernetes-master/0
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1023-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Mon Oct 22 10:40:40 UTC 2018

  System load:  0.11               Processes:              115
  Usage of /:   13.7% of 15.45GB   Users logged in:        1
  Memory usage: 20%                IP address for ens5:    172.31.0.48
  Swap usage:   0%                 IP address for fan-252: 252.0.48.1

 Get cloud support with Ubuntu Advantage Cloud Guest:
   http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

Last login: Mon Oct 22 10:38:14 2018 from 2.86.54.15
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@ip-172-31-0-48:~$ kubectl get no
NAME STATUS ROLES AGE VERSION
ubuntu@ip-172-31-0-48:~$ curl -sSL "http://localhost:8001/api/v1/nodes/ip-172-31-14-174/proxy/configz" | python3 -m json.tool
{
    "kubeletconfig": {
        "syncFrequency": "1m0s",
        "fileCheckFrequency": "20s",
        "httpCheckFrequency": "20s",
        "address": "0.0.0.0",
        "port": 10250,
        "tlsCertFile": "/root/cdk/server.crt",
        "tlsPrivateKeyFile": "/root/cdk/server.key",
        "authentication": {
            "x509": {
                "clientCAFile": "/root/cdk/ca.crt"
            },
            "webhook": {
                "enabled": true,
                "cacheTTL": "2m0s"
            },
            "anonymous": {
                "enabled": false
            }
        },
        "authorization": {
            "mode": "Webhook",
            "webhook": {
                "cacheAuthorizedTTL": "5m0s",
                "cacheUnauthorizedTTL": "30s"
            }
        },
        "registryPullQPS": 5,
        "registryBurst": 10,
        "eventRecordQPS": 5,
        "eventBurst": 10,
        "enableDebuggingHandlers": true,
        "healthzPort": 10248,
        "healthzBindAddress": "127.0.0.1",
        "oomScoreAdj": -999,
        "clusterDomain": "cluster.local",
        "clusterDNS": [
            "10.152.183.93"
        ],
        "streamingConnectionIdleTimeout": "4h0m0s",
        "nodeStatusUpdateFrequency": "10s",
        "nodeLeaseDurationSeconds": 40,
        "imageMinimumGCAge": "2m0s",
        "imageGCHighThresholdPercent": 60,
        "imageGCLowThresholdPercent": 39,
        "volumeStatsAggPeriod": "1m0s",
        "cgroupsPerQOS": true,
        "cgroupDriver": "cgroupfs",
        "cpuManagerPolicy": "none",
        "cpuManagerReconcilePeriod": "10s",
        "runtimeRequestTimeout": "2m0s",
        "hairpinMode": "promiscuous-bridge",
        "maxPods": 110,
        "podPidsLimit": -1,
        "resolvConf": "/run/systemd/resolve/resolv.conf",
        "cpuCFSQuota": true,
        "cpuCFSQuotaPeriod": "100ms",
        "maxOpenFiles": 1000000,
        "contentType": "application/vnd.kubernetes.protobuf",
        "kubeAPIQPS": 5,
        "kubeAPIBurst": 10,
        "serializeImagePulls": true,
        "evictionHard": {
            "imagefs.available": "15%",
            "memory.available": "100Mi",
            "nodefs.available": "10%",
            "nodefs.inodesFree": "5%"
        },
        "evictionPressureTransitionPeriod": "5m0s",
        "enableControllerAttachDetach": true,
        "makeIPTablesUtilChains": true,
        "iptablesMasqueradeBit": 14,
        "iptablesDropBit": 15,
        "failSwapOn": false,
        "containerLogMaxSize": "10Mi",
        "containerLogMaxFiles": 5,
        "configMapAndSecretChangeDetectionStrategy": "Watch",
        "enforceNodeAllocatable": [
            "pods"
        ]
    }
}

#### Summing Up

There is a way to get the configuration of an online Kubernetes node through the Kubernetes API (api/v1/nodes/<node_name>/proxy/configz). This might be handy if you want to code against Kubernetes or do not want to get into the intricacies of your particular cluster setup.

#### References

How to Inspect the Configuration of a Kubernetes Node was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

K. Tsakalozos

## Microk8s puts up its Istio and sails away

Istio almost immediately strikes you as enterprise-grade software. Not so much because of the complexity it introduces, but more because of the features it adds to your service mesh. Must-have features packaged together in a coherent framework:

• Traffic Management
• Security Policies
• Telemetry
• Performance Tuning

Since microk8s positions itself as the local Kubernetes cluster developers prototype on, it is no surprise that deployment of Istio is made dead simple. Let’s start with the microk8s deployment itself:

> sudo snap install microk8s --classic

Istio deployment is available with:

> microk8s.enable istio

There is a single question that we need to respond to at this point. Do we want to enforce mutual TLS authentication among sidecars? Istio places a proxy next to your services so as to take control over routing, security etc. If we know we have a mixed deployment with non-Istio and Istio-enabled services, we would rather not enforce mutual TLS:

> microk8s.enable istio
Enabling Istio
Enabling DNS
Applying manifest
service/kube-dns created
serviceaccount/kube-dns created
configmap/kube-dns created
deployment.extensions/kube-dns created
Restarting kubelet
DNS is enabled
Enforce mutual TLS authentication (https://bit.ly/2KB4j04) between sidecars? If unsure, choose N.
(y/N): y

Believe it or not, we are done; Istio v1.0 services are being set up. You can check the deployment progress with:

> watch microk8s.kubectl get all --all-namespaces

We have packaged istioctl in microk8s for your convenience:

> microk8s.istioctl get all --all-namespaces
NAME                          KIND                                      NAMESPACE      AGE
grafana-ports-mtls-disabled   Policy.authentication.istio.io.v1alpha1   istio-system   2m

DESTINATION-RULE NAME   HOST                                             SUBSETS   NAMESPACE      AGE
istio-policy            istio-policy.istio-system.svc.cluster.local                istio-system   3m
istio-telemetry         istio-telemetry.istio-system.svc.cluster.local             istio-system   3m

GATEWAY NAME                      HOSTS   NAMESPACE      AGE
istio-autogenerated-k8s-ingress   *       istio-system   3m

Do not get scared by the amount of services and deployments; everything is under the istio-system namespace. We are ready to start exploring!

### Demo Time!

Istio needs to inject sidecars into the pods of your deployment. In microk8s auto-injection is supported, so the only thing you have to do is label the namespace you will be using with istio-injection=enabled:

> microk8s.kubectl label namespace default istio-injection=enabled

Let’s now grab the bookinfo example from the v1.0 Istio release and apply it:

> wget https://raw.githubusercontent.com/istio/istio/release-1.0/samples/bookinfo/platform/kube/bookinfo.yaml
> microk8s.kubectl create -f bookinfo.yaml

The following services should be available soon:

> microk8s.kubectl get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
details       ClusterIP   10.152.183.33    <none>        9080/TCP
kubernetes    ClusterIP   10.152.183.1     <none>        443/TCP
productpage   ClusterIP   10.152.183.59    <none>        9080/TCP
ratings       ClusterIP   10.152.183.124   <none>        9080/TCP
reviews       ClusterIP   10.152.183.9     <none>        9080/TCP

We can reach the services using their ClusterIPs; we can, for example, get to the productpage in the above example by pointing our browser to 10.152.183.59:9080.
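If you are scripting against such a cluster, the `get svc` table above is easy to scrape for service endpoints. A minimal Python sketch (sample output copied from this article; the `endpoints` helper is invented for this illustration):

```python
# Sketch: map service names to "ClusterIP:port" strings from the
# `kubectl get svc` output shown above.
svc_output = """\
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
details       ClusterIP   10.152.183.33    <none>        9080/TCP
kubernetes    ClusterIP   10.152.183.1     <none>        443/TCP
productpage   ClusterIP   10.152.183.59    <none>        9080/TCP
ratings       ClusterIP   10.152.183.124   <none>        9080/TCP
reviews       ClusterIP   10.152.183.9     <none>        9080/TCP
"""

def endpoints(output):
    """Parse a `get svc` table into {name: "ClusterIP:port"}."""
    out = {}
    for line in output.splitlines()[1:]:       # skip the header row
        name, _type, cluster_ip, _ext, ports = line.split()
        port = ports.split("/")[0]             # "9080/TCP" -> "9080"
        out[name] = f"{cluster_ip}:{port}"
    return out

print(endpoints(svc_output)["productpage"])  # 10.152.183.59:9080
```

In a real script you would of course capture the table with `subprocess` (or, better, ask kubectl for JSON output with `-o json`) instead of hard-coding it.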
But let’s play by the rules and follow the official instructions on exposing the services via NodePort:

> wget https://raw.githubusercontent.com/istio/istio/release-1.0/samples/bookinfo/networking/bookinfo-gateway.yaml
> microk8s.kubectl create -f bookinfo-gateway.yaml

To get to the productpage through ingress we shamelessly copy the example instructions:

> microk8s.kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
31380

And our node is the localhost, so we can point our browser to http://localhost:31380/productpage

### Show me some graphs!

Of course graphs look nice in a blog post, so here you go. You will need to grab the ClusterIP of the Grafana service:

microk8s.kubectl -n istio-system get svc grafana

Prometheus is also available in the same way:

microk8s.kubectl -n istio-system get svc prometheus

And for traces you will need to look at the jaeger-query:

microk8s.kubectl -n istio-system get service/jaeger-query

The servicegraph endpoint is available with:

microk8s.kubectl -n istio-system get svc servicegraph

I should stop here. Go and check out the Istio documentation for more details on how to take advantage of what Istio is offering.

### What to keep from this post

### References

Microk8s puts up its Istio and sails away was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

admin

## MAAS 2.5.0 beta 1 released

Hello MAASters! I’m happy to announce that MAAS 2.5.0 beta 1 has been released. Beta 1 now features:

• Complete proxying of machine communication through the rack controller. This includes DNS, HTTP to the metadata server, proxying with Squid and, new in 2.5.0 beta 1, syslog.
• CentOS 7 & RHEL 7 storage support (requires a new Curtin version available in PPA).
• Full networking for KVM pods.
• ESXi network configuration.

For more information, please refer to MAAS Discourse [1].
[1]: https://discourse.maas.io/t/maas-2-5-0-beta-1-released/174

Read more

K. Tsakalozos

## Microk8s Docker Registry

A friend once asked, why would one prefer microk8s over minikube?… We never spoke since. True story! That was a hard question, especially for an engineer. The answer is not so obvious, largely because it has to do with personal preferences. Let me show you why.

Microk8s-wise, this is what you have to do to have a local Kubernetes cluster with a registry:

sudo snap install microk8s --edge --classic
microk8s.enable registry

#### How is this great?

• It is super fast! A couple of hundred MB over the internet tubes and you are all set.
• You skip the pain of going through the docs for setting up and configuring Kubernetes with persistent storage and the registry.

#### So why is this bad?

• As a Kubernetes engineer you may want to know what happens under the hood. What got deployed? What images? Where?
• As a Kubernetes user you may want to configure the registry. Where are the images stored? Can you change any access credentials?

Do you see why this is a matter of preference? Minikube is a mature solution for setting up Kubernetes in a VM. It runs everywhere (even on Windows) and it does only one thing: it sets up a Kubernetes cluster. On the other hand, microk8s offers Kubernetes as an application. It is opinionated and it takes a step towards automating common development workflows. Speaking of development workflows…

### The full story with the registry

The registry shipped with microk8s is available on port 32000 of the localhost. It is an insecure registry because, let’s be honest, who cares about security when doing local development :) . And it’s getting better, check this out! The docker daemon used by microk8s is configured to trust this insecure registry. It is this daemon we talk to when we want to upload images.
The easiest way to do so is by using the microk8s.docker command coming with microk8s:

# Let's get a Dockerfile first
wget https://raw.githubusercontent.com/nginxinc/docker-nginx/ddbbbdf9c410d105f82aa1b4dbf05c0021c84fd6/mainline/stretch/Dockerfile

# And build it
microk8s.docker build -t localhost:32000/nginx:testlocal .
microk8s.docker push localhost:32000/nginx:testlocal

If you prefer to use an external docker client, you should point it to the socket dockerd is listening on:

docker -H unix:///var/snap/microk8s/docker.sock ps

To use an image from the local registry, just reference it in your manifests:

apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  namespace: default
spec:
  containers:
  - name: nginx
    image: localhost:32000/nginx:testlocal
  restartPolicy: Always

And deploy with:

microk8s.kubectl create -f the-above-awesome-manifest.yaml

Microk8s and registry

### What to keep from this post?

You want Kubernetes? We deliver it as a (sn)app! You want to see your tool-chain in microk8s? Drop us a line. Send us a PR! We are pleased to see happy Kubernauts!

Those of you who are here for the gossip: he was not that good of a friend (obviously!). We only met at a meetup :) !

### References

Microk8s Docker Registry was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

admin

## MAAS 2.4.1 released!

Hello MAASters! MAAS 2.4.1 has now been released and it is a bug fix release. Please see more details in discourse.maas.io [1].

[1]: https://discourse.maas.io/t/maas-2-4-1-released/148

Read more

admin

## MAAS 2.5.0 alpha 1 released!

Hello MAASters! I’m happy to announce that the current MAAS development release (2.5.0 alpha 1) is now officially available in PPA for early testers.

What’s new?
Most notable MAAS 2.5.0 alpha 1 changes include:

• Proxying the communication through rack controllers
• HA improvements for better rack-to-region communication and discovery
• Adding new machines with IPMI credentials or a non-PXE IP address
• Commissioning during enlistment

For more details, please refer to the release notes available in Discourse [1].

Where to get it?

MAAS 2.5.0a1 is currently available for Ubuntu Bionic in ppa:maas/next.

sudo add-apt-repository ppa:maas/next
sudo apt-get update
sudo apt-get install maas

[1]: https://discourse.maas.io/t/maas-2-5-0-alpha-1/106

Read more

admin

## MAAS 2.4.0 (final) released!

Hello MAASters! I’m happy to announce that MAAS 2.4.0 (final) is now available! This new MAAS release introduces a set of exciting features and improvements to the performance, stability and usability of MAAS. MAAS 2.4.0 will be immediately available in the PPA, but it is in the process of being SRU’d into Ubuntu Bionic.

PPA Availability

MAAS 2.4.0 is currently available for Ubuntu Bionic in ppa:maas/stable for the coming week.

sudo add-apt-repository ppa:maas/stable
sudo apt-get update
sudo apt-get install maas

What’s new?

Most notable MAAS 2.4.0 changes include:

• Performance improvements across the backend & UI.
• KVM pod support for storage pools (over API).
• DNS UI to manage resource records.
• Audit logging.
• Machine locking.
• Expanded commissioning script support for firmware upgrades & HBA changes.
• NTP services now provided with Chrony.

For the full list of features & changes, please refer to the release notes.

Read more

admin

## MAAS 2.4.0 beta 2 released!

Hello MAASters! I’m happy to announce that MAAS 2.4.0 beta 2 is now released and is available for Ubuntu Bionic.
MAAS Availability

MAAS 2.4.0 beta 2 is currently available in Bionic’s archive or in the following PPA: ppa:maas/next

# MAAS 2.4.0 (beta2)

## New Features & Improvements

### MAAS Internals optimisation

Continuing with MAAS’ internal surgery, a few more improvements have been made:

• Backend improvements
  • Improve the image download process, to ensure rack controllers immediately start image download after the region has finished downloading images.
  • Reduce the service monitor interval to 30 seconds. The monitor tracks the status of the various services provided alongside MAAS (DNS, NTP, Proxy).
• UI performance optimizations for machines, pods, and zones, including better filtering of node types.

### KVM pod improvements

Continuing with the improvements for KVM pods, beta 2 adds the ability to:

• Define a default storage pool. This feature allows users to select the default storage pool to use when composing machines, in case multiple pools have been defined. Otherwise, MAAS will pick the storage pool automatically depending on which pool has the most available space.
• API – allow allocating machines with different storage pools. Allows users to request a machine with multiple storage devices from different storage pools. This feature uses storage tags to automatically map a storage pool in libvirt with a storage tag in MAAS.

### UI Improvements

• Remove remaining YUI in favor of AngularJS. As of beta 2, MAAS has now fully dropped the use of YUI for the web interface. The last sections using YUI were the settings page and the login page. Both have now been transitioned to AngularJS instead.
• Re-organize the settings page. The MAAS settings have now been reorganized into multiple tabs.

### Minor improvements

• API for default DNS domain selection. Adds the ability to define the default DNS domain. This is currently only available via the API.
• Vanilla framework upgrade. We would like to thank the Ubuntu web team for their hard work upgrading MAAS to the latest version of the Vanilla framework. MAAS is looking better and more consistent every day!

## Bug fixes

Please refer to the following for all 37 bug fixes in this release, which address issues with MAAS across the board:

https://launchpad.net/maas/+milestone/2.4.0beta2

Read more

admin

## MAAS 2.4.0 beta 1 released!

Hello MAASters! I’m happy to announce that MAAS 2.4.0 beta 1 and python-libmaas 0.6.0 have now been released and are available for Ubuntu Bionic.

MAAS Availability

MAAS 2.4.0 beta 1 is currently available in Bionic -proposed, waiting to be published into Ubuntu, or in the following PPA: ppa:maas/next

# MAAS 2.4.0 (beta1)

## Important announcements

### Debian package maas-dns no longer needed

The Debian package ‘maas-dns’ has now been made a transitional package. This package provided some post-installation configuration to prepare bind to be managed by MAAS, but it required maas-region-api to be installed first. In order to streamline the installation and make it easier for users to install HA environments, the configuration of bind has now been integrated into the ‘maas-region-api’ package itself, and we have made ‘maas-dns’ a dummy transitional package that can now be removed.

## New Features & Improvements

### MAAS Internals optimization

Major internal surgery to MAAS 2.4 continues to improve various areas not visible to the user. These updates will advance the overall performance of MAAS in larger environments. These improvements include:

• Database query optimizations. Further reductions in the number of database queries, significantly cutting the queries made by the boot source cache image import process from over 100 to just under 5.
• UI optimizations. MAAS is being optimized to reduce the amount of data sent over the websocket API to render the UI.
This is targeted at only processing data for viewable information, improving various legacy areas. Currently, the work done for this release includes:

• Only load historic script results (e.g. old commissioning/testing results) when requested/accessed by the user, instead of always making them available over the websocket.
• Only load node objects in listing pages when the specific object type is requested. For instance, only load machines when accessing the machines tab, instead of also loading devices and controllers.
• Change the UI mechanism to only request OS information on initial page load rather than every 10 seconds.

### KVM pod improvements

Continuing with the improvements from alpha 2, this new release provides more updates to KVM pods:

• Added overcommit ratios for CPU and memory. When composing or allocating machines, previous versions of MAAS would allow the user to request as many resources as the user wanted, regardless of the available resources. This created issues when dynamically allocating machines, as it could allow users to create an infinite number of machines even when the physical host was already overcommitted. Adding this feature allows administrators to control the amount of resources they want to overcommit.
• Added the ability to filter which pods or pod types to avoid when allocating machines. Provides users with the ability to select which pods or pod types not to allocate resources from. This is particularly useful when dynamically allocating machines when MAAS has a large number of pods.

### DNS UI Improvements

MAAS 2.0 introduced the ability to manage DNS: not only the creation of new domains, but also the creation of resource records such as A, AAAA, CNAME, etc. However, most of this functionality has only been available over the API, as the UI only allowed adding and removing domains.
As of 2.4, MAAS now adds the ability to manage not only DNS domains but also the following resource records:

• Added ability to edit domains (e.g. TTL, name, authoritative).
• Added ability to create and delete resource records (A, AAAA, CNAME, TXT, etc.).
• Added ability to edit resource records.

### Navigation UI improvements

MAAS 2.4 beta 1 is changing the top-level navigation:

• Rename ‘Zones’ to ‘AZs’.
• Add ‘Machines, Devices, Controllers’ to the top-level navigation instead of ‘Hardware’.

### Minor improvements

A few notable improvements being made available in MAAS 2.4 include:

• Add ability to force the boot type for IPMI machines. Hardware manufacturers have been upgrading their BMC firmware versions to be more compliant with the Intel IPMI 2.0 spec. Unfortunately, the IPMI 2.0 spec has made changes that provide a non-backward-compatible user experience. For example, if the administrator configured their machine to always PXE boot over EFI, and the user executed an IPMI command without specifying the boot type, the machine would use the value configured in the BIOS. However, with these new changes, the user is required to always specify a boot type, avoiding a fallback to the BIOS. As such, MAAS now allows the selection of a boot type (auto, legacy, efi) to force the machine to always PXE with the desired type (on the next boot only).
• Add ability, via the API, to skip the BMC configuration on commissioning. Provides an API option to skip the BMC auto-configuration during commissioning for IPMI systems. This option helps admins keep credentials provided over the API when adding new nodes.

## Bug fixes

Please refer to the following for all 32 bug fixes in this release:

https://launchpad.net/maas/+milestone/2.4.0beta1

Read more

admin

## MAAS 2.4.0 Alpha 2 released!

Hello MAASters! I’m happy to announce that MAAS 2.4.0 alpha 2 has now been released and is available for Ubuntu Bionic.
MAAS Availability

MAAS 2.4.0 alpha 2 is available in the Bionic -proposed archive or in the following PPA: ppa:maas/next

# MAAS 2.4.0 (alpha2)

## Important announcements

### NTP services now provided by Chrony

Starting with 2.4 alpha 2, and in common with changes being made to Ubuntu Server, MAAS replaces ‘ntpd’ with Chrony for the NTP protocol. MAAS will handle the upgrade process and automatically resume NTP service operation.

### Vanilla CSS Framework Transition

MAAS 2.4 is undergoing a transition to a new version of the Vanilla CSS framework, which will bring a fresher look to the MAAS UI. This framework transition is currently work in progress and not all of the UI has been fully updated, so please expect to see some inconsistencies in this new release.

## New Features & Improvements

### NTP services now provided by Chrony

Starting from MAAS 2.4 alpha 2, Chrony is now the default NTP service, replacing ntpd. This work has been done in alignment with the Ubuntu Server and Security teams, which support chrony instead of ntpd. MAAS will continue to provide services exactly the same way, and users will not be affected by the changes; the upgrade process is handled transparently. This means that:

• MAAS will configure chrony as peers on all region controllers
• MAAS will configure chrony as a client of peers for all rack controllers
• Machines will use the rack controllers as they do today

### MAAS Internals optimization

MAAS 2.4 is currently undergoing major surgery to improve various areas of operation that are not visible to the user. These updates will improve the overall performance of MAAS in larger environments. These improvements include:

• AsyncIO-based event loop. MAAS has an event loop which performs various internal actions. In older versions of MAAS, the event loop was managed by the default Twisted event loop. MAAS now uses an asyncio-based event loop, driven by uvloop, which is targeted at improving internal performance.
• Improved daemon management
  • MAAS has changed the way daemons are run so that users can see both ‘regiond’ and ‘rackd’ as processes in the process list.
  • As part of these changes, regiond workers are now managed by a master regiond process. In older versions of MAAS each worker was run directly by systemd. The master process is now in charge of ensuring workers are running at all times and re-spawning new workers in case of failures. This also allows users to see the worker hierarchy in the process list.
• Ability to increase the number of regiond workers
  • Following the improved way MAAS daemons are run, further internal changes have been made to allow the number of regiond workers to be increased automatically. This allows MAAS to scale to handle more internal operations in larger environments.
  • While this capability is already available, it is not yet enabled by default. It will become available in the following milestone release.
• Database query optimisations
  • While inspecting the internal operations of MAAS, it was discovered that multiple unnecessary database queries were performed for various operations. Optimising these required internal improvements to reduce the footprint of these operations. Areas addressed in this release include:
    • When saving node objects (e.g. making any update of a machine, device, rack controller, etc.), MAAS validated changes across various fields. This required an increased number of queries for fields even when they were not being updated. MAAS now tracks the specific fields that change and only performs queries for those fields. For example, to update a power state, MAAS used to perform 11 queries; after these improvements, only 1 query is performed.
    • On every transaction, MAAS performed 2 queries to update the timestamp. This has now been consolidated into a single query per transaction.
  • These changes greatly improve MAAS performance and database utilisation in larger environments.
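The field-tracking idea behind the query reduction can be illustrated with a small sketch. This is a toy model of the concept, not MAAS code (all names here are invented): remember which fields actually changed, and persist only those on save.

```python
class TrackedFields:
    """Toy sketch: track dirty fields so save() touches only the
    columns that changed (one query instead of many)."""

    def __init__(self, **fields):
        # Bypass __setattr__ while setting up internal state.
        self.__dict__["_fields"] = dict(fields)
        self.__dict__["_dirty"] = set()

    def __getattr__(self, name):
        try:
            return self.__dict__["_fields"][name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # Only mark a field dirty if its value actually changed.
        if self._fields.get(name) != value:
            self._dirty.add(name)
        self._fields[name] = value

    def save(self):
        changed = {name: self._fields[name] for name in self._dirty}
        self._dirty.clear()
        return changed  # a real ORM would UPDATE only these columns
```

Setting a field to its existing value marks nothing dirty, so a power-state update results in exactly one field being written back.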
More improvements will continue to be made as we examine further areas of MAAS.

• UI optimisations
  • MAAS is being optimised to reduce the amount of data loaded over the websocket API to render the UI. This work targets processing data only for viewable information, improving various legacy areas. The work done so far includes:
    • Script results are only loaded for viewable nodes in the machine listing page, reducing the overall amount of data loaded.
    • The node object is updated over the websocket only when something has changed in the database, reducing the data transferred to clients as well as the number of internal queries.

### Audit logging

Continuing the audit logging improvements, alpha 2 adds audit logging for all user actions that affect hardware testing and commissioning.

### KVM pod improvements

MAAS’ KVM pod support was initially developed as a feature to help developers quickly iterate and test new functionality while developing MAAS. It has, however, become a feature that allows not only developers but also administrators to make better use of resources across their datacenter. Since the feature was initially created for developers, some capabilities were lacking. As such, in 2.4 we are improving the usability of KVM pods:

• Pod AZs. MAAS now allows setting the physical zone of a pod. This helps administrators by conceptually placing their KVM pods in an AZ, which enables them to request/allocate machines on demand based on AZ. All VMs created from a pod will inherit its AZ.

• Pod tagging. MAAS now adds the ability to set tags on a pod. This allows administrators to use tags to allow or prevent the creation of VMs inside a pod. For example, if the administrator requests a machine with a tag named ‘virtual’, MAAS will filter out all physical machines and only consider other VMs or a KVM pod for machine allocation.

## Bug fixes

Please refer to the following for all bug fixes in this release.
https://launchpad.net/maas/+milestone/2.4.0alpha2

abeato

## Analysis and Plots of Solutions to Complex Powers

In chapter 5 of his mind-blowing “The Road to Reality”, Penrose devotes a section to complex powers, that is, to the solutions to

$$w^z~~~\text{with}~~~w,z \in \mathbb{C}$$

In this post I develop his exposition a bit further and explore what the solutions look like with the help of some simple Python scripts. The scripts can be found in this github repo, and all the figures in this post can be replicated by running

git clone https://github.com/alfonsosanchezbeato/exponential-spiral.git
cd exponential-spiral; ./spiral_examples.py

The scripts make use of numpy and matplotlib, so make sure those are installed before running them.

Now, let’s develop the math behind this. The values of $$w^z$$ can be found by using the exponential function as

$$w^z=e^{z\log{w}}=e^{z~\text{Log}~w}e^{2\pi nzi}$$

In this equation, “log” is the multi-valued complex natural logarithm function, while “Log” is one of its branches, concretely the principal value, whose imaginary part lies in the interval $$(−\pi, \pi]$$. The equation reflects the fact that $$\log{w}=\text{Log}~w + 2\pi ni$$ with $$n \in \mathbb{Z}$$. This shows the remarkable fact that, in the general case, the equation has infinitely many solutions. For the rest of the discussion we will separate $$w^z$$ as follows:

$$w^z=e^{z~\text{Log}~w}e^{2\pi nzi}=C \cdot F_n$$

with constant $$C=e^{z~\text{Log}~w}$$ and the rest being the sequence $$F_n=e^{2\pi nzi}$$. Since $$C$$ is a complex constant that multiplies $$F_n$$, its only influence is to rotate and scale all solutions equally. Noticeably, $$w$$ appears only in this constant, which shows that the values of $$z$$ are what really determines the number and general shape of the solutions. Therefore, we will concentrate on analyzing the behavior of $$F_n$$, by seeing what solutions we can find when we restrict $$z$$ to different domains.
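The decomposition $$C \cdot F_n$$ translates directly into code. The following sketch is mine rather than part of the repo (the function name is illustrative), using only Python's standard cmath module:

```python
import cmath

def complex_power_branches(w, z, n_values=range(-2, 3)):
    """Branches of w**z, computed as C * F_n with
    C = e^(z Log w) and F_n = e^(2*pi*n*z*i)."""
    C = cmath.exp(z * cmath.log(w))  # cmath.log returns the principal value Log w
    return [C * cmath.exp(2j * cmath.pi * n * z) for n in n_values]

# For z = 1/2 the factor F_n alternates between 1 and -1, so
# 2**(1/2) yields just the two familiar square roots of 2.
roots = complex_power_branches(2, 0.5)
```

Every returned value squares back to $$w$$, and for $$z=1/2$$ only two distinct values appear, as the next paragraph argues in general.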
Starting by restricting $$z$$ to integers ($$z \in \mathbb{Z}$$), it is easy to see that there is only one resulting solution, as the factor $$F_n=e^{2\pi nzi}=1$$ in this case (it just rotates the solution $$2\pi$$ radians an integer number of times, leaving it unmodified). As expected, a complex number raised to an integer power has only one solution.

If we let $$z$$ be a rational number ($$z=p/q$$, with $$p$$ and $$q$$ integers chosen so we have the canonical form), we obtain

$$F_n=e^{2\pi\frac{pn}{q} i}$$

which makes the sequence $$F_n$$ periodic with period $$q$$, that is, there are $$q$$ solutions to the equation. So we have two solutions for $$w^{1/2}$$, three for $$w^{1/3}$$, etc., as expected, since that is the number of solutions for square roots, cube roots and so on. The values will be the vertices of a regular polygon in the complex plane. For instance, in figure 1 the solutions for $$2^{1/5}$$ are displayed.

If $$z$$ is real, $$e^{2\pi nzi}$$ is no longer periodic and takes infinitely many values on the unit circle, and therefore $$w^z$$ has infinitely many values that lie on a circle of radius $$|C|$$.

In the more general case, $$z \in \mathbb{C}$$, that is, $$z=a+bi$$ with $$a$$ and $$b$$ real numbers, we have

$$F_n=e^{-2\pi bn}e^{2\pi ani}.$$

There is now a scaling factor, $$e^{-2\pi bn}$$, that makes the modulus of the solutions vary with $$n$$, scattering them across the complex plane, while $$e^{2\pi ani}$$ rotates them as $$n$$ changes. The result is an infinite number of solutions for $$w^z$$ that lie on an equiangular spiral in the complex plane. The spiral can be seen if we extend the domain of $$F$$ to $$\mathbb{R}$$, that is

$$F(t)=e^{-2\pi bt}e^{2\pi ati}~~~\text{with}~~~t \in \mathbb{R}.$$

In figure 2 we can see one example, which shows some solutions to $$2^{0.4-0.1i}$$, plus the spiral that passes over them.
In fact, in Penrose’s book it is stated that these values are found at the intersection of two equiangular spirals, although he leaves finding them as an exercise for the reader (problem 5.9). Let’s see then if we can find more spirals that cross these points. We are searching for functions that have the same value as $$F(t)$$ when $$t$$ is an integer. We can easily verify that the family of functions

$$F_k'(t)=F(t)e^{2\pi kti}~~~\text{with}~~~k \in \mathbb{Z}$$

is compatible with this restriction, as $$e^{2\pi kti}=1$$ in that case (integer $$t$$). Figures 3 and 4 again represent some solutions to $$2^{0.4-0.1i}$$, $$F(t)$$ (which is the same as the spiral for $$k=0$$), plus the spirals for $$k=-1$$ and $$k=1$$ respectively. We can see there that the solutions do lie at the intersection of two spirals. If we superpose these 3 spirals, the ones for $$k=1$$ and $$k=-1$$ also cross at places other than the complex powers, as can be seen in figure 5. But if we choose two consecutive values of $$k$$, the two spirals will cross only at the solutions to $$w^z$$. See, for instance, figure 6, where the spirals for $$k=\{-2,-1\}$$ are plotted. We see that any pair of such spirals fulfills Penrose’s description.

In general, the number of places at which two spirals cross depends on the difference between their $$k$$-numbers. If we have, say, $$F_k'$$ and $$F_l'$$ with $$k>l$$, they will cross when

$$t=\ldots,0,\frac{1}{k-l},\frac{2}{k-l},\ldots,\frac{k-l-1}{k-l},1,1+\frac{1}{k-l},\ldots$$

That is, they will cross when $$t$$ is an integer (at the solutions to $$w^z$$) and also at $$k-l-1$$ points between consecutive solutions.

Let’s now see another interesting special case: when $$z=bi$$, that is, when it is pure imaginary. In this case, $$e^{2\pi ati}$$ is $$1$$, and there is no turn in the complex plane as $$t$$ grows. The spiral $$F(t)$$ degenerates into a half-line that starts at the origin (which is approached as $$t \to \infty$$ if $$b>0$$).
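The claim that every spiral $$F_k'$$ passes through the solutions — i.e. that $$F_k'(t)=F(t)$$ whenever $$t$$ is an integer — is easy to check numerically. This is a quick sketch of mine (not taken from the repo), using the example $$2^{0.4-0.1i}$$:

```python
import cmath

a, b = 0.4, -0.1  # z = a + bi for the example 2**(0.4 - 0.1i)

def F_prime(t, k=0):
    """F'_k(t) = e^(-2*pi*b*t) * e^(2*pi*a*t*i) * e^(2*pi*k*t*i); k = 0 gives F(t)."""
    return (cmath.exp(-2 * cmath.pi * b * t)
            * cmath.exp(2j * cmath.pi * a * t)
            * cmath.exp(2j * cmath.pi * k * t))

# At every integer t the extra factor e^(2*pi*k*t*i) is 1, so all the
# spirals agree there and hence intersect at the solutions of w**z.
for n in range(-3, 4):
    for k in (-2, -1, 1, 2):
        assert abs(F_prime(n, k) - F_prime(n)) < 1e-9
```

At non-integer $$t$$ the factor $$e^{2\pi kti}$$ rotates the point, which is why the spirals separate between solutions.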
This can be appreciated in figure 7, where the line and the spirals for $$k=-1$$ and $$k=1$$ are plotted for $$20^{0.1i}$$. The two spirals are mirrored around the half-line.

Digging further into this case, it turns out that a pure imaginary number raised to a pure imaginary power can produce a real result. For instance, for $$i^{0.1i}$$, we see in figure 8 that the roots lie on the positive half of the real line. That something like this can produce real numbers is a curiosity that has historically intrigued mathematicians ($$i^i$$ has real values too!).

And with this I finish the post. It is really amusing to play with the values of $$w$$ and $$z$$; if you want to do so, you can use the Python scripts I pointed to at the beginning of the post. I hope you enjoyed the post as much as I did writing it.

Dustin Kirkland

## RFC: The New Ubuntu 18.04 LTS Server Installer

One of the many excellent suggestions from last year's HackerNews thread, Ask HN: What do you want to see in Ubuntu 17.10?, was to refresh the Ubuntu server's command line installer:

We're pleased to introduce this new installer, which will be the default Server installer for 18.04 LTS, and solicit your feedback. Follow the instructions below to download the current daily image and install it into a KVM. Alternatively, you could write it to a flash drive and install a physical machine, or try it in the virtual machine of your choice (VMware, VirtualBox, etc.).

$ wget http://cdimage.ubuntu.com/ubuntu-server/daily-live/current/bionic-live-server-amd64.iso
$ qemu-img create -f raw target.img 10G
$ kvm -m 1024 -boot d -cdrom bionic-live-server-amd64.iso -hda target.img
...
\$ kvm -m 1024 target.img

For those too busy to try it themselves at the moment, I've taken a series of screenshots below, for your review.

Finally, you can provide feedback, bugs, patches, and feature requests against the Subiquity project in Launchpad:

Cheers,
Dustin

Dustin Kirkland

## 10 Amazing Years of Ubuntu and Canonical

 February 2008, Canonical's office in Lexington, MA
10 years ago today, I joined Canonical, on the very earliest version of the Ubuntu Server Team!

And in the decade since, I've had the tremendous privilege to work with so many amazing people, and the opportunity to contribute so much open source software to the Ubuntu ecosystem.

Marking the occasion, I've reflected on much of my work over that time period and thought I'd put down in writing a few of the things I'm most proud of (in chronological order)...  Maybe one day, my daughters will read this and think their daddy was a real geek :-)

#### 3. Byobu (December 2008)

If you know me at all, you know my passion for the command line UI/UX that is "Byobu".  Byobu was born as the "screen-profiles" project, over lunch at Google in Mountain View, in December of 2008, at the Ubuntu Developer Summit.  Around the lunch table, several of us (including Nick Barcet, Dave Walker, Michael Halcrow, and others), shared our tips and tricks from our own ~/.screenrc configuration files.  In Cape Town, February 2010, at the suggestion of Gustavo Niemeyer, I ported Byobu from Screen to Tmux.  Since Ubuntu Servers don't generally have GUIs, Byobu is designed to be a really nice interface to the Ubuntu command line environment.

#### http://blog.dustinkirkland.com/2011/08/formal-introduction-to-ubuntu-orchestra.html

In 2009, Canonical purchased 5 Dell laptops, which became the Ubuntu Server team's first "cloud".  These laptops were our very first lab for deploying and testing Eucalyptus clouds.  I was responsible for those machines at my house for a while, and I automated their installation with PXE, TFTP, DHCP, DNS, and a ton of nasty debian-installer preseed data.  That said -- it worked!  As it turned out, Scott Moser and Mathias Gug had both created similar setups at their houses for the same reason.  At the time, I was mentoring a new hire at Canonical named Andres Rodriguez; he took over our part-time hacks, and we worked together to create the Orchestra project.  Orchestra itself was short-lived.  It was severely limited by Cobbler as a foundation technology, and so the Orchestra project was killed by Canonical.  But six months later, a new project was created, based on the same general concept -- physical machine provisioning at scale -- with an entire squad of engineers led by...Andres Rodriguez :-)  MAAS today is easily one of the most important projects in the Ubuntu ecosystem and one of the most successful products in Canonical's portfolio.

#### 7. pollinate / pollen / entropy.ubuntu.com (February 2014)

In 2013, I set out to secure Ubuntu at large from a set of attacks stemming from insufficient entropy at first boot.  This was especially problematic in virtual machine instances in public clouds, where every instance is, by design, exactly identical to many others.  Moreover, the first thing that instance does is usually ... generate SSH keys.  This isn't hypothetical -- it's quite real.  Raspberry Pis running Debian were deemed susceptible to this exact problem in November 2015.  So I designed and implemented a client (a shell script that runs at boot and fetches some entropy from one or more sources), as well as a high-performance server (written in Golang).  The client is the 'pollinate' script, which runs on the first boot of every Ubuntu server, and the server is the cluster of physical machines processing hundreds of requests per minute at entropy.ubuntu.com.  Many people helped review the design and implementation, including Kees Cook, Jamie Strandboge, Seth Arnold, Tyler Hicks, James Troup, Scott Moser, Steve Langasek, Gustavo Niemeyer, and others.

#### 8. The Orange Box (May 2014)

In December of 2011, in my regular 1:1 with my manager, Mark Shuttleworth, I told him about these new "Intel NUCs", which I had bought and placed around my house.  I had 3, each of which was running Ubuntu and attached to a TV as a media player (music, videos, pictures, etc).  In their spare time, though, they were OpenStack Nova nodes, capable of running a couple of virtual machines.  Mark immediately asked, "How many of those could you fit into a suitcase?"  Within 24 hours, Mark had reached out to the good folks at TranquilPC and introduced me to my new mission -- designing the Orange Box.  I worked with the Tranquil folks through Christmas, and we took our first delivery of 5 of these boxes in January of 2014.  Each chassis held 10 little Intel NUC servers and a switch, as well as a few peripherals.  Effectively, it's a small data center that travels.  We spent the next 4 months working on the hardware under wraps and then unveiled them at the OpenStack Summit in Atlanta in May 2014.  We've gone through a couple of iterations on the hardware and software over the last 4 years, and these machines continue to deliver tremendous value, from live demos at the booth, to customer workshops on premises, or simply accelerating our own developer productivity by "shipping them a lab in a suitcase".  I worked extensively with Dan Poler on this project, over the course of a couple of years.

#### 9. Hollywood (December 2014)

Perhaps the highlight of my professional career came in October of 2016.  Watching Saturday Night Live with my wife Kim, we were laughing at a skit that poked fun at another of my favorite shows, Mr. Robot.  On the computer screen behind the main character, I clearly spotted Hollywood!  Hollywood is just a silly, fun little project I created on a plane one day, mostly to amuse Kim.  But now it's been used on Saturday Night Live and NBC Dateline News, and in an Experian TV commercial!  Even Jess Frazelle created a Docker container for it.

#### 10. petname / golang-petname / python-petname (January 2015)

From "warty warthog" to "bionic beaver", we've always had a focus on fun, and user experience here in Ubuntu.  How hard is it to talk to your colleague about your Amazon EC2 instance, "i-83ab39f93e"?  Or your container "adfxkenw"?  We set out to make something a little more user-friendly with our "petnames".  Petnames are randomly generated "adjective-animal" names, which are easy to pronounce, spell, and remember.  I curated and created libraries that are easily usable in Shell, Golang, and Python.  With the help of colleagues like Stephane Graber and Andres Rodriguez, we now use these in many places in the Ubuntu ecosystem, such as LXD and MAAS.
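The idea is simple enough to sketch in a few lines of Python. This is an illustrative toy with tiny made-up word lists, not the actual petname library, which ships much larger curated dictionaries:

```python
import random

# Tiny illustrative word lists; the real petname libraries ship far larger ones.
ADJECTIVES = ["bionic", "warty", "brave", "calm", "eager"]
ANIMALS = ["beaver", "warthog", "falcon", "otter", "heron"]

def generate_petname(words=2, separator="-"):
    """Return an easy-to-say name such as 'bionic-beaver':
    (words - 1) adjectives followed by one animal."""
    parts = [random.choice(ADJECTIVES) for _ in range(words - 1)]
    parts.append(random.choice(ANIMALS))
    return separator.join(parts)
```

Because the components are drawn from pronounceable word lists, the results are far easier to remember and relay than an "i-83ab39f93e"-style identifier.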

If you've read this post, thank you for indulging me in a nostalgic little trip down memory lane!  I've had an amazing time designing, implementing, creating, and innovating with some of the most amazing people in the entire technology industry.  And here's to a productive, fun future!

Cheers,
:-Dustin

# Hello MAASters!

I’m happy to announce that MAAS 2.4.0 alpha 1 and python-libmaas 0.6.0 have now been released and are available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 alpha 1 is available in the Bionic -proposed archive or in the following PPA:
ppa:maas/next

Python-libmaas Availability
Libmaas is available in the Ubuntu Bionic archive or you can download the source from:

# MAAS 2.4.0 (alpha1)

## Important announcements

### Dependency on tgt (iSCSI) has now been dropped

Starting from MAAS 2.3, the way MAAS runs ephemeral environments and performs deployments was changed away from using iSCSI. Instead, we introduced the ability to do the same using a squashfs image. That completely removed the requirement for tgt, but we didn’t drop the dependency in 2.3. As of 2.4, however, tgt has now been completely removed.

### Dependency on apache2 has now been dropped in the debian packages

Starting from MAAS 2.0, MAAS made the UI available on port 5240 and deprecated the use of port 80. However, to avoid breaking users upgrading from the previous LTS, MAAS continued to depend on apache2 to provide a reverse proxy allowing users to connect via port 80.

The MAAS snap, however, changed that behavior, no longer providing access to MAAS via port 80. To keep the debian package consistent with the snap, starting from MAAS 2.4 it no longer depends on apache2 to provide a reverse proxy capability from port 80.

### Python libmaas (0.6.0) now available in the Ubuntu Archive

I’m happy to announce that the new MAAS Client Library is now available in the Ubuntu Archives for Bionic. Libmaas is an asyncio based client library that provides a nice interface to interact with MAAS. More details below.

## New Features & Improvements

### Machine Locking

MAAS now adds the ability to lock machines, which prevents the user from performing actions that could change a machine's state. This gives MAAS a mechanism to prevent potentially catastrophic actions, such as mistakenly powering off machines or mistakenly releasing machines, which could bring workloads down.
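The behavior amounts to a guard on state-changing actions. The following is an illustrative toy model of the concept, not MAAS code (all class and method names here are invented):

```python
class MachineLockedError(Exception):
    """Raised when a state-changing action is attempted on a locked machine."""

class Machine:
    def __init__(self, hostname):
        self.hostname = hostname
        self.locked = False

    def lock(self):
        self.locked = True

    def unlock(self):
        self.locked = False

    def release(self):
        # Any action that could change machine state checks the lock first.
        if self.locked:
            raise MachineLockedError(f"{self.hostname} is locked")
        return f"{self.hostname} released"
```

A locked machine refuses destructive operations until an administrator explicitly unlocks it.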

### Audit logging

MAAS 2.4 now allows administrators to audit users’ actions with the introduction of audit logging. The audit logs are available to administrators via the MAAS CLI/API, giving them a centralized location to access these logs.

Documentation is in the process of being published. For raw access please refer to the following link:

https://github.com/CanonicalLtd/maas-docs/pull/766/commits/eb05fb5efa42ba850446a21ca0d55cf34ced2f5d

### Commissioning Harness – Supporting firmware upgrade and hardware specific scripts

The commissioning harness has been expanded with various improvements to help administrators write their own firmware upgrade and hardware-specific scripts. These improvements address various challenges administrators face when performing such tasks at scale. The improvements include:

• Ability to auto-select all the firmware upgrade/storage hardware changes (API only, UI will be available soon)

• Ability to run scripts only for the hardware they are intended to run on.

• Ability to reboot the machine while on the commissioning environment without disrupting the commissioning process.

• Create a hardware-specific script by declaring which machines it needs to run on, specifying the PCI ID, modalias, vendor or model of the machine or device.

• Create firmware upgrade scripts that require a reboot before the machine finishes the commissioning process, by allowing this to be described in the script’s metadata.

• Allow administrators to define where a script can obtain proprietary firmware and/or proprietary tools needed to perform its operations.

### Minor improvements – Gather information about BIOS & firmware

MAAS now gathers more information about the underlying system, such as the Model, Serial, BIOS and firmware information of a machine (where available). It also gathers the information for storage devices as well as network interfaces.

## MAAS Client Library (python-libmaas)

#### New upstream release – 0.6.0

A new upstream release is now available in the Ubuntu Archive for Bionic. The new release includes the following changes:

• Configure partitions and mount points

• Configure Bcache

• Configure RAID

• Configure LVM

## Known issues & work arounds

### LP: #1748712  – 2.4.0a1 upgrade failed with old node event data

It has been reported that an upgrade to MAAS 2.4.0a1 failed due to old data from a non-existent node being stored in the database. This could have been caused by an older development version of MAAS, which would have left an entry in the node event table. A workaround is provided in the bug report.

If you hit this issue, please update the bug report immediately so MAAS developers can investigate.

## Bug fixes

Please refer to the following for all bug fixes in this release.