Canonical Voices

Posts tagged with 'ubuntu'

K. Tsakalozos

If you take a look at MicroK8s’ channel information with snap info microk8s you will see all available Kubernetes releases:

channels:
stable: v1.14.1 2019-04-18 (522) 214MB classic
candidate: v1.14.1 2019-04-15 (522) 214MB classic
beta: v1.14.1 2019-04-15 (522) 214MB classic
edge: v1.14.1 2019-05-10 (587) 217MB classic
1.15/stable: –
1.15/candidate: –
1.15/beta: –
1.15/edge: v1.15.0-alpha.3 2019-05-08 (578) 215MB classic
1.14/stable: v1.14.1 2019-04-18 (521) 214MB classic
1.14/candidate: v1.14.1 2019-04-15 (521) 214MB classic
1.14/beta: v1.14.1 2019-04-15 (521) 214MB classic
1.14/edge: v1.14.1 2019-05-11 (590) 217MB classic
1.13/stable: v1.13.5 2019-04-22 (526) 237MB classic
1.13/candidate: v1.13.6 2019-05-09 (581) 237MB classic
1.13/beta: v1.13.6 2019-05-09 (581) 237MB classic
1.13/edge: v1.13.6 2019-05-08 (581) 237MB classic
1.12/stable: v1.12.8 2019-05-02 (547) 259MB classic
1.12/candidate: v1.12.8 2019-05-01 (547) 259MB classic
1.12/beta: v1.12.8 2019-05-01 (547) 259MB classic
1.12/edge: v1.12.8 2019-04-24 (547) 259MB classic
1.11/stable: v1.11.10 2019-05-10 (557) 258MB classic
1.11/candidate: v1.11.10 2019-05-02 (557) 258MB classic
1.11/beta: v1.11.10 2019-05-02 (557) 258MB classic
1.11/edge: v1.11.10 2019-05-01 (557) 258MB classic
1.10/stable: v1.10.13 2019-04-22 (546) 222MB classic
1.10/candidate: v1.10.13 2019-04-22 (546) 222MB classic
1.10/beta: v1.10.13 2019-04-22 (546) 222MB classic
1.10/edge: v1.10.13 2019-04-22 (546) 222MB classic

If you want to follow the v1.14 Kubernetes releases you would:

sudo snap install microk8s --classic --channel=1.14/stable

Whereas if you always want to be on the latest stable release you would:

sudo snap install microk8s --classic

What is new in the channels list above is the pre-stable releases found under the 1.15 track (at the time of this writing the latest stable release is v1.14).

Following the pre-stable releases

We are committed to shipping MicroK8s with pre-stable releases under the following scheme.

  • The edge channel (eg 1.15/edge) holds the alpha upstream releases.
  • The beta channel (eg 1.15/beta) holds the beta upstream releases.
  • The candidate channel (eg 1.15/candidate) holds the release candidate of upstream releases.

Pre-stable releases will be available the same day they are released upstream.

If you want to test your work against the alpha 1.15 release simply do:

sudo snap install microk8s --classic --channel=1.15/edge

However, be aware that pre-stable releases may change before the stable release. Be sure to test any work against the stable release once it becomes available.
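
If you are already tracking a pre-stable channel, you do not need to reinstall to move to stable; a small sketch using snap refresh, with the 1.15 track from this example:

# Once v1.15 reaches stable, switch the existing installation over to it:
sudo snap refresh microk8s --channel=1.15/stable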

Tracks with stable releases

Tracks are meant to serve specific Kubernetes releases. For example the 1.15 track with its four channels, 1.15/edge, 1.15/beta, 1.15/candidate, 1.15/stable, serves the v1.15 K8s release. As soon as a new K8s stable release is made, all channels of the corresponding track are updated. In our example, as soon as v1.15 stable is released the corresponding track channels are updated in the following way:

  • The 1.15/edge channel is updated on every commit merged on the MicroK8s repository paired with the v1.15 stable K8s release.
  • The 1.15/beta and 1.15/candidate channels are updated on every upstream patch release. They hold whatever the 1.15/edge channel has at the time of the patch release.
  • The 1.15/stable channel gets updated with what 1.15/candidate holds a week after a new revision is put into 1.15/candidate.

I am confused. Which channel is right for me?

The single question you need to answer is what to put in the channel argument below:

sudo snap install microk8s --classic --channel=<What_to_use_here?>

Here are some suggestions for the channel to use based on your needs:

  • I want to always be on the latest stable Kubernetes.
    Use --channel=latest
  • I want to always be on the latest release in a specific upstream K8s release.
    Use --channel=<release>/stable eg --channel=1.14/stable.
  • I want to test-drive a pre-stable release.
    Use --channel=<next_release>/edge for alpha releases
    Use --channel=<next_release>/beta for beta releases
    Use --channel=<next_release>/candidate for candidate releases
  • I am waiting for a bug fix on MicroK8s:
    Use --channel=<release>/edge
  • I am waiting for a bug fix on upstream Kubernetes:
    Use --channel=<release>/candidate

Developing K8s core services with MicroK8s

One of the purposes of pre-stable releases is to assist K8s core service developers in their task. Let’s see how we can hook a local build of kubelet to a MicroK8s deployment.

Following the build instructions for Kubernetes we:

git clone https://github.com/kubernetes/kubernetes
cd kubernetes
build/run.sh make kubelet

The kubelet binary should be available under:

_output/dockerized/bin/linux/amd64/kubelet
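
A quick sanity check that the build produced a runnable binary:

# Print the version of the freshly built kubelet:
_output/dockerized/bin/linux/amd64/kubelet --version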

Let’s grab a MicroK8s deployment:

sudo snap install microk8s --classic --channel=1.15/edge

To see what arguments the kubelet is running with we:

> ps -ef | grep kubelet
root 24184 1 2 17:28 ? 00:00:54 /snap/microk8s/578/kubelet
--kubeconfig=/snap/microk8s/578/configs/kubelet.config
--cert-dir=/var/snap/microk8s/578/certs
--client-ca-file=/var/snap/microk8s/578/certs/ca.crt
--anonymous-auth=false
--network-plugin=kubenet
--root-dir=/var/snap/microk8s/common/var/lib/kubelet
--fail-swap-on=false
--pod-cidr=10.1.1.0/24
--non-masquerade-cidr=10.152.183.0/24
--cni-bin-dir=/snap/microk8s/578/opt/cni/bin/
--feature-gates=DevicePlugins=true
--eviction-hard=memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi
--container-runtime=remote
--container-runtime-endpoint=/var/snap/microk8s/common/run/containerd.sock
--node-labels=microk8s.io/cluster=true

We now need to stop the kubelet that comes with MicroK8s and start our own build:

sudo systemctl stop snap.microk8s.daemon-kubelet.service
sudo _output/dockerized/bin/linux/amd64/kubelet \
--kubeconfig=/snap/microk8s/578/configs/kubelet.config \
--cert-dir=/var/snap/microk8s/578/certs \
--client-ca-file=/var/snap/microk8s/578/certs/ca.crt \
--anonymous-auth=false --network-plugin=kubenet \
--root-dir=/var/snap/microk8s/common/var/lib/kubelet \
--fail-swap-on=false --pod-cidr=10.1.1.0/24 \
--container-runtime=remote \
--container-runtime-endpoint=/var/snap/microk8s/common/run/containerd.sock \
--node-labels=microk8s.io/cluster=true --eviction-hard='memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi'

That’s it! Your kubelet now runs in place of the one in MicroK8s! You have to admit it is as simple as it gets.

What you should be aware of is that some microk8s commands will restart services through systemd. For example, microk8s.enable dns will initiate a service restart, including the kubelet shipped with MicroK8s.
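
So if you enable an addon while your own build is running, you may need to repeat the swap; a minimal sketch:

# Enabling an addon may restart the bundled kubelet through systemd...
microk8s.enable dns
# ...so stop it again before relaunching your own kubelet build:
sudo systemctl stop snap.microk8s.daemon-kubelet.service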

Happy coding!

Kubernetes pre-stable releases now available with MicroK8s was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
jdstrand

For testing, it is useful to work with official cloud images as local VMs. Eg, when I work on snapd, I like to have different images available to work with its spread tests.

The autopkgtest package makes working with Ubuntu images quite easy:

$ sudo apt-get install qemu-kvm autopkgtest
$ autopkgtest-buildvm-ubuntu-cloud -r bionic # -a i386
Downloading https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img...
# and to integrate into spread
$ mkdir -p ~/.spread/qemu
$ mv ./autopkgtest-bionic-amd64.img ~/.spread/qemu/ubuntu-18.04-64.img
# now can run any test from 'spread -list' starting with
# 'qemu:ubuntu-18.04-64:'

This post isn’t really about autopkgtest, snapd or spread specifically though….

I found myself wanting an official Debian unstable cloud image so I could use it in spread while testing snapd. I learned it is easy enough to create the images yourself but then I found that Debian started providing raw and qcow2 cloud images for use in OpenStack and so I started exploring how to use them and generalize how to use arbitrary cloud images.

General procedure

The basic steps are:

  1. obtain a cloud image
  2. make copy of the cloud image for safekeeping
  3. resize the copy
  4. create a seed.img with cloud-init to set the username/password
  5. boot with networking and the seed file
  6. login, update, etc
  7. cleanly shutdown
  8. use normally (ie, without seed file)

In this case, I grabbed the ‘debian-testing-openstack-amd64.qcow2’ image from http://cdimage.debian.org/cdimage/openstack/testing/ and verified it. Since this is based on Debian ‘testing’ (current stable images are also available), when I copied it I named it accordingly. Eg, I knew for spread it needed to be ‘debian-sid-64.img’ so I did:

$ cp ./debian-testing-openstack-amd64.qcow2 ./debian-sid-64.img
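
For the verification step mentioned above, here is a hedged sketch; the checksum file names (SHA512SUMS and SHA512SUMS.sign) are an assumption about what Debian publishes next to the image, so adjust as needed:

# fetch the checksum list and its signature published alongside the image
$ wget http://cdimage.debian.org/cdimage/openstack/testing/SHA512SUMS
$ wget http://cdimage.debian.org/cdimage/openstack/testing/SHA512SUMS.sign
# verify the signature (requires the Debian CD signing key in your keyring),
# then check the downloaded image against the list
$ gpg --verify SHA512SUMS.sign SHA512SUMS
$ sha512sum --check --ignore-missing SHA512SUMS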

I then resized it. I picked 20G since I recalled that is what autopkgtest uses:

$ qemu-img resize debian-sid-64.img 20G

These are already setup for cloud-init, so I created a cloud-init data file (note, the ‘#cloud-config’ comment at the top is important):

$ cat ./debian-data
#cloud-config
password: debian
chpasswd: { expire: false }
ssh_pwauth: true

and a cloud-init meta-data file:

$ cat ./debian-meta-data
instance-id: i-debian-sid-64
local-hostname: debian-sid-64

and fed that into cloud-localds to create a seed file:

$ cloud-localds -v ./debian-seed.img ./debian-data ./debian-meta-data

Then start the image with:

$ kvm -M pc -m 1024 -smp 1 -monitor pty -nographic -hda ./debian-sid-64.img -drive "file=./debian-seed.img,if=virtio,format=raw" -net nic -net user,hostfwd=tcp:127.0.0.1:59355-:22

(I’m using the invocation that is reminiscent of how spread invokes it; feel free to use a virtio invocation as described by Scott Moser if that better suits your environment.)

Here, the “59355” can be any unused high port. The idea is after the image boots, you can login with ssh using:

$ ssh -p 59355 debian@127.0.0.1

Once logged in, perform any updates, etc that you want in place when tests are run, then disable cloud-init for the next boot and cleanly shutdown with:

$ sudo touch /etc/cloud/cloud-init.disabled
$ sudo shutdown -h now

The above is the generalized procedure which can hopefully be adapted for other distros that provide cloud images, etc.

For integrating into spread, just copy the image to ‘~/.spread/qemu’, naming it how spread expects. spread will use ‘-snapshot’ with the VM as part of its tests, so if you want to update the images later since they might be out of date, omit the seed file (and optionally ‘-net nic -net user,hostfwd=tcp:127.0.0.1:59355-:22’ if you don’t need port forwarding), and use:

$ kvm -M pc -m 1024 -smp 1 -monitor pty -nographic -hda ./debian-sid-64.img

UPDATE 2019-04-23: the above is confirmed to work with Fedora 28 and 29 (though, if using the resulting image to test snapd, be sure to configure the password as ‘fedora’ and then be sure to ‘yum update ; yum install kernel-modules nc strace’ in the image).

UPDATE 2019-04-22: the above is confirmed to work with CentOS 7 (though, if using the resulting image to test snapd, be sure to configure the password as ‘centos’ and then be sure to ‘yum update ; yum install epel-release ; yum install golang nc strace’ in the image).

Extra steps for Debian cloud images without default e1000 networking

Unfortunately, for the Debian cloud images, there were additional steps, because spread doesn’t use virtio but instead the default e1000 driver, and the Debian cloud kernel doesn’t include it:

$ grep E1000 /boot/config-4.19.0-4-cloud-amd64
# CONFIG_E1000 is not set
# CONFIG_E1000E is not set

So… when the machine booted, there was no networking. To adjust for this, I blew away the image, copied from the safely kept downloaded image, resized then started it with:

$ kvm -M pc -m 1024 -smp 1 -monitor pty -nographic -hda $HOME/.spread/qemu/debian-sid-64.img -drive "file=$HOME/.spread/qemu/debian-seed.img,if=virtio,format=raw" -device virtio-net-pci,netdev=eth0 -netdev type=user,id=eth0

This allowed the VM to start with networking, at which point I adjusted /etc/apt/sources.list to refer to ‘sid’ instead of ‘buster’ then ran apt-get update then apt-get dist-upgrade to upgrade to sid. I then installed the Debian distro kernel with:

$ sudo apt-get install linux-image-amd64

Then uninstalled the currently running kernel with:

$ sudo apt-get remove --purge linux-image-cloud-amd64 linux-image-4.19.0-4-cloud-amd64

(I used ‘dpkg -l | grep linux-image’ to see the cloud kernels I wanted to remove). Removing the package that provides the currently running kernel is a dangerous operation for most systems, so there is a scary message to abort the operation. In our case, it isn’t so scary (we can just try again ;) and this is exactly what we want to do.

Next I cleanly shutdown the VM with:

$ sudo shutdown -h now

and try to start it again like with the ‘general procedures’, above (I’m keeping the seed file here because I want cloud-init to be re-run with the e1000 driver):

$ kvm -M pc -m 1024 -smp 1 -monitor pty -nographic -hda ./debian-sid-64.img -drive "file=./debian-seed.img,if=virtio,format=raw" -net nic -net user,hostfwd=tcp:127.0.0.1:59355-:22

Now I try to login via ssh:
$ ssh -p 59355 debian@127.0.0.1
...
debian@127.0.0.1's password:
...
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Apr 16 16:13:15 2019
debian@debian:~$ sudo touch /etc/cloud/cloud-init.disabled
debian@debian:~$ sudo shutdown -h now
Connection to 127.0.0.1 closed.

While this VM is no longer the official cloud image, it is still using the Debian distro kernel and Debian archive, which is good enough for my purposes and at this point I’m ready to use this VM in my testing (eg, for me, copy ‘debian-sid-64.img’ to ‘~/.spread/qemu’).

Read more
James Henstridge

Last week I gave a talk at Perth Linux Users Group about building IoT projects using Ubuntu Core and Snapcraft. The video is now available online. Unfortunately there were some problems with the audio setup leading to some background noise in the video, but it is still intelligible:

The slides used in the talk can be found here.

The talk was focused on how Ubuntu Core could be used to help with the ongoing security and maintenance of IoT projects. While it might be easy to buy a Raspberry Pi, install Linux and your application, how do you make sure the device remains up to date with security updates? How do you push out updates to your application in a reliable fashion?

I outlined a way of deploying a project using Ubuntu Core, including:

  1. Packaging a simple web server app using the snapcraft tool (see the sketch after this list).
  2. Configuring automatic builds from git, published to the edge channel on the Snap Store. This is also an easy way to get ARM builds for a package, rather than trying to deal with cross compilation tool chains.
  3. Using the ubuntu-image command to create an Ubuntu Core image with the application preinstalled.
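
A minimal local sketch of the first step (the snap name here is hypothetical): build the snap in a project that already contains a snapcraft.yaml, then side-load it for a quick test before wiring up store builds:

# Build the snap from the project directory:
snapcraft
# Side-load the result locally to try it out before publishing:
sudo snap install --dangerous ./webserver-demo_0.1_amd64.snap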

I gave a demo booting such an image in a virtual machine. This showed the application up and running ready to use. I also demonstrated how promoting a build from the edge channel to stable on the store would make it available to the system wide automatic update mechanism on the device.

Read more
K. Tsakalozos

We have been quiet for a few months just because we have been busy. We were working mainly on two features that we intend to ship in the v1.14 release: the transition to containerd and the hardening of MicroK8s security.

The entailed changes will affect the backwards compatibility and user experience of MicroK8s, which is why we are timing them with the upcoming upstream Kubernetes release. Here we will provide a) a short description of these features, b) a way for you to test drive the new MicroK8s, and c) the steps for holding back on this release in case it is a major showstopper for you.

The transition to Containerd

We are replacing dockerd with containerd mainly for two reasons.

  • The setup of having two dockerd daemons on the same host has proven problematic. MicroK8s brings its own dockerd, which may clash with a local dockerd users may want to have. By moving to containerd, users can apt-get install docker.io without affecting MicroK8s (see the short sketch after this list). This switch also means that microk8s.docker will not be available anymore; you will have to use the docker client shipped with your distribution.
  • Performance. Benchmarks show a performance benefit from using containerd. This should not be a surprise since dockerd itself uses containerd internally. With the switch to containerd we are essentially removing a layer that is docker-specific.
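
As a small illustration of the first point above (the image name is just a placeholder):

# With containerd inside MicroK8s, the distro docker can coexist on the same host:
sudo apt-get install docker.io
# Build images with the distro docker client instead of microk8s.docker:
docker build -t myapp:dev .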

Hardening MicroK8s security

MicroK8s is a developer’s tool. It is not meant to be deployed in production or in hostile environments. Having said that, we have tried to make MicroK8s more secure by:

  • Exposing as few services as we can, and restricting access to the ones we leave open.
  • A CA and certificates are created once at deployment time.

Test drive the upcoming patches

We have prepared a temporary branch you could use to evaluate the above changes:

snap install microk8s --classic --channel=1.13/edge/secure-containerd

If you have MicroK8s already installed you can switch the channel your MicroK8s is following:

snap refresh --channel=1.13/edge/secure-containerd microk8s

Try it out and let us know if we missed anything.

“Thanks, I’ll pass”

All release series up until now will not be affected by this change. This means you can have your MicroK8s deployment follow the 1.13 track:

snap refresh --channel=1.13/stable microk8s

Summing up

An important update is coming. Make sure you give it a try with:

snap install microk8s --classic --channel=1.13/edge/secure-containerd

If you do not like what you see tell us what breaks by filing an issue and keep using the 1.13 track.

Containerd on a more secure MicroK8s was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

MicroK8s in the Wild

As the popularity of MicroK8s grows I would like to take the time to mention some projects that use this micro Kubernetes distribution. But before that, let me do some introductions. For those unfamiliar with it, Kubernetes is an open source container orchestrator: it takes care of deploying, upgrading, and provisioning your applications. This is one of the rare occasions where all the major players (Google, Microsoft, IBM, Amazon etc.) have flocked around a single framework, making it an unofficial standard.

MicroK8s is a distribution of Kubernetes. It is a snap package that sets up a Kubernetes cluster on your machine. You can have a Kubernetes cluster for local development, CI/CD or just for getting to know Kubernetes with just a:

sudo snap install microk8s --classic

If you are on a Mac or Windows you will need a Linux VM.

In what follows you will find some examples of how people are using MicroK8s. Note that this is not a complete list of MicroK8s usages; it is just some efforts I happen to be aware of.

Spring Cloud Kubernetes

This project is using CircleCI for CI/CD. MicroK8s provides a local Kubernetes cluster where integration tests are run. The addons enabled are dns, the docker registry and Istio. The integration tests need to plug into the Kubernetes cluster using the kubeconfig file and the socket to dockerd. This work was introduced in this Pull Request (thanks George) and it gave us the incentive to add a microk8s.status command that would wait for the cluster to come online. For example we can wait up to 5 minutes for MicroK8s to come up with:

microk8s.status --wait-ready --timeout=300
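
In a CI job the same pattern typically looks something like this (a sketch; the addons listed are just the ones mentioned in this post):

# Install MicroK8s, wait for the cluster to come online, then enable addons:
sudo snap install microk8s --classic
microk8s.status --wait-ready --timeout=300
microk8s.enable dns registry istio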

OpenFaaS on MicroK8s

It was this year’s Config Management Camp where I met Joe McCobe the author of “Deploy OpenFaaS with MicroK8s”. I will just repeat his words “was blown away by the speed and ease with which I could get a basic lab environment up and running”.

What about Kubeless?

It seems the ease of deploying MicroK8s goes well with the ease of software development of serverless frameworks. Users of Kubeless are also kicking the tires on MicroK8s. Have a look at “Files upload from Kubeless on MicroK8s to Minio” and “Serverless MicroK8s Kubernetes.”

SUSE Cloud Application Platform (CAP) on Microk8s

In his blog post Dimitris describes in detail all the configuration he had to do to get the software from SUSE to run on MicroK8s. The most interesting part is the motivation behind this effort. As he says “… MicroK8s… use your machine’s resources without you having to decide on a VM size beforehand.” As he explained to me his application puts significant memory pressure only during bootstrap. MicroK8s enabled him to reclaim the unused memory after the initialization phase.

Kubeflow

Kubeflow is the missing link between Kubernetes and AI/ML. Canonical is actively involved in this so…. you should definitely check it out. Sure, I am biased but let me tell you a true story. I have a friend who was given three machines to deploy Tensorflow and run some experiments. She did not have any prior experience at the time so… none of the three nodes was set up in exactly the same way. There was always something off. This head-scratching situation is just one reason to use Kubeflow.

Transcrobes

Transcrobes comes from an active member of the MicroK8s community. It serves as a language learning aid. “The system knows what you know, so can give you just the right amount of help to be able to understand the words you don’t know but gets out of the way for the stuff you do know.” Here MicroK8s is used for quick prototyping. We wish you all the best Anton, good luck!

Summing Up

We have seen a number of interesting use cases that include CI/CD, Serverless programming, lab setup, rapid prototyping and application development. If you have a MicroK8s use case do let us know. Come and say hi at #microk8s on the Kubernetes slack and/or issue a Pull Request against our MicroK8s In The Wild page.

MicroK8s in the Wild was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
jdstrand

Some time ago we started alerting publishers when their stage-packages received a security update since the last time they built a snap. We wanted to create the right balance for the alerts and so the service currently will only alert you when there are new security updates against your stage-packages. In this manner, you can choose not to rebuild your snap (eg, since it doesn’t use the affected functionality of the vulnerable package) and not be nagged every day that you are out of date.

As nice as that is, sometimes you want to check these things yourself or perhaps hook the alerts into some form of automation or tool. While the review-tools had all of the pieces so you could do this, it wasn’t as straightforward as it could be. Now with the latest stable revision of the review-tools, this is easy:

$ sudo snap install review-tools
$ review-tools.check-notices \
  ~/snap/review-tools/common/review-tools_656.snap
{'review-tools': {'656': {'libapt-inst2.0': ['3863-1'],
                          'libapt-pkg5.0': ['3863-1'],
                          'libssl1.0.0': ['3840-1'],
                          'openssl': ['3840-1'],
                          'python3-lxml': ['3841-1']}}}

The review-tools is a strict-mode snap, and while it plugs the home interface, that is only for convenience; I typically disconnect the interface and put things in its SNAP_USER_COMMON directory, like I did above.

Now that it is super easy to check a snap on disk, with a little scripting and a cron job you can generate a machine-readable report whenever you want. Eg, you can do something like the following:

$ cat ~/bin/check-snaps
#!/bin/sh
set -e

snaps="review-tools/stable rsync-jdstrand/edge"

tmpdir=$(mktemp -d -p "$HOME/snap/review-tools/common")
cleanup() {
    rm -fr "$tmpdir"
}
trap cleanup EXIT HUP INT QUIT TERM

cd "$tmpdir" || exit 1
for i in $snaps ; do
    snap=$(echo "$i" | cut -d '/' -f 1)
    channel=$(echo "$i" | cut -d '/' -f 2)
    snap download "$snap" "--$channel" >/dev/null
done
cd - >/dev/null || exit 1

/snap/bin/review-tools.check-notices "$tmpdir"/*.snap

or if you already have the snaps on disk somewhere, just do:

$ /snap/bin/review-tools.check-notices /path/to/snaps/*.snap

Now you can add the above to cron or some automation tool as a reminder of what needs updates. Enjoy!
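
For example, a crontab entry (hypothetical schedule) that runs the script every morning and mails you the report:

# edit the crontab with `crontab -e` and add something like:
# m h dom mon dow command
30 6 * * * $HOME/bin/check-snaps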

Read more
K. Tsakalozos

MicroK8s is a local deployment of Kubernetes. Let’s skip all the technical details and just accept that Kubernetes does not run natively on MacOS or Windows. You may be thinking “I have seen Kubernetes running on a MacOS laptop, what kind of sorcery was that?” It’s simple: Kubernetes is running inside a VM. You might not see the VM, or it might not even be a full-blown virtual system, but some level of virtualisation is there. This is exactly what we will show here. We will set up a VM and install MicroK8s inside it. After the installation we will discuss how to use the in-VM-Kubernetes.

A multipass VM on MacOS

Arguably the easiest way to get an Ubuntu VM on MacOS is with multipass. Head to the releases page and grab the latest package. Installing it is as simple as double-clicking on the .pkg file.

To start a VM with MicroK8s we:

multipass launch --name microk8s-vm --mem 4G --disk 40G
multipass exec microk8s-vm -- sudo snap install microk8s --classic
multipass exec microk8s-vm -- sudo iptables -P FORWARD ACCEPT

Make sure you reserve enough resources to host your deployments; above, we got 4GB of RAM and 40GB of hard disk. We also make sure packets to/from the pod network interface can be forwarded to/from the default interface.

Our VM has an IP that you can check with:

> multipass list
Name State IPv4 Release
microk8s-vm RUNNING 10.72.145.216 Ubuntu 18.04 LTS

Take a note of this IP since our services will become available there.

Other multipass commands you may find handy:

  • Get a shell inside the VM:
multipass shell microk8s-vm
  • Shutdown the VM:
multipass stop microk8s-vm
  • Delete and cleanup the VM:
multipass delete microk8s-vm 
multipass purge

Using MicroK8s

To run a command in the VM we can get a multipass shell with:

multipass shell microk8s-vm

To execute a command without getting a shell we can use multipass exec like so:

multipass exec microk8s-vm -- /snap/bin/microk8s.status

A third way to interact with MicroK8s is via the Kubernetes API server listening on port 8080 of the VM. We can use microk8s’ kubeconfig file with a local installation of kubectl to access the in-VM-kubernetes. Here is how:

multipass exec microk8s-vm -- /snap/bin/microk8s.config > kubeconfig

Install kubectl on the host machine and then use the kubeconfig:

kubectl --kubeconfig=kubeconfig get all --all-namespaces
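
If you do not have kubectl on the host yet, one hedged option on MacOS is Homebrew (any other installation method works just as well):

# Install the Kubernetes CLI on the host:
brew install kubernetes-cli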

Accessing in-VM services — Enabling addons

Let’s first enable dns and the dashboard. In the rest of this blog we will be showing different methods of accessing Grafana:

multipass exec microk8s-vm -- /snap/bin/microk8s.enable dns dashboard

We check the deployment progress with:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get all --all-namespaces

After all services are running we can proceed to look at how to access the dashboard.

The Grafana of our dashboard

Accessing in-VM services — Use the Kubernetes API proxy

The API server is on port 8080 of our VM. Let’s see what the proxy path looks like:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl cluster-info
...
Grafana is running at http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
...

By replacing 127.0.0.1 with the VM’s IP, 10.72.145.216 in this case, we can reach our service at:

http://10.72.145.216:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy

Accessing in-VM services — Setup a proxy

In a very similar fashion to what we just did above, we can ask Kubernetes to create a proxy for us. We need to request the proxy to be available to all interfaces and to accept connections from everywhere so that the host can reach it.

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl proxy --address='0.0.0.0' --accept-hosts='.*'
Starting to serve on [::]:8001

Leave the terminal with the proxy open. Again, replacing 127.0.0.1 with the VM’s IP, and using the proxy’s port 8001, we reach the dashboard through:

http://10.72.145.216:8001/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy

Make sure you go through the official docs on constructing the proxy paths.

Accessing in-VM services — Use a NodePort service

We can expose our service on a port of the VM and access it from there. This approach uses the NodePort service type. We start by spotting the deployment we want to expose:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get deployment -n kube-system  | grep grafana
monitoring-influxdb-grafana-v4 1 1 1 1 22h

Then we create the NodePort service:

multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl expose deployment.apps/monitoring-influxdb-grafana-v4 -n kube-system --type=NodePort

We have now a port for the Grafana service:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get services -n kube-system  | grep NodePort
monitoring-influxdb-grafana-v4 NodePort 10.152.183.188 <none> 8083:32580/TCP,8086:32152/TCP,3000:32720/TCP 13m

Grafana is on port 3000, mapped here to 32720. This port is randomly selected so it may vary for you. In our case, the service is available on 10.72.145.216:32720.
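
A quick way to confirm something is answering on that NodePort from the host (using the example IP and port above):

# Expect an HTTP response from Grafana:
curl -I http://10.72.145.216:32720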

Conclusions

MicroK8s on MacOS (or Windows) will need a VM to run. This is no different than any other local Kubernetes solution, and it comes with some nice benefits. The VM gives you an extra layer of isolation. Instead of using your host and potentially exposing the Kubernetes services to the outside world, you have full control over what others can see. Of course, this isolation comes with some extra administrative overhead that may not be worthwhile for a dev environment. Give it a try and tell us what you think!

Links

CanonicalLtd/multipass


MicroK8s on MacOS was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

Looking at the configuration of a Kubernetes node sounds like a simple thing, yet it is not so obvious.

The arguments kubelet takes come either as command line parameters or from a configuration file you pass with --config. It seems straightforward to do a ps -ex | grep kubelet and look in the file you see after the --config parameter. Simple, right? But… are you sure you got all the arguments right? What if Kubernetes defaulted to a value you did not want? What if you do not have shell access to a node?

There is a way to query the Kubernetes API for the configuration a node is running with: api/v1/nodes/<node_name>/proxy/configz. Let’s see this in a real deployment.

Deploy a Kubernetes Cluster

I am using the Canonical Distribution of Kubernetes (CDK) on AWS here but you can use whichever cloud and Kubernetes installation method you like.

juju bootstrap aws
juju deploy canonical-kubernetes

..and wait for the deployment to finish

watch juju status 

Change a Configuration

CDK allows for configuring both the command line arguments and the extra arguments of the config file. Here we add arguments to the config file:

juju config kubernetes-worker kubelet-extra-config='{imageGCHighThresholdPercent: 60, imageGCLowThresholdPercent: 39}'
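
You can read the setting back from the charm to confirm it was applied; juju config with only the key prints the current value:

juju config kubernetes-worker kubelet-extra-config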

A great question is how we got the imageGCHighThresholdPercent literal. At the time of this writing the official upstream docs point you to the type definitions; a rather ugly approach. There is an EvictionHard property in the type definitions; however, if you look at the example in the upstream docs you see the same property written in lowercase (evictionHard).

Check the Configuration

We will need two shells. On the first one we will start the API proxy and on the second we will query the API. On the first shell:

juju ssh kubernetes-master/0
kubectl proxy

Now that we have the proxy at 127.0.0.1:8001 on the kubernetes-master, use a second shell to get a node name and query the API:

juju ssh kubernetes-master/0
kubectl get no
curl -sSL "http://localhost:8001/api/v1/nodes/<node_name>/proxy/configz" | python3 -m json.tool

Here is a full run:

juju ssh kubernetes-master/0
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1023-aws x86_64)
* Documentation:  https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Mon Oct 22 10:40:40 UTC 2018
System load:  0.11               Processes:              115
Usage of /: 13.7% of 15.45GB Users logged in: 1
Memory usage: 20% IP address for ens5: 172.31.0.48
Swap usage: 0% IP address for fan-252: 252.0.48.1
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.
Last login: Mon Oct 22 10:38:14 2018 from 2.86.54.15
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@ip-172-31-0-48:~$ kubectl get no
NAME STATUS ROLES AGE VERSION
ip-172-31-14-174 Ready <none> 41m v1.12.1
ip-172-31-24-80 Ready <none> 41m v1.12.1
ip-172-31-63-34 Ready <none> 41m v1.12.1
ubuntu@ip-172-31-0-48:~$ curl -sSL "http://localhost:8001/api/v1/nodes/ip-172-31-14-174/proxy/configz" | python3 -m json.tool
{
    "kubeletconfig": {
        "syncFrequency": "1m0s",
        "fileCheckFrequency": "20s",
        "httpCheckFrequency": "20s",
        "address": "0.0.0.0",
        "port": 10250,
        "tlsCertFile": "/root/cdk/server.crt",
        "tlsPrivateKeyFile": "/root/cdk/server.key",
        "authentication": {
            "x509": {
                "clientCAFile": "/root/cdk/ca.crt"
            },
            "webhook": {
                "enabled": true,
                "cacheTTL": "2m0s"
            },
            "anonymous": {
                "enabled": false
            }
        },
        "authorization": {
            "mode": "Webhook",
            "webhook": {
                "cacheAuthorizedTTL": "5m0s",
                "cacheUnauthorizedTTL": "30s"
            }
        },
        "registryPullQPS": 5,
        "registryBurst": 10,
        "eventRecordQPS": 5,
        "eventBurst": 10,
        "enableDebuggingHandlers": true,
        "healthzPort": 10248,
        "healthzBindAddress": "127.0.0.1",
        "oomScoreAdj": -999,
        "clusterDomain": "cluster.local",
        "clusterDNS": [
            "10.152.183.93"
        ],
        "streamingConnectionIdleTimeout": "4h0m0s",
        "nodeStatusUpdateFrequency": "10s",
        "nodeLeaseDurationSeconds": 40,
        "imageMinimumGCAge": "2m0s",
        "imageGCHighThresholdPercent": 60,
        "imageGCLowThresholdPercent": 39,
        "volumeStatsAggPeriod": "1m0s",
        "cgroupsPerQOS": true,
        "cgroupDriver": "cgroupfs",
        "cpuManagerPolicy": "none",
        "cpuManagerReconcilePeriod": "10s",
        "runtimeRequestTimeout": "2m0s",
        "hairpinMode": "promiscuous-bridge",
        "maxPods": 110,
        "podPidsLimit": -1,
        "resolvConf": "/run/systemd/resolve/resolv.conf",
        "cpuCFSQuota": true,
        "cpuCFSQuotaPeriod": "100ms",
        "maxOpenFiles": 1000000,
        "contentType": "application/vnd.kubernetes.protobuf",
        "kubeAPIQPS": 5,
        "kubeAPIBurst": 10,
        "serializeImagePulls": true,
        "evictionHard": {
            "imagefs.available": "15%",
            "memory.available": "100Mi",
            "nodefs.available": "10%",
            "nodefs.inodesFree": "5%"
        },
        "evictionPressureTransitionPeriod": "5m0s",
        "enableControllerAttachDetach": true,
        "makeIPTablesUtilChains": true,
        "iptablesMasqueradeBit": 14,
        "iptablesDropBit": 15,
        "failSwapOn": false,
        "containerLogMaxSize": "10Mi",
        "containerLogMaxFiles": 5,
        "configMapAndSecretChangeDetectionStrategy": "Watch",
        "enforceNodeAllocatable": [
            "pods"
        ]
    }
}
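
If you only care about a couple of fields you can filter the output instead of pretty-printing all of it; a small sketch, assuming jq is installed on the master:

# Pull just the image GC thresholds out of the node's running configuration:
curl -sSL "http://localhost:8001/api/v1/nodes/ip-172-31-14-174/proxy/configz" \
    | jq '.kubeletconfig | {imageGCHighThresholdPercent, imageGCLowThresholdPercent}'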

Summing Up

There is a way to get the configuration of an online Kubernetes node through the Kubernetes API (api/v1/nodes/<node_name>/proxy/configz). This might be handy if you want to code against Kubernetes or you do not want to get into the intricacies of your particular cluster setup.

How to Inspect the Configuration of a Kubernetes Node was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
albertomilone@gmail.com

Ubuntu 18.04 marked the transition to a new, more granular, packaging of the NVIDIA drivers, which, unfortunately, combined with a change in logind, and with the previous migration from Lightdm to Gdm3, caused (Intel+NVIDIA) hybrid laptops to stop working the way they used to in Ubuntu 16.xx and older.

The following are the main issues experienced by our users:

  • An increase in power consumption when using the power saving profile (i.e. when the discrete GPU is off).
  • The inability to switch between power profiles on log out (thus requiring a reboot).

We have backported a commit to solve the problem with logind, and I have worked on a few changes in gpu-manager, and in the other key components, to improve the experience when using Gdm3.

NOTE: fixes for Lightdm, and for SDDM still need some work, and will be made available in the next update.

Both issues should be fixed in Ubuntu 18.10, and I have backported my work to Ubuntu 18.04, which is now available for testing.

If you run Ubuntu 18.04 and own a hybrid laptop with an Intel and an NVIDIA GPU (supported by the 390 NVIDIA driver), we would love to get your feedback on the updates in Ubuntu 18.04.

If you are interested, head over to the bug report, follow the instructions at the end of the bug description, and let us know about your experience.

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.5.0 beta 1 has been released. The beta 1 now features

  • Complete proxying of machine communication through the rack controller. This includes DNS, HTTP to the metadata server, proxying with Squid and, new in 2.5.0 beta 1, syslog.
  • CentOS 7 & RHEL 7 storage support (Requires a new Curtin version available in PPA).
  • Full networking for KVM pods.
  • ESXi network configuration

For more information, please refer to MAAS Discourse [1].

[1]: https://discourse.maas.io/t/maas-2-5-0-beta-1-released/174

Read more
K. Tsakalozos

A friend once asked, why would one prefer microk8s over minikube?… We never spoke since. True story!

That was a hard question, especially for an engineer. The answer is not so obvious largely because it has to do with personal preferences. Let me show you why.

Microk8s-wise this is what you have to do to have a local Kubernetes cluster with a registry:

sudo snap install microk8s --edge --classic
microk8s.enable registry

How is this great?

  • It is super fast! A couple of hundreds of MB over the internet tubes and you are all set.
  • You skip the pain of going through the docs for setting up and configuring Kubernetes with persistent storage and the registry.

So why is this bad?

  • As a Kubernetes engineer you may want to know what happens under the hood. What got deployed? What images? Where?
  • As a Kubernetes user you may want to configure the registry. Where are the images stored? Can you change any access credentials?

Do you see why this is a matter of preference? Minikube is a mature solution for setting up a Kubernetes in a VM. It runs everywhere (even on windows) and it does only one thing, sets up a Kubernetes cluster.

On the other hand, microk8s offers Kubernetes as an application. It is opinionated and it takes a step towards automating common development workflows. Speaking of development workflows...

The full story with the registry

The registry shipped with microk8s is available on port 32000 of the localhost. It is an insecure registry because, let’s be honest, who cares about security when doing local development :) .

And it’s getting better, check this out! The docker daemon used by microk8s is configured to trust this insecure registry. It is this daemon we talk to when we want to upload images. The easiest way to do so is by using the microk8s.docker command coming with microk8s:

# Let’s get a Dockerfile first
wget https://raw.githubusercontent.com/nginxinc/docker-nginx/ddbbbdf9c410d105f82aa1b4dbf05c0021c84fd6/mainline/stretch/Dockerfile
# And build and push it
microk8s.docker build -t localhost:32000/nginx:testlocal .
microk8s.docker push localhost:32000/nginx:testlocal

If you prefer to use an external docker client you should point it to the socket dockerd is listening on:

docker -H unix:///var/snap/microk8s/docker.sock ps

To use an image from the local registry just reference it in your manifests:

apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  namespace: default
spec:
  containers:
  - name: nginx
    image: localhost:32000/nginx:testlocal
  restartPolicy: Always

And deploy with:

microk8s.kubectl create -f the-above-awesome-manifest.yaml
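
To confirm the pod came up and is indeed using the image from the local registry, a quick check:

# The pod should reach Running and its image should point at localhost:32000:
microk8s.kubectl get pod my-nginx
microk8s.kubectl describe pod my-nginx | grep Image: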

Microk8s and registry

What to keep from this post?

You want Kubernetes? We deliver it as a (sn)app!

You want to see your tool-chain in microk8s? Drop us a line. Send us a PR!

We are pleased to see happy Kubernauts!

Those of you who are here for the gossip. He was not that good of a friend (obviously!). We only met in a meetup :) !

Microk8s Docker Registry was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
admin

Hello MAASTers

MAAS 2.4.1 has now been released and it is a bug fix release. Please see more details in discourse.maas.io [1].

[1]: https://discourse.maas.io/t/maas-2-4-1-released/148

Read more
admin

Hello MAASters!

I’m happy to announce that the current MAAS development release (2.5.0 alpha 1) is now officially available in PPA for early testers.
What’s new?

Most notable MAAS 2.5.0 alpha 1 changes include:

  • Proxying the communication through rack controllers
  • HA improvements for better Rack-to-Region communication and discovery
  • Adding new machines with IPMI credentials or non-PXE IP address
  • Commissioning during enlistment

For more details, please refer to the release notes available in discourse [1].

Where to get it?

MAAS 2.5.0a1 is currently available for Ubuntu Bionic in ppa:maas/next.

sudo add-apt-repository ppa:maas/next
sudo apt-get update
sudo apt-get install maas

[1]: https://discourse.maas.io/t/maas-2-5-0-alpha-1/106

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 (final) is now available!
This new MAAS release introduces a set of exciting features and improvements to the performance, stability and usability of MAAS.

MAAS 2.4.0 will be immediately available in the PPA, but it is in the process of being SRU’d into Ubuntu Bionic.

PPA’s Availability

MAAS 2.4.0 is currently available for Ubuntu Bionic in ppa:maas/stable for the coming week.

sudo add-apt-repository ppa:maas/stable
sudo apt-get update
sudo apt-get install maas

What’s new?

Most notable MAAS 2.4.0 changes include:

  • Performance improvements across the backend & UI.
  • KVM pod support for storage pools (over API).
  • DNS UI to manage resource records.
  • Audit Logging
  • Machine locking
  • Expanded commissioning script support for firmware upgrades & HBA changes.
  • NTP services now provided with Chrony.

For the full list of features & changes, please refer to the release notes.

Read more
James Henstridge

While working on a feature for snapd, we had a need to perform a “secure bind mount”. In this context, “secure” meant:

  1. The source and/or target of the mount is owned by a less privileged user.
  2. User processes will continue to run while we’re performing the mount (so solutions that involve suspending all user processes are out).
  3. While we can’t prevent the user from moving the mount point, they should not be able to trick us into mounting to locations they don’t control (e.g. by replacing the path with a symbolic link).

The main problem is that the mount system call uses string path names to identify the mount source and target. While we can perform checks on the paths before the mounts, we have no way to guarantee that the paths don’t point to another location when we move on to the mount() system call: a classic time of check to time of use race condition.

One suggestion was to modify the kernel to add a MS_NOFOLLOW flag to prevent symbolic link attacks. This turns out to be harder than it would appear, since the kernel is documented as ignoring any flags other than MS_BIND and MS_REC when performing a bind mount. So even if a patched kernel also recognised the MS_NOFOLLOW, there would be no way to distinguish its behaviour from an unpatched kernel. Fixing this properly would probably require a new system call, which is a rabbit hole I don’t want to dive down.

So what can we do using the tools the kernel gives us? The common way to reuse a reference to a file between system calls is the file descriptor. We can securely open a file descriptor for a path using the following algorithm:

  1. Break the path into segments, and check that none are empty, ".", or "..".
  2. Open the root directory with open("/", O_PATH|O_DIRECTORY).
  3. Open the first segment with openat(parent_fd, "segment", O_PATH|O_NOFOLLOW|O_DIRECTORY).
  4. Repeat for each of the remaining file descriptors, closing parent descriptors as needed.

Now we just need to find a way to use these file descriptors with the mount system call. I came up with two strategies to achieve this.

Use the current working directory

The first idea I tried was to make use of the fact that the mount system call accepts relative paths. We can use the fchdir system call to change to a directory identified by a file descriptor, and then refer to it as ".". Putting those together, we can perform a secure bind mount as a multi step process:

  1. fchdir to the mount source directory.
  2. Perform a bind mount from "." to a private stash directory.
  3. fchdir to the mount target directory.
  4. Perform a bind mount from the private stash directory to ".".
  5. Unmount the private stash directory.

While this works, it has a few downsides. It requires a third intermediate location to stash the mount. It could interfere with anything else that relies on the working directory. It also only works for directory bind mounts, since you can’t fchdir to a regular file.

Faced with these downsides, I started thinking about whether there was any simpler options available.

Use magic /proc symbolic links

For every open file descriptor in a process, there is a corresponding file in /proc/self/fd/. These files appear to be symbolic links that point at the file associated with the descriptor. So what if we pass these /proc/self/fd/NNN paths to the mount system call?

The obvious question is that if these paths are symbolic links, is this any different than passing the real paths directly? It turns out that there is a difference, because the kernel does not resolve these symbolic links in the standard fashion. Rather than recursively resolving link targets, the kernel short circuits the process and uses the path structure associated with the file descriptor. So it will follow file moves and can even refer to paths in other mount namespaces. Furthermore the kernel keeps track of deleted paths, so we will get a clean error if the user deletes a directory after we’ve opened it.

So in pseudo-code, the secure mount operation becomes:

source_fd = secure_open("/path/to/source", O_PATH);
target_fd = secure_open("/path/to/target", O_PATH);
mount("/proc/self/fd/${source_fd}", "/proc/self/fd/${target_fd}",
      NULL, MS_BIND, NULL);

This resolves all the problems I had with the first solution: it doesn’t alter the working directory, it can do file bind mounts, and doesn’t require a protected third location.

The only downside I’ve encountered is that if I wanted to flip the bind mount to read-only, I couldn’t use "/proc/self/fd/${target_fd}" to refer to the mount point. My best guess is that it continues to refer to the directory shadowed by the mount point, rather than the mount point itself. So it is necessary to re-open the mount point, which is another opportunity for shenanigans. One possibility would be to read /proc/self/fdinfo/NNN to determine the mount ID associated with the file descriptor and then double check it against /proc/self/mountinfo.
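
Purely as an illustration of what those two proc files contain (this is not the snapd implementation), a quick look from a shell using an arbitrary open file descriptor:

# Open a file on fd 3, read the mount ID the kernel recorded for it in fdinfo,
# then find the matching entry (first field) in the mount table.
exec 3< /etc/hostname
mnt_id=$(awk '/^mnt_id:/ {print $2}' /proc/self/fdinfo/3)
awk -v id="$mnt_id" '$1 == id' /proc/self/mountinfo
exec 3<&-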

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 beta 2 is now released and is available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 beta 2 is currently available in Bionic’s Archive or in the following PPA:
ppa:maas/next

MAAS 2.4.0 (beta2)

New Features & Improvements

MAAS Internals optimisation

Continuing with MAAS’ internal surgery, a few more improvements have been made:

  • Backend improvements

  • Improve the image download process, to ensure rack controllers immediately start image download after the region has finished downloading images.

  • Reduce the service monitor interval to 30 seconds. The monitor tracks the status of the various services provided alongside MAAS (DNS, NTP, Proxy).

  • UI Performance optimizations for machines, pods, and zones, including better filtering of node types.

KVM pod improvements

Continuing with the improvements for KVM pods, beta 2 adds the ability to:

  • Define a default storage pool

This feature allows users to select the default storage pool to use when composing machines, in case multiple pools have been defined. Otherwise, MAAS will pick the storage pool automatically, depending on which pool has the most available space.

  • API – Allow allocating machines with different storage pools

Allows users to request a machine with multiple storage devices from different storage pools. This feature uses storage tags to automatically map a storage pool in libvirt with a storage tag in MAAS.

UI Improvements

  • Remove remaining YUI in favor of AngularJS.

As of beta 2, MAAS has now fully dropped the use of YUI for the Web Interface. The last section using YUI was the Settings page and the login page. Both sections have now been transitioned to use AngularJS instead.

  • Re-organize Settings page

The MAAS settings have now been reorganized into multiple tabs.

Minor improvements

  • API for default DNS domain selection

Adds the ability to define the default DNS domain. This is currently only available via the API.

  • Vanilla framework upgrade

We would like to thank the Ubuntu web team for their hard work upgrading MAAS to the latest version of the Vanilla framework. MAAS is looking better and more consistent every day!

Bug fixes

Please refer to the following for all 37 bug fixes in this release, which address issues with MAAS across the board:

https://launchpad.net/maas/+milestone/2.4.0beta2

 

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 beta 1 and python-libmaas 0.6.0 have now been released and are available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 beta 1 is currently available in Bionic -proposed waiting to be published into Ubuntu, or in the following PPA:
ppa:maas/next

MAAS 2.4.0 (beta1)

Important announcements

Debian package maas-dns no longer needed

The Debian package ‘maas-dns’ has now been made a transitional package. This package provided some post-installation configuration to prepare bind to be managed by MAAS, but it required maas-region-api to be installed first.

In order to streamline the installation and make it easier for users to install HA environments, the configuration of bind has now been integrated into the ‘maas-region-api’ package itself, and we have made ‘maas-dns’ a dummy transitional package that can now be removed.

New Features & Improvements

MAAS Internals optimization

Major internal surgery to MAAS 2.4 continues improve various areas not visible to the user. These updates will advance the overall performance of MAAS in larger environments. These improvements include:

  • Database query optimizations

Further reductions in the number of database queries, significantly cutting the queries made by the boot source cache image import process from over 100 to just under 5.

  • UI optimizations

MAAS is being optimized to reduce the amount of data sent over the websocket API to render the UI. This is targeted at processing data only for viewable information, improving various legacy areas. Currently, the work done for this release includes:

  • Only load historic script results (e.g. old commissioning/testing results) when requested / accessed by the user, instead of always making them available over the websocket.

  • Only load node objects in listing pages when the specific object type is requested. For instance, only load machines when accessing the machines tab instead of also loading devices and controllers.

  • Change the UI mechanism to only request OS Information only on initial page load rather than every 10 seconds.

KVM pod improvements

Continuing with the improvements from alpha 2, this new release provides more updates to KVM pods:

  • Added overcommit ratios for CPU and memory.

When composing or allocating machines, previous versions of MAAS would allow the user to request as many resources as the user wanted regardless of the available resources. This created issues when dynamically allocating machines as it could allow users to create an infinite number of machines even when the physical host was already over committed. Adding this feature allows administrators to control the amount of resources they want to over commit.

  • Added ability to filter which pods or pod types to avoid when allocating machines

Provides users with the ability to select which pods or pod types not to allocate resources from. This makes it particularly useful when dynamically allocating machines when MAAS has a large number of pods.

DNS UI Improvements

MAAS 2.0 introduced the ability to manage DNS, not only to allow the creation of new domains, but also to the creation of resources records such as A, AAA, CNAME, etc. However, most of this functionality has only been available over the API, as the UI only allowed to add and remove new domains.

As of 2.4, MAAS now adds the ability to manage not only DNS domains but also the following resource records:

  • Added ability to edit domains (e.g. TTL, name, authoritative).

  • Added ability to create and delete resource records (A, AAA, CNAME, TXT, etc).

  • Added ability to edit resource records.

Navigation UI improvements

MAAS 2.4 beta 1 is changing the top-level navigation:

  • Rename ‘Zones’ for ‘AZs’

  • Add ‘Machines, Devices, Controllers’ to the top-level navigation instead of ‘Hardware’.

Minor improvements

A few notable improvements being made available in MAAS 2.4 include:

  • Add ability to force the boot type for IPMI machines.

Hardware manufactures have been upgrading their BMC firmware versions to be more compliant with the Intel IPMI 2.0 spec. Unfortunately, the IPMI 2.0 spec has made changes that provide a non-backward compatible user experience. For example, if the administrator configures their machine to always PXE boot over EFI, and the user executed an IPMI command without specifying the boot type, the machine would use the value of the configured BIOS. However, with  these new changes, the user is required to always specify a boot type, avoiding a fallback to the BIOS.

As such, MAAS now allows the selection of a boot type (auto, legacy, efi) to force the machine to always PXE with the desired type (on the next boot only) .

  • Add ability, via the API, to skip the BMC configuration on commissioning.

Provides an API option to skip the BMC auto configuration during commissioning for IPMI systems. This option helps admins keep credentials provided over the API when adding new nodes.

Bug fixes

Please refer to the following for all 32 bug fixes in this release.

https://launchpad.net/maas/+milestone/2.4.0beta1

 

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 alpha 2 has now been released and is available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 alpha 1 is available in the Bionic -proposed archive or in the following PPA:
ppa:maas/next

MAAS 2.4.0 (alpha2)

Important announcements

NTP services now provided by Chrony

Starting with 2.4 Alpha 2, and in common with changes being made to Ubuntu Server, MAAS replaces ‘ntpd’ with Chrony for the NTP protocol. MAAS will handle the upgrade process and automatically resume NTP service operation.

Vanilla CSS Framework Transition

MAAS 2.4 is undergoing a Vanilla CSS framework transition to a new version of vanilla, which will bring a fresher look to the MAAS UI. This framework transition is currently a work in progress and not all of the UI has been fully updated. Please expect to see some inconsistencies in this new release.

New Features & Improvements

NTP services now provided by Chrony.

Starting from MAAS 2.4 alpha 2, chrony is now the default NTP service, replacing ntpd. This work has been done to align with the Ubuntu Server and Security teams in supporting chrony instead of ntpd. MAAS will continue to provide services in exactly the same way, handling the upgrade process transparently, and users will not be affected by the changes. This means that:

  • MAAS will configure chrony peers on all Region Controllers
  • MAAS will configure chrony on all Rack Controllers as clients of the Region Controller peers
  • Machines will use the Rack Controllers as their time source, just as they do today (a quick way to verify this is sketched below)
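
As a quick, non-authoritative way to confirm that time synchronisation is behaving as described after the upgrade, chrony ships a client utility that can be run on any controller or machine:

$ chronyc sources     # list the configured time sources (peers/servers) and their reachability
$ chronyc tracking    # show the currently selected source, its offset, and drift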

MAAS Internals optimization

MAAS 2.4 is currently undergoing major surgery to improve various areas of operation that are not visible to the user. These updates will improve the overall performance of MAAS in larger environments. These improvements include:

  • AsyncIO based event loop
    • MAAS has an event loop which performs various internal actions. In older versions of MAAS, this was managed by the default Twisted reactor. MAAS now uses an asyncio-based event loop, driven by uvloop, targeted at improving internal performance.

  • Improved daemon management
    • MAAS has changed the way daemons are run to allow users to see both ‘regiond’ and ‘rackd’ as processes in the process list.
    • As part of these changes, regiond workers are now managed by a master regiond process. In older versions of MAAS each worker was directly run by systemd. The master process is now in charge of ensuring workers are running at all times, and re-spawning new workers in case of failures. This also allows users to see the worker hierarchy in the process list.
  • Ability to increase the number of regiond workers
    • Following the improved way MAAS daemons are run, further internal changes have been made to allow the number of regiond workers to be increased automatically. This allows MAAS to scale to handle more internal operations in larger environments.
    • While this capability already exists, it is not yet enabled by default; it will become available in the following milestone release.
  • Database query optimizations
    • In the process of inspecting the internal operations of MAAS, it was discovered that multiple unnecessary database queries are performed for various operations. Optimising these requires internal improvements to reduce the footprint of these operations. Some areas that have been addressed in this release include:
      • When saving node objects (e.g. making any update of a machine, device, rack controller, etc), MAAS validated changes across various fields. This required an increased number of queries for fields, even when they were not being updated. MAAS now tracks specific fields that change and only performs queries for those fields.
        • Example: to update a power state, MAAS previously performed 11 queries; after these improvements, only 1 query is performed.
      • On every transaction, MAAS performed 2 queries to update the timestamp. This has now been consolidated into a single query per transaction.
    • These changes greatly improve MAAS performance and database utilisation in larger environments. More improvements will continue to be made as we examine further areas of MAAS.
  • UI optimisations
    • MAAS is now being optimised to reduce the amount of data loaded in the websocket API to render the UI. This is targeted at only processing data for viewable information, improving various legacy areas. Currently, the work done in this area includes:
      • Script results are only loaded for viewable nodes in the machine listing page, reducing the overall amount of data loaded.
      • The node object is updated in the websocket only when something has changed in the database, reducing the data transferred to the clients as well as the amount of internal queries.

Audit logging

Continuing with the audit logging improvements, alpha2 now adds audit logging for all user actions that affect Hardware Testing & Commissioning.

KVM pod improvements

MAAS’ KVM pods were initially developed as a feature to help developers quickly iterate and test new functionality while developing MAAS. This, however, became a feature that allows not only developers but also administrators to make better use of resources across their datacenter. Since the feature was initially created for developers, some capabilities were lacking. As such, in 2.4 we are improving the usability of KVM pods:

  • Pod AZs.
    MAAS now allows setting the physical zone for a pod. This helps administrators by conceptually placing their KVM pods in an AZ, which enables them to request/allocate machines on demand based on their AZ. All VMs created from a pod will inherit its AZ.

  • Pod tagging
    MAAS now adds the ability to set tags for a pod. This allows administrators to use tags to allow or prevent the creation of VMs inside a particular pod, as shown in the sketch below. For example, if the administrator requests a machine with a tag named ‘virtual’, MAAS will filter out all physical machines and only consider existing VMs or a KVM pod for the allocation.
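
A rough CLI sketch of both settings; the pod ID, zone, and tag names are placeholders, and the ‘zone’/‘tags’ parameter names on the pod update call are assumptions based on the feature description rather than confirmed API fields:

# Place a pod in an availability zone and tag it (parameter names assumed, see note above)
$ maas admin pod update <pod-id> zone=zone-a tags=virtual
# Request a machine constrained by zone and tag, which the allocator can satisfy from the pod
$ maas admin machines allocate zone=zone-a tags=virtual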

Bug fixes

Please refer to the following for all bug fixes in this release.

https://launchpad.net/maas/+milestone/2.4.0alpha2

Read more
Dustin Kirkland

One of the many excellent suggestions from last year's HackerNews thread, Ask HN: What do you want to see in Ubuntu 17.10?, was to refresh the Ubuntu Server command line installer.

We're pleased to introduce this new installer, which will be the default Server installer for 18.04 LTS, and solicit your feedback.

Follow the instructions below to download the current daily image and install it into a KVM virtual machine. Alternatively, you could write it to a flash drive and install a physical machine, or try it in the virtualization platform of your choice (VMware, VirtualBox, etc.).

# Download the current daily live-server image
$ wget http://cdimage.ubuntu.com/ubuntu-server/daily-live/current/bionic-live-server-amd64.iso
# Create a 10GB raw disk image to install onto
$ qemu-img create -f raw target.img 10G
# Boot the installer ISO with the target disk attached and walk through the install
$ kvm -m 1024 -boot d -cdrom bionic-live-server-amd64.iso -hda target.img
...
# Boot the freshly installed system from the target disk
$ kvm -m 1024 target.img

For those too busy to try it themselves at the moment, I've taken a series of screenshots below, for your review.

Finally, you can provide feedback, bugs, patches, and feature requests against the Subiquity project in Launchpad:

https://launchpad.net/subiquity

Cheers,
Dustin

Read more
Dustin Kirkland

February 2008, Canonical's office in Lexington, MA
10 years ago today, I joined Canonical, on the very earliest version of the Ubuntu Server Team!

And in the decade since, I've had the tremendous privilege to work with so many amazing people, and the opportunity to contribute so much open source software to the Ubuntu ecosystem.

Marking the occasion, I've reflected about much of my work over that time period and thought I'd put down in writing a few of the things I'm most proud of (in chronological order)...  Maybe one day, my daughters will read this and think their daddy was a real geek :-)

1. update-motd / motd.ubuntu.com (September 2008)

Throughout the history of UNIX, the "message of the day" was always manually edited and updated by the local system administrator.  Until Ubuntu's message-of-the-day.  In fact, I received an email from Dennis Ritchie and Jon "maddog" Hall, confirming this, in April 2010.  This started as a feature request for the Landscape team, but has turned out to be tremendously useful and informative to all Ubuntu users.  Just last year, we launched motd.ubuntu.com, which provides even more dynamic information about important security vulnerabilities and general news from the Ubuntu ecosystem.  Mathias Gug helped me with the design and publication.

2. manpages.ubuntu.com (September 2008)

This was the first public open source project I worked on, in my spare time at Canonical.  I had a local copy of the Ubuntu archive and I was thinking about what sorts of automated jobs I could run on it.  So I wrote some scripts that extracted the manpages out of each package, formatted them as HTML, and published them into a structured set of web directories.  10 years later, it's still up and running, serving thousands of hits per day.  In fact, this was one of the ways we were able to shrink the Ubuntu minimal image, by removing the manpages, since they're readable online.  Colin Watson and Kees Cook helped me with the initial implementation, and Matthew Nuzum helped with the CSS and Ubuntu theme in the HTML.

3. Byobu (December 2008)

If you know me at all, you know my passion for the command line UI/UX that is "Byobu".  Byobu was born as the "screen-profiles" project, over lunch at Google in Mountain View, in December of 2008, at the Ubuntu Developer Summit.  Around the lunch table, several of us (including Nick Barcet, Dave Walker, Michael Halcrow, and others), shared our tips and tricks from our own ~/.screenrc configuration files.  In Cape Town, February 2010, at the suggestion of Gustavo Niemeyer, I ported Byobu from Screen to Tmux.  Since Ubuntu Servers don't generally have GUIs, Byobu is designed to be a really nice interface to the Ubuntu command line environment.

4. eCryptfs / Ubuntu Encrypted Home Directories (October 2009)

I was familiar with eCryptfs from its inception in 2005, in the IBM Linux Technology Center's Security Team, sitting next to Michael Halcrow who was the original author.  When I moved to Canonical, I helped Michael maintain the userspace portion of eCryptfs (ecryptfs-utils) and I shepherded it into Ubuntu.  eCryptfs was super powerful, with hundreds of options and supported configurations, but all of that proved far too difficult for users at large.  So I set out to simplify it drastically, with an opinionated set of basic defaults.  I started with a simple command to mount a "Private" directory inside of your home directory, where you could stash your secrets.  A few months later, on a long flight to Paris, I managed to hack a new PAM module, pam_ecryptfs.c, that actually encrypted your entire home directory!  This was pretty revolutionary at the time -- predating Apple's FileVault or Microsoft's Bitlocker, even.  Today, tens of millions of Ubuntu users have used eCryptfs to secure their personal data.  I worked closely with Tyler Hicks, Kees Cook, Jamie Strandboge, Michael Halcrow, Colin Watson, and Martin Pitt on this project over the years.

5. ssh-import-id (March 2010)

With the explosion of virtual machines and cloud instances in 2009 / 2010, I found myself constantly copying public SSH keys around.  Moreover, given Canonical's globally distributed nature, I also regularly found myself asking someone for their public SSH keys, so that I could give them access to an instance, perhaps for some pair programming or assistance debugging.  As it turns out, everyone I worked with had a Launchpad.net account, and had their public SSH keys available there.  So I created (at first) a simple shell script to securely fetch and install those keys.  Scott Moser helped clean up that earliest implementation.  Eventually, I met Casey Marshall, who helped rewrite it entirely in Python.  Moreover, we contacted the maintainers of GitHub, and asked them to expose users' public SSH keys via the API -- which they did!  Now, ssh-import-id is integrated directly into Ubuntu's new subiquity installer and used by many other tools, such as cloud-init and MAAS.
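
For anyone who hasn't tried it, usage is a one-liner; the usernames below are placeholders:

# Fetch a user's public SSH keys from Launchpad (lp:) or GitHub (gh:) and append them to ~/.ssh/authorized_keys
$ ssh-import-id lp:<launchpad-username>
$ ssh-import-id gh:<github-username>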

6. Orchestra / MAAS (August 2011)

In 2009, Canonical purchased 5 Dell laptops, which was the Ubuntu Server team's first "cloud".  These laptops were our very first lab for deploying and testing Eucalyptus clouds.  I was responsible for those machines at my house for a while, and I automated their installation with PXE, TFTP, DHCP, DNS, and a ton of nasty debian-installer preseed data.  That said -- it worked!  As it turned out, Scott Moser and Mathias Gug had both created similar setups at their houses for the same reason.  I was mentoring a new hire at Canonical, named Andres Rodriguez at the time, and he took over our part-time hacks and we worked together to create the Orchestra project.  Orchestra itself was short-lived.  It was severely limited by Cobbler as a foundation technology.  So the Orchestra project was killed by Canonical.  But, six months later, a new project was created, based on the same general concept -- physical machine provisioning at scale -- with an entire squad of engineers led by...Andres Rodriguez :-)  MAAS today is easily one of the most important projects in the Ubuntu ecosystem and one of the most successful products in Canonical's portfolio.

7. pollinate / pollen / entropy.ubuntu.com (February 2014)

In 2013, I set out to secure Ubuntu at large from a class of attacks stemming from insufficient entropy at first boot.  This was especially problematic in virtual machine instances, in public clouds, where every instance is, by design, exactly identical to many others.  Moreover, the first thing that instance does, is usually ... generate SSH keys.  This isn't hypothetical -- it's quite real.  Raspberry Pis running Debian were deemed susceptible to this exact problem in November 2015.  So I designed and implemented a client (shell script that runs at boot, and fetches some entropy from one to many sources), as well as a high-performance server (golang).  The client is the 'pollinate' script, which runs on the first boot of every Ubuntu server, and the server is the cluster of physical machines processing hundreds of requests per minute at entropy.ubuntu.com.  Many people helped review the design and implementation, including Kees Cook, Jamie Strandboge, Seth Arnold, Tyler Hicks, James Troup, Scott Moser, Steve Langasek, Gustavo Niemeyer, and others.

8. The Orange Box (May 2014)

In December of 2011, in my regular 1:1 with my manager, Mark Shuttleworth, I told him about these new "Intel NUCs", which I had bought and placed around my house.  I had 3, each of which was running Ubuntu, and attached to a TV around the house, as a media player (music, videos, pictures, etc).  In their spare time, though, they were OpenStack Nova nodes, capable of running a couple of virtual machines.  Mark immediately asked, "How many of those could you fit into a suitcase?"  Within 24 hours, Mark had reached out to the good folks at TranquilPC and introduced me to my new mission -- designing the Orange Box.  I worked with the Tranquil folks through Christmas, and we took our first delivery of 5 of these boxes in January of 2014.  Each chassis held 10 little Intel NUC servers, and a switch, as well as a few peripherals.  Effectively, it's a small data center that travels.  We spent the next 4 months working on the hardware under wraps and then unveiled them at the OpenStack Summit in Atlanta in May 2014.  We've gone through a couple of iterations on the hardware and software over the last 4 years, and these machines continue to deliver tremendous value, from live demos on the booth, to customer workshops on premises, or simply accelerating our own developer productivity by "shipping them a lab in a suitcase".  I worked extensively with Dan Poler on this project, over the course of a couple of years.

9. Hollywood (December 2014)

Perhaps the highlight of my professional career came in October of 2016.  Watching Saturday Night Live with my wife Kim, we were laughing at a skit that poked fun at another of my favorite shows, Mr. Robot.  On the computer screen behind the main character, I clearly spotted Hollywood!  Hollywood is just a silly, fun little project I created on a plane one day, mostly to amuse Kim.  But now, it's been used in Saturday Night Live, NBC Dateline News, and an Experian TV commercial!  Even Jess Frazelle created a Docker container for it.

10. petname / golang-petname / python-petname (January 2015)

From "warty warthog" to "bionic beaver", we've always had a focus on fun, and user experience here in Ubuntu.  How hard is it to talk to your colleague about your Amazon EC2 instance, "i-83ab39f93e"?  Or your container "adfxkenw"?  We set out to make something a little more user-friendly with our "petnames".  Petnames are randomly generated "adjective-animal" names, which are easy to pronounce, spell, and remember.  I curated and created libraries that are easily usable in Shell, Golang, and Python.  With the help of colleagues like Stephane Graber and Andres Rodriguez, we now use these in many places in the Ubuntu ecosystem, such as LXD and MAAS.

If you've read this post, thank you for indulging me in a nostalgic little trip down memory lane!  I've had an amazing time designing, implementing, creating, and innovating with some of the most amazing people in the entire technology industry.  And here's to a productive, fun future!

Cheers,
:-Dustin

Read more