Canonical Voices

Posts tagged with 'kubernetes'

K. Tsakalozos

MicroK8s is a compact Kubernetes distribution that will install a complete, single node cluster of Kubernetes on your PC. This is great for local development, CI/CD and for the edge where resources may be limited.

Installing MicroK8s is as simple as:

sudo snap install microk8s --classic 

On a Windows machine

MicroK8s and Kubernetes require a Linux kernel to operate. Currently, the way to achieve this is by running MicroK8s in a virtual machine (VM).

The Canonical way to get a VM on Windows is with multipass. Multipass gives you an easy-to-use interface to manage VMs on Windows, macOS and Linux. You have the option of creating VMs with Hyper-V on Windows Pro and Enterprise, or you can take advantage of a local installation of VirtualBox.

Here is what you have to do:

multipass launch --name microk8s-vm --mem 4G --disk 40G

Above, we name the VM microk8s-vm and we give it 4GB of RAM and 40GB of disk.

You can read more on the multipass installation and VM creation process here.

Getting MicroK8s in the VM

Installing MicroK8s is as simple as:

multipass exec microk8s-vm -- sudo snap install microk8s --classic
multipass exec microk8s-vm -- sudo iptables -P FORWARD ACCEPT
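
Once the install completes you may want to confirm the cluster is up before moving on. A quick sanity check from the host, using the status and kubectl commands shipped with the MicroK8s snap:

multipass exec microk8s-vm -- sudo /snap/bin/microk8s.status --wait-ready
multipass exec microk8s-vm -- sudo /snap/bin/microk8s.kubectl get nodes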

Where to go from here?

In our previous blog we showed how to manage the VM and the hosted MicroK8s, so that is a good place to start getting familiar with this setup.

You may find interesting information on MicroK8s in the official docs.

Finally, have a look at the production grade offerings from Canonical Ltd. I am pleased to see how committed Canonical is. The production grade K8s distribution (CDK) runs on-premises and/or in public clouds. All offerings, including MicroK8s, are covered by a simplified, coherent support contract.

Links


MicroK8s on Windows, the Canonical way was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
Carmine Rimi

Kubecon Europe 2019

Kubeflow, the Kubernetes native application for AI and Machine Learning, continues to accelerate feature additions and community growth. The community has released two new versions since the last Kubecon – 0.4 in January and 0.5 in April – and is currently working on the 0.6 release, to be released in July. The key features in each release are briefly discussed below.

Kubeflow at Kubecon

For those attending KubeCon + CloudNativeCon Europe 2019 in Barcelona, you can learn more about Kubeflow and how to apply it to your business in the following sessions:

TUESDAY, May 21

14:00 – Kubernetes the New Research Platform
– Lindsey Tulloch, Brock University & Bob Killen, University of Michigan
14:00 – Tutorial: Introduction to Kubeflow Pipelines
– Michelle Casbon, Dan Sanche, Dan Anghel, & Michal Zylinski, Google
15:55 – KubeFlow BoF (Birds of a Feather)
– David Aronchick, Microsoft & Yaron Haviv, Iguazio

WEDNESDAY, May 22

11:55 – Towards Kubeflow 1.0, Bringing a Cloud Native Platform For ML to Kubernetes
– David Aronchick, Microsoft & Jeremy Lewi, Google
14:00 – Building Cross-Cloud ML Pipelines with Kubeflow with Spark & Tensorflow
– Holden Karau, Google & Trevor Grant, IBM
14:50 – Managing Machine Learning in Production with Kubeflow and DevOps
– David Aronchick, Microsoft

THURSDAY, May 23

11:55 – A Tale of Two Worlds: Canary-Testing for Both ML Models and Microservices
– Jörg Schad, ArangoDB & Vincent Lesierse, Vamp.io
14:00 – Moving People and Products with Machine Learning on Kubeflow
– Jeremy Lewi, Google & Willem Pienaar, GO-JEK
14:50 – Economics and Best Practices of Running AI/ML Workloads on Kubernetes
– Maulin Patel, Google & Yaron Haviv, Iguazio

Come by the Canonical booth to learn how to get started with Kubeflow quickly and easily – on Ubuntu with Microk8s, and on Windows or macOS with Multipass and Microk8s.


What’s in Kubeflow 0.5?

This is a summary of some of the key features:

  • UI Improvements – new Central Dashboard and a new sidebar navigation
  • JupyterHub Improvements – launch multiple notebooks, attach volume
  • Fairing Python Library – build, train, and deploy models from notebooks or IDE
  • Katib (hyperparameter) Improvements – more generic, updated CRD, better status
  • KFCTL binary (configure and platform deploy). (https://deploy.kubeflow.cloud/)
  • Pipelines Persistence (upgrade or reinstall)
  • 150+ closed issues and 250+ merged PRs

You can learn more about the 0.5 release from the Kubeflow blog on 0.5.

What’s in Kubeflow 0.4?

  • An updated JupyterHub UI that makes it easy to spawn notebooks with Persistent Volume Claims (PVCs).
  • An alpha release of fairing, a python library that simplifies the build and train process for data scientists – they can start training jobs directly from a notebook or IDE.
  • An initial release of a Custom Resource Definition (CRD) for managing Jupyter notebooks. You can use kubectl to create notebook containers.
  • Kubeflow Pipelines for orchestrating ML workflows, which speeds the process of productizing models by reusing pipelines with different datasets or updated data.
  • Katib support for TFJob, which makes it easier to tune models and compare performance with different hyper-parameters.
  • Beta versions of the TFJob and PyTorch operators, which enable data scientists to program their training jobs against a more stable API and to more easily switch between training frameworks.

You can learn more about the 0.4 release from the Kubeflow blog on 0.4.

Learn more about Kubeflow

There is a wealth of information at kubeflow.org, including docs, blogs, and examples. In addition, you can go to ubuntu.com/ai/install to get started quickly.

The post Kubeflow at KubeCon Europe 2019 in Barcelona appeared first on Ubuntu Blog.

Read more
K. Tsakalozos

If you take a look at MicroK8s’ channel information with snap info microk8s you will see all available Kubernetes releases:

channels:
stable: v1.14.1 2019-04-18 (522) 214MB classic
candidate: v1.14.1 2019-04-15 (522) 214MB classic
beta: v1.14.1 2019-04-15 (522) 214MB classic
edge: v1.14.1 2019-05-10 (587) 217MB classic
1.15/stable: –
1.15/candidate: –
1.15/beta: –
1.15/edge: v1.15.0-alpha.3 2019-05-08 (578) 215MB classic
1.14/stable: v1.14.1 2019-04-18 (521) 214MB classic
1.14/candidate: v1.14.1 2019-04-15 (521) 214MB classic
1.14/beta: v1.14.1 2019-04-15 (521) 214MB classic
1.14/edge: v1.14.1 2019-05-11 (590) 217MB classic
1.13/stable: v1.13.5 2019-04-22 (526) 237MB classic
1.13/candidate: v1.13.6 2019-05-09 (581) 237MB classic
1.13/beta: v1.13.6 2019-05-09 (581) 237MB classic
1.13/edge: v1.13.6 2019-05-08 (581) 237MB classic
1.12/stable: v1.12.8 2019-05-02 (547) 259MB classic
1.12/candidate: v1.12.8 2019-05-01 (547) 259MB classic
1.12/beta: v1.12.8 2019-05-01 (547) 259MB classic
1.12/edge: v1.12.8 2019-04-24 (547) 259MB classic
1.11/stable: v1.11.10 2019-05-10 (557) 258MB classic
1.11/candidate: v1.11.10 2019-05-02 (557) 258MB classic
1.11/beta: v1.11.10 2019-05-02 (557) 258MB classic
1.11/edge: v1.11.10 2019-05-01 (557) 258MB classic
1.10/stable: v1.10.13 2019-04-22 (546) 222MB classic
1.10/candidate: v1.10.13 2019-04-22 (546) 222MB classic
1.10/beta: v1.10.13 2019-04-22 (546) 222MB classic
1.10/edge: v1.10.13 2019-04-22 (546) 222MB classic

If you want to follow the v1.14 Kubernetes releases you would:

sudo snap install microk8s --classic --channel=1.14/stable

Whereas if you always want to be on the latest stable release you would:

sudo snap install microk8s --classic

What is new in the channels list above is the pre-stable releases found under the 1.15 track (at the time of this writing the latest stable release is v1.14).

Following the pre-stable releases

We are committed to shipping MicroK8s with pre-stable releases under the following scheme.

  • The edge channel (eg 1.15/edge) holds the alpha upstream releases.
  • The beta channel (eg 1.15/beta) holds the beta upstream releases.
  • The candidate channel (eg 1.15/candidate) holds the release candidate of upstream releases.

Pre-stable releases will be available the same day they are released upstream.

If you want to test your work against the alpha 1.15 release simply do:

sudo snap install microk8s --classic --channel=1.15/edge

However, be aware that pre-stable releases may change before the stable release. Be sure to test any work against the stable release once it becomes available.
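
When the stable release does land, an existing installation can be moved over without reinstalling. A minimal sketch using snap refresh:

sudo snap refresh microk8s --channel=1.15/stable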

Tracks with stable releases

Tracks are meant to serve specific Kubernetes releases. For example the 1.15 track with its four channels, 1.15/edge, 1.15/beta, 1.15/candidate, 1.15/stable, serves the v1.15 K8s release. As soon as a new K8s stable release is made, all channels of the corresponding track are updated. In our example, as soon as v1.15 stable is released the corresponding track channels are updated in the following way:

  • The 1.15/edge channel is updated on every commit merged on the MicroK8s repository paired with the v1.15 stable K8s release.
  • The 1.15/beta and 1.15/candidate channels are updated on every upstream patch release. They hold whatever the 1.15/edge channel has at the time of the patch release.
  • The 1.15/stable channel gets updated with what 1.15/candidate holds a week after a new revision is put into 1.15/candidate.

I am confused. Which channel is right for me?

The single question you need to answer is what to put in the channel argument below:

sudo snap install microk8s --classic --channel=<What_to_use_here?>

Here are some suggestions for the channel to use based on your needs:

  • I want to always be on the latest stable Kubernetes.
    Use --channel=latest
  • I want to always be on the latest release in a specific upstream K8s release.
    Use --channel=<release>/stable eg --channel=1.14/stable.
  • I want to test-drive a pre-stable release.
    Use --channel=<next_release>/edge for alpha releases
    Use --channel=<next_release>/beta for beta releases
    Use --channel=<next_release>/candidate for candidate releases
  • I am waiting for a bug fix on MicroK8s:
    Use --channel=<release>/edge
  • I am waiting for a bug fix on upstream Kubernetes:
    Use --channel=<release>/candidate

Developing K8s core services with MicroK8s

One of the purposes of pre-stable releases is to assist K8s core service developers in their task. Let’s see how we can hook a local build of kubelet to a MicroK8s deployment.

Following the build instructions for Kubernetes we:

git clone https://github.com/kubernetes/kubernetes
cd kubernetes
build/run.sh make kubelet

The kubelet binary should be available under:

_output/dockerized/bin/linux/amd64/kubelet

Let’s grab a MicroK8s deployment:

sudo snap install microk8s --classic --channel=1.15/edge

To see what arguments the kubelet is running with we:

> ps -ef | grep kubelet
root 24184 1 2 17:28 ? 00:00:54 /snap/microk8s/578/kubelet
--kubeconfig=/snap/microk8s/578/configs/kubelet.config
--cert-dir=/var/snap/microk8s/578/certs
--client-ca-file=/var/snap/microk8s/578/certs/ca.crt
--anonymous-auth=false
--network-plugin=kubenet
--root-dir=/var/snap/microk8s/common/var/lib/kubelet
--fail-swap-on=false
--pod-cidr=10.1.1.0/24
--non-masquerade-cidr=10.152.183.0/24
--cni-bin-dir=/snap/microk8s/578/opt/cni/bin/
--feature-gates=DevicePlugins=true
--eviction-hard=memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi
--container-runtime=remote
--container-runtime-endpoint=/var/snap/microk8s/common/run/containerd.sock
--node-labels=microk8s.io/cluster=true

We now need to stop the kubelet that comes with MicroK8s and start our own build:

sudo systemctl stop snap.microk8s.daemon-kubelet.service
sudo _output/dockerized/bin/linux/amd64/kubelet \
--kubeconfig=/snap/microk8s/578/configs/kubelet.config \
--cert-dir=/var/snap/microk8s/578/certs \
--client-ca-file=/var/snap/microk8s/578/certs/ca.crt \
--anonymous-auth=false --network-plugin=kubenet \
--root-dir=/var/snap/microk8s/common/var/lib/kubelet \
--fail-swap-on=false --pod-cidr=10.1.1.0/24 \
--container-runtime=remote \
--container-runtime-endpoint=/var/snap/microk8s/common/run/containerd.sock \
--node-labels=microk8s.io/cluster=true --eviction-hard='memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi'

That’s it! Your kubelet now runs in place of the one in MicroK8s! You have to admit it is as simple as it gets.

Be aware that some microk8s commands will restart services through systemd. For example, microk8s.enable dns will trigger a restart of services, including the kubelet shipped with MicroK8s.
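
If that happens, simply stop the stock kubelet again before relaunching your own build; the systemd unit name is the one shown in the ps output above:

sudo systemctl stop snap.microk8s.daemon-kubelet.service
sudo systemctl status snap.microk8s.daemon-kubelet.service --no-pager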

Happy coding!

Further reading


Kubernetes pre-stable releases now available with MicroK8s was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

Kubernetes manages containerised applications. The container images are found either locally, or fetched from a remote registry. We recently released MicroK8s and noticed that some of our users were not comfortable with configuring containerd with image registries. In this blog we go through a few workflows most people are following.

What you should know already

We discuss how to consume local images, or images fetched from public and private registries in Kubernetes configured with containerd.

To get one such cluster simply:

sudo snap install microk8s --classic

Familiarity with building, pushing and tagging container images will be helpful. The examples that follow use Docker but you can use your preferred container tool chain.

To install Docker on Ubuntu 18.04:

sudo apt-get install docker.io

Add the user to the docker group:

sudo usermod -aG docker ${USER}

Open a new shell for the user, with updated group membership:

su - ${USER}
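
Before building anything, it is worth confirming that the Docker daemon is reachable with the new group membership; for example (hello-world is pulled from Docker Hub):

docker version
docker run --rm hello-world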

The Dockerfile we will be using is:

FROM nginx:alpine

To build the image tagged with mynginx:local, go to the directory where the Dockerfile is and run:

docker build . -t mynginx:local

Working with locally built images without a registry

When an image is built it is cached on the Docker daemon used during the build. Having run the docker build . -t mynginx:local command, you can see the newly built image by running:

docker images

This will list the images currently known to Docker, for example:

REPOSITORY          TAG                 IMAGE ID             SIZE
mynginx local 0be75340bd9b 16.1MB

The image we created is known to Docker. However, Kubernetes is not aware of the newly built image, as your local Docker daemon is not part of the MicroK8s Kubernetes cluster. We can export the built image from the local Docker daemon and “inject” it into the MicroK8s image cache like this:

docker save mynginx > myimage.tar
microk8s.ctr -n k8s.io image import myimage.tar

Note that when we import the image to MicroK8s we do so under the k8s.io namespace (the -n k8s.io argument).

Now we can list the images present in MicroK8s:

microk8s.ctr -n k8s.io images ls

At this point we are ready to microk8s.kubectl apply -f a deployment with this image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: mynginx:local
        ports:
        - containerPort: 80

We reference the image with image: mynginx:local. Kubernetes will behave as if there is an image in docker.io (the Docker Hub registry) for which it already has a cached copy. Note that Kubernetes always tries to pull images tagged latest from a registry by default, so make sure you do not use that tag for locally imported images.
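
Assuming the manifest above is saved as deployment.yaml (the file name is arbitrary), applying it and checking the pods looks like this:

microk8s.kubectl apply -f deployment.yaml
microk8s.kubectl get pods -l app=nginx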

Working with public registries

After building an image with docker build . -t mynginx:local, it can be pushed to one of the mainstream public registries. You will need to create an account and register a username with the registry provider. For this example, we created an account with https://hub.docker.com/ and we log in as kjackal.

First we run the login command:

docker login

Docker will ask for a Docker ID and password to complete the login.

Login with your Docker ID to push and pull images from Docker Hub. If you don’t have a Docker ID, head over to https://hub.docker.com to create one.
Username: kjackal
Password: *******

Pushing to the registry requires that the image is tagged with your-hub-username/image-name:tag. We can either add proper tagging during build:

docker build . -t kjackal/mynginx:public

Or tag an already existing image using the image ID. Obtain the ID by running:

docker images

The ID is listed in the output:

REPOSITORY          TAG                 IMAGE ID            SIZE
mynginx local 0be75340bd9b 16.1MB

Then use the tag command:

docker tag 0be75340bd9b kjackal/mynginx:public

Now that the image is tagged correctly, it can be pushed to the registry:

docker push kjackal/mynginx

At this point we are ready to microk8s.kubectl apply -f a deployment with our image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: kjackal/mynginx:public
        ports:
        - containerPort: 80

We refer to the image as image: kjackal/mynginx:public. Kubernetes will search for the image in its default registry, docker.io.
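
As before, save the manifest to a file (any name will do) and apply it; a quick way to confirm the image was pulled from Docker Hub is to watch the rollout:

microk8s.kubectl apply -f deployment.yaml
microk8s.kubectl rollout status deployment/nginx-deployment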

Working with MicroK8s’ registry add-on

Having a private Docker registry can significantly improve your productivity by reducing the time spent in uploading and downloading images. The registry shipped with MicroK8s is hosted within the Kubernetes cluster and is exposed as a NodePort service on port 32000 of the localhost. Note that this is an insecure registry and you may need to take extra steps to limit access to it.

You can install the registry with:

microk8s.enable registry

The registry add-on is backed by a 20Gi persistent volume claim used for storing images. To satisfy this claim, the storage add-on is also enabled along with the registry.
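
You can verify that both add-ons came up by looking for the registry service and the claimed volume (the namespace the registry lives in may differ between MicroK8s versions):

microk8s.kubectl get svc --all-namespaces | grep -i registry
microk8s.kubectl get pvc --all-namespaces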

The containerd daemon used by MicroK8s is configured to trust this insecure registry. To upload images we have to tag them as localhost:32000/your-image before pushing them:

We can either add proper tagging during build:

docker build . -t localhost:32000/mynginx:registry

Or tag an already existing image using the image ID. Obtain the ID by running:

docker images

The ID is listed in the output:

REPOSITORY                TAG             IMAGE ID       SIZE
localhost:32000/mynginx registry 0be75340bd9b 16.1MB

Then use the tag command:

docker tag 0be75340bd9b localhost:32000/mynginx:registry

Now that the image is tagged correctly, it can be pushed to the registry:

docker push localhost:32000/mynginx

Pushing to this insecure registry may fail in some versions of Docker unless the daemon is explicitly configured to trust it. To address this we need to edit /etc/docker/daemon.json and add:

{
"insecure-registries" : ["localhost:32000"]
}

The new configuration should be loaded with a Docker daemon restart:

sudo systemctl restart docker

At this point we are ready to microk8s.kubectl apply -f a deployment with our image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: localhost:32000/mynginx:registry
        ports:
        - containerPort: 80

What if MicroK8s runs inside a VM?
Often MicroK8s is placed in a VM while the development process takes place on the host machine. In this setup pushing container images to the in-VM registry requires some extra configuration.

Let’s assume the IP of the VM running MicroK8s is 10.141.241.175. When we are on the host the Docker registry is not on localhost:32000 but on 10.141.241.175:32000. As a result the first thing we need to do is to tag the image we are building on the host with the right registry endpoint:

docker build . -t 10.141.241.175:32000/mynginx:registry

If we immediately try to push the mynginx image we will fail because the local Docker does not trust the in-VM registry. Here is what happens if we try a push:

> docker push 10.141.241.175:32000/mynginx
The push refers to repository [10.141.241.175:32000/mynginx]
Get https://10.141.241.175:32000/v2/: http: server gave HTTP response to HTTPS client

We need to be explicit and configure the Docker daemon running on the host to trust the in-VM insecure registry. Add the registry endpoint in /etc/docker/daemon.json:

{
"insecure-registries" : ["10.141.241.175:32000"]
}

Then restart the docker daemon on the host to load the new configuration:

sudo systemctl restart docker

We can now docker push 10.141.241.175:32000/mynginx and see the image getting uploaded. During the push, our Docker client instructs the Docker daemon on the host to upload the newly built image to the 10.141.241.175:32000 endpoint, as marked by the tag on the image. The Docker daemon sees (in /etc/docker/daemon.json) that it trusts this registry and proceeds with uploading the image.

Consuming the image from inside the VM involves no changes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: localhost:32000/mynginx:registry
        ports:
        - containerPort: 80

We reference the image with localhost:32000/mynginx:registry since the registry runs inside the VM so it is on localhost:32000.
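
If you want to double-check from inside the VM that the push actually landed, the registry's standard HTTP API can be queried directly (the output shown is indicative):

curl http://localhost:32000/v2/_catalog
# e.g. {"repositories":["mynginx"]}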

Working with a private registry

Often organisations have their own private registry to assist collaboration and accelerate development. Kubernetes (and thus MicroK8s) need to be aware of the registry endpoints before being able to pull container images.

Insecure registry
Let’s assume the private insecure registry is at 10.141.241.175 on port 32000. The images we build need to be tagged with the registry endpoint:

docker build . -t 10.141.241.175:32000/mynginx:registry

Pushing the mynginx image at this point will fail because the local Docker does not trust the private insecure registry. The docker daemon used for building images should be configured to trust the private insecure registry. This is done by marking the registry endpoint in /etc/docker/daemon.json:

{
"insecure-registries" : ["10.141.241.175:32000"]
}

Restart the Docker daemon on the host to load the new configuration:

sudo systemctl restart docker

Now running

docker push 10.141.241.175:32000/mynginx

…should succeed in uploading the image to the registry.

Attempting to pull an image in MicroK8s at this point will result in an error like this:

Warning Failed 1s (x2 over 16s) kubelet, jackal-vgn-fz11m Failed to pull image "10.141.241.175:32000/mynginx:registry": rpc error: code = Unknown desc = failed to resolve image "10.141.241.175:32000/mynginx:registry": no available registry endpoint: failed to do request: Head https://10.141.241.175:32000/v2/mynginx/manifests/registry: http: server gave HTTP response to HTTPS client

We need to edit /var/snap/microk8s/current/args/containerd-template.toml and add the following under [plugins] -> [plugins.cri.registry] -> [plugins.cri.registry.mirrors]:

  [plugins.cri.registry.mirrors."10.141.241.175:32000"]
    endpoint = ["http://10.141.241.175:32000"]

See the full file here.

Restart MicroK8s to have the new configuration loaded:

microk8s.stop
microk8s.start

The image can now be deployed with:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 10.141.241.175:32000/mynginx:registry
        ports:
        - containerPort: 80

Note that the image is referenced with 10.141.241.175:32000/mynginx:registry.

Secure registry
There are a lot of ways to setup a private secure registry that may slightly change the way you interact with it. Instead of diving into the specifics of each setup we provide here two pointers on how you can approach the integration with Kubernetes.

  • The official Kubernetes documentation describes how to create a secret from the Docker login credentials and use it to access the secure registry. To achieve this, imagePullSecrets is used as part of the container spec; see the sketch after this list.
  • MicroK8s v1.14 and onwards uses containerd. As described here, configuring containerd involves editing /var/snap/microk8s/current/args/containerd-template.toml and reloading the new configuration via a microk8s.stop, microk8s.start cycle.
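
A minimal sketch of the first approach, assuming a hypothetical secure registry at registry.example.com; the secret name regcred is arbitrary:

microk8s.kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>
# The pod spec of the deployment then references the secret:
#   spec:
#     imagePullSecrets:
#     - name: regcred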

Further Reading


Working with image registries and containerd in Kubernetes was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

We have been quiet for a few months just because we have been busy. We were working mainly on two features that we intend to ship in the v1.14 release: the transition to containerd and the hardening of MicroK8s security.

These changes affect the backwards compatibility and user experience of MicroK8s, which is why we are timing them with the upcoming upstream Kubernetes release. Here we provide a) a short description of these features, b) a way for you to test drive the new MicroK8s, and c) the steps to hold back on the release in case this is a major show stopper for you.

The transition to Containerd

We replace Dockerd with Containerd mainly for two reasons.

  • The setup of having two dockerds on the same host has proven problematic. MicroK8s brings its own dockerd that may clash with a local dockerd users may want to run. By moving to containerd, users can apt-get install docker.io without affecting MicroK8s. This switch also means that microk8s.docker will no longer be available; you will have to use the docker client shipped with your distribution.
  • Performance. There is a measurable performance benefit from using containerd directly. This should not be a surprise, since dockerd itself uses containerd internally; with the switch to containerd we are essentially removing a layer that is Docker specific.

Hardening MicroK8s security

MicroK8s is a developer’s tool. It is not meant to be deployed in production or in hostile environments. Having said that, we tried to make MicroK8s more secure by:

  • Exposing as few services as we can, with access restrictions on the ones we leave open.
  • A CA and certificates are created once at deployment time.

Test drive the upcoming patches

We have prepared a temporary branch you could use to evaluate the above changes:

snap install microk8s --classic --channel=1.13/edge/secure-containerd

If you have MicroK8s already installed you can switch the channel your MicroK8s is following:

snap refresh --channel=1.13/edge/secure-containerd microk8s

Try it out and let us know if we missed anything.
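
A quick way to confirm that a node has switched over to containerd is to check the runtime reported by Kubernetes (column layout may vary slightly between kubectl versions):

microk8s.kubectl get nodes -o wide
# the CONTAINER-RUNTIME column should now read containerd://... instead of docker://...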

“Thanks, I’ll pass”

All release series up until now will not be affected by this change. This means you can have your MicroK8s deployment follow the 1.13 track:

snap refresh --channel=1.13/stable microk8s

Summing up

An important update is coming. Make sure you give it a try with:

snap install microk8s --classic --channel=1.13/edge/secure-containerd

If you do not like what you see tell us what breaks by filing an issue and keep using the 1.13 track.

References


Containerd on a more secure MicroK8s was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

MicroK8s in the Wild

As the popularity of MicroK8s grows I would like to take the time to mention some projects that use this micro Kubernetes distribution. But before that, let me do some introductions. For those unfamiliar with it, Kubernetes is an open source container orchestrator: it takes care of deploying, upgrading and provisioning your applications. This is one of the rare occasions where all the major players (Google, Microsoft, IBM, Amazon etc) have flocked around a single framework, making it an unofficial standard.

MicroK8s is a distribution of Kubernetes. It is a snap package that sets up a Kubernetes cluster on your machine. You can have a Kubernetes cluster for local development, CI/CD or just for getting to know Kubernetes with just a:

sudo snap install microk8s --classic

If you are on a Mac or Windows you will need a Linux VM.

In what follows you will find some examples on how people are using MicroK8s. Note that this is not a complete list of MicroK8s usages, it is just some efforts I happen to be aware of.

Spring Cloud Kubernetes

This project is using CircleCI for CI/CD. MicroK8s provides a local Kubernetes cluster where integration tests are run. The addons enabled are dns, the docker registry and Istio. The integration tests need to plug into the Kubernetes cluster using the kubeconfig file and the socket to dockerd. This work was introduced in this Pull Request (thanks George) and it gave us the incentive to add a microk8s.status command that would wait for the cluster to come online. For example we can wait up to 5 minutes for MicroK8s to come up with:

microk8s.status --wait-ready --timeout=300

OpenFaaS on MicroK8s

It was at this year’s Config Management Camp that I met Joe McCobe, the author of “Deploy OpenFaaS with MicroK8s”. I will just repeat his words: “was blown away by the speed and ease with which I could get a basic lab environment up and running”.

What about Kubeless?

It seems the ease of deploying MicroK8s goes well with the ease of software development of serverless frameworks. Users of Kubeless are also kicking the tires on MicroK8s. Have a look at “Files upload from Kubeless on MicroK8s to Minio” and “Serverless MicroK8s Kubernetes.”

SUSE Cloud Application Platform (CAP) on Microk8s

In his blog post Dimitris describes in detail all the configuration he had to do to get the software from SUSE to run on MicroK8s. The most interesting part is the motivation behind this effort. As he says “… MicroK8s… use your machine’s resources without you having to decide on a VM size beforehand.” As he explained to me his application puts significant memory pressure only during bootstrap. MicroK8s enabled him to reclaim the unused memory after the initialization phase.

Kubeflow

Kubeflow is the missing link between Kubernetes and AI/ML. Canonical is actively involved in this so…. you should definitely check it out. Sure, I am biased but let me tell you a true story. I have a friend who was given three machines to deploy Tensorflow and run some experiments. She did not have any prior experience at the time so… none of the three node clusters were setup in exactly the same way. There was always something off. This head-scratching situation is just one reason to use Kubeflow.

Transcrobes

Transcrobes comes from an active member of the MicroK8s community. It serves as a language learning aid. “The system knows what you know, so can give you just the right amount of help to be able to understand the words you don’t know but gets out of the way for the stuff you do know.” Here MicroK8s is used for quick prototyping. We wish you all the best Anton, good luck!

Summing Up

We have seen a number of interesting use cases that include CI/CD, Serverless programming, lab setup, rapid prototyping and application development. If you have a MicroK8s use case do let us know. Come and say hi at #microk8s on the Kubernetes slack and/or issue a Pull Request against our MicroK8s In The Wild page.

References


MicroK8s in the Wild was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

MicroK8s is a local deployment of Kubernetes. Let’s skip all the technical details and just accept that Kubernetes does not run natively on MacOS or Windows. You may be thinking “I have seen Kubernetes running on a MacOS laptop, what kind of sorcery was that?” It’s simple, Kubernetes is running inside a VM. You might not see the VM or it might not even be a full blown virtual system but some level of virtualisation is there. This is exactly what we will show here. We will setup a VM and inside there we will install MicroK8s. After the installation we will discuss how to use the in-VM-Kubernetes.

A multipass VM on MacOS

Arguably the easiest way to get an Ubuntu VM on MacOS is with multipass. Head to the releases page and grab the latest package. Installing it is as simple as double-clicking on the .pkg file.

To start a VM with MicroK8s we:

multipass launch --name microk8s-vm --mem 4G --disk 40G
multipass exec microk8s-vm -- sudo snap install microk8s --classic
multipass exec microk8s-vm -- sudo iptables -P FORWARD ACCEPT

Make sure you reserve enough resources to host your deployments; above, we got 4GB of RAM and 40GB of hard disk. We also make sure packets to/from the pod network interface can be forwarded to/from the default interface.

Our VM has an IP that you can check with:

> multipass list
Name State IPv4 Release
microk8s-vm RUNNING 10.72.145.216 Ubuntu 18.04 LTS

Take a note of this IP since our services will become available there.

Other multipass commands you may find handy:

  • Get a shell inside the VM:
multipass shell microk8s-vm
  • Shutdown the VM:
multipass stop microk8s-vm
  • Delete and cleanup the VM:
multipass delete microk8s-vm 
multipass purge

Using MicroK8s

To run a command in the VM we can get a multipass shell with:

multipass shell microk8s-vm

To execute a command without getting a shell we can use multipass exec like so:

multipass exec microk8s-vm -- /snap/bin/microk8s.status

A third way to interact with MicroK8s is via the Kubernetes API server listening on port 8080 of the VM. We can use microk8s’ kubeconfig file with a local installation of kubectl to access the in-VM-kubernetes. Here is how:

multipass exec microk8s-vm -- /snap/bin/microk8s.config > kubeconfig

Install kubectl on the host machine and then use the kubeconfig:

kubectl --kubeconfig=kubeconfig get all --all-namespaces
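
If kubectl is not already present on the host, on macOS it is commonly installed through Homebrew (a sketch, assuming you use brew):

brew install kubernetes-cli
kubectl --kubeconfig=kubeconfig get nodes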

Accessing in-VM services — Enabling addons

Let’s first enable dns and the dashboard. In the rest of this blog we will be showing different methods of accessing Grafana:

multipass exec microk8s-vm -- /snap/bin/microk8s.enable dns dashboard

We check the deployment progress with:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get all --all-namespaces

After all services are running we can proceed into looking how to access the dashboard.

The Grafana of our dashboard

Accessing in-VM services — Use the Kubernetes API proxy

The API server is on port 8080 of our VM. Let’s see how the proxy path looks like:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl cluster-info
...
Grafana is running at http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
...

By replacing 127.0.0.1 with the VM’s IP, 10.72.145.216 in this case, we can reach our service at:

http://10.72.145.216:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy

Accessing in-VM services — Setup a proxy

In a very similar fashion to what we just did above, we can ask Kubernetes to create a proxy for us. We need to request the proxy to be available to all interfaces and to accept connections from everywhere so that the host can reach it.

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl proxy --address='0.0.0.0' --accept-hosts='.*'
Starting to serve on [::]:8001

Leave the terminal with the proxy open. Again, replacing 127.0.0.1 with the VMs IP we reach the dashboard through:

http://10.72.145.216:8001/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy

Make sure you go through the official docs on constructing the proxy paths.

Accessing in-VM services — Use a NodePort service

We can expose our service in a port on the VM and access it from there. This approach is using the NodePort service type. We start by spotting the deployment we want to expose:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get deployment -n kube-system  | grep grafana
monitoring-influxdb-grafana-v4 1 1 1 1 22h

Then we create the NodePort service:

multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl expose deployment.apps/monitoring-influxdb-grafana-v4 -n kube-system --type=NodePort

We have now a port for the Grafana service:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get services -n kube-system  | grep NodePort
monitoring-influxdb-grafana-v4 NodePort 10.152.183.188 <none> 8083:32580/TCP,8086:32152/TCP,3000:32720/TCP 13m

Grafana is on port 3000, mapped here to 32720. This port is randomly selected so it may vary for you. In our case, the service is available on 10.72.145.216:32720.
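
You can verify from the host that the NodePort responds before opening a browser (the IP and port come from the output above; yours will differ):

curl -I http://10.72.145.216:32720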

Conclusions

MicroK8s on MacOS (or Windows) will need a VM to run. This is no different than any other local Kubernetes solution and it comes with some nice benefits. The VM gives you an extra layer of isolation: instead of using your host and potentially exposing the Kubernetes services to the outside world, you have full control over what others can see. Of course, this isolation comes with some extra administrative overhead that may not be needed for a dev environment. Give it a try and tell us what you think!

Links

CanonicalLtd/multipass


MicroK8s on MacOS was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

Looking at the configuration of a Kubernetes node sounds like a simple thing, yet it is not so obvious.

The arguments kubelet takes come either as command line parameters or from a configuration file you pass with --config. It seems straightforward to do a ps -ef | grep kubelet and look in the file you see after the --config parameter. Simple, right? But… are you sure you got all the arguments right? What if Kubernetes defaulted to a value you did not want? What if you do not have shell access to a node?

There is a way to query the Kubernetes API for the configuration a node is running with: api/v1/nodes/<node_name>/proxy/configz. Let’s see this in a real deployment.

Deploy a Kubernetes Cluster

I am using the Canonical Distribution of Kubernetes (CDK) on AWS here but you can use whichever cloud and Kubernetes installation method you like.

juju bootstrap aws
juju deploy canonical-kubernetes

..and wait for the deployment to finish

watch juju status 

Change a Configuration

CDK allows for configuring both the command line arguments and the extra arguments of the config file. Here we add arguments to the config file:

juju config kubernetes-worker kubelet-extra-config='{imageGCHighThresholdPercent: 60, imageGCLowThresholdPercent: 39}'

A great question is how we got the imageGCHighThresholdPercent literal. At the time of this writing the official upstream docs point you to the type definitions; a rather ugly approach. There is an EvictionHard property in the type definitions; however, if you look at the example in the upstream docs you will see the same property written in lowercase (evictionHard).

Check the Configuration

We will need two shells. On the first one we will start the API proxy and on the second we will query the API. On the first shell:

juju ssh kubernetes-master/0
kubectl proxy

Now that we have the proxy at 127.0.0.1:8001 on the kubernetes-master, we use a second shell to get a node name and query the API:

juju ssh kubernetes-master/0
kubectl get no
curl -sSL "http://localhost:8001/api/v1/nodes/<node_name>/proxy/configz" | python3 -m json.tool

Here is a full run:

juju ssh kubernetes-master/0
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1023-aws x86_64)
* Documentation:  https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Mon Oct 22 10:40:40 UTC 2018
System load:  0.11               Processes:              115
Usage of /: 13.7% of 15.45GB Users logged in: 1
Memory usage: 20% IP address for ens5: 172.31.0.48
Swap usage: 0% IP address for fan-252: 252.0.48.1
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.
Last login: Mon Oct 22 10:38:14 2018 from 2.86.54.15
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@ip-172-31-0-48:~$ kubectl get no
NAME STATUS ROLES AGE VERSION
ip-172-31-14-174 Ready <none> 41m v1.12.1
ip-172-31-24-80 Ready <none> 41m v1.12.1
ip-172-31-63-34 Ready <none> 41m v1.12.1
ubuntu@ip-172-31-0-48:~$ curl -sSL "http://localhost:8001/api/v1/nodes/ip-172-31-14-174/proxy/configz" | python3 -m json.tool
{
"kubeletconfig": {
"syncFrequency": "1m0s",
"fileCheckFrequency": "20s",
"httpCheckFrequency": "20s",
"address": "0.0.0.0",
"port": 10250,
"tlsCertFile": "/root/cdk/server.crt",
"tlsPrivateKeyFile": "/root/cdk/server.key",
"authentication": {
"x509": {
"clientCAFile": "/root/cdk/ca.crt"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"registryPullQPS": 5,
"registryBurst": 10,
"eventRecordQPS": 5,
"eventBurst": 10,
"enableDebuggingHandlers": true,
"healthzPort": 10248,
"healthzBindAddress": "127.0.0.1",
"oomScoreAdj": -999,
"clusterDomain": "cluster.local",
"clusterDNS": [
"10.152.183.93"
],
"streamingConnectionIdleTimeout": "4h0m0s",
"nodeStatusUpdateFrequency": "10s",
"nodeLeaseDurationSeconds": 40,
"imageMinimumGCAge": "2m0s",
"imageGCHighThresholdPercent": 60,
"imageGCLowThresholdPercent": 39,
"volumeStatsAggPeriod": "1m0s",
"cgroupsPerQOS": true,
"cgroupDriver": "cgroupfs",
"cpuManagerPolicy": "none",
"cpuManagerReconcilePeriod": "10s",
"runtimeRequestTimeout": "2m0s",
"hairpinMode": "promiscuous-bridge",
"maxPods": 110,
"podPidsLimit": -1,
"resolvConf": "/run/systemd/resolve/resolv.conf",
"cpuCFSQuota": true,
"cpuCFSQuotaPeriod": "100ms",
"maxOpenFiles": 1000000,
"contentType": "application/vnd.kubernetes.protobuf",
"kubeAPIQPS": 5,
"kubeAPIBurst": 10,
"serializeImagePulls": true,
"evictionHard": {
"imagefs.available": "15%",
"memory.available": "100Mi",
"nodefs.available": "10%",
"nodefs.inodesFree": "5%"
},
"evictionPressureTransitionPeriod": "5m0s",
"enableControllerAttachDetach": true,
"makeIPTablesUtilChains": true,
"iptablesMasqueradeBit": 14,
"iptablesDropBit": 15,
"failSwapOn": false,
"containerLogMaxSize": "10Mi",
"containerLogMaxFiles": 5,
"configMapAndSecretChangeDetectionStrategy": "Watch",
"enforceNodeAllocatable": [
"pods"
]
}
}
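
If you are only interested in a couple of fields, you can filter the JSON instead of scrolling through it; for example, with the same python3 already used above (node name taken from the run):

curl -sSL "http://localhost:8001/api/v1/nodes/ip-172-31-14-174/proxy/configz" | \
  python3 -c 'import json,sys; c=json.load(sys.stdin)["kubeletconfig"]; print(c["imageGCHighThresholdPercent"], c["imageGCLowThresholdPercent"])'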

Summing Up

There is a way to get the configuration of an online Kubernetes node through the Kubernetes API (api/v1/nodes/<node_name>/proxy/configz). This might be handy if you want to code against Kubernetes or you do not want to get into the intricacies of your particular cluster setup.

References


How to Inspect the Configuration of a Kubernetes Node was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

Istio almost immediately strikes you as enterprise grade software. Not so much because of the complexity it introduces, but more because of the features it adds to your service mesh. Must-have features packaged together in a coherent framework:

  • Traffic Management
  • Security Policies
  • Telemetry
  • Performance Tuning

Since microk8s positions itself as the local Kubernetes cluster developers prototype on, it is no surprise that deployment of Istio is made dead simple. Let’s start with the microk8s deployment itself:

> sudo snap install microk8s --classic

The Istio deployment is available with:

> microk8s.enable istio

There is a single question that we need to respond to at this point. Do we want to enforce mutual TLS authentication among sidecars? Istio places a proxy next to each of your services so as to take control over routing, security etc. If we know we have a mixed deployment with non-Istio and Istio enabled services we would rather not enforce mutual TLS:

> microk8s.enable istio
Enabling Istio
Enabling DNS
Applying manifest
service/kube-dns created
serviceaccount/kube-dns created
configmap/kube-dns created
deployment.extensions/kube-dns created
Restarting kubelet
DNS is enabled
Enforce mutual TLS authentication (https://bit.ly/2KB4j04) between sidecars? If unsure, choose N. (y/N): y

Believe it or not, we are done. Istio v1.0 services are being set up; you can check the deployment progress with:

> watch microk8s.kubectl get all --all-namespaces

We have packaged istioctl in microk8s for your convenience:

> microk8s.istioctl get all --all-namespaces
NAME KIND NAMESPACE AGE
grafana-ports-mtls-disabled Policy.authentication.istio.io.v1alpha1 istio-system 2m
DESTINATION-RULE NAME   HOST                                             SUBSETS   NAMESPACE      AGE
istio-policy istio-policy.istio-system.svc.cluster.local istio-system 3m
istio-telemetry istio-telemetry.istio-system.svc.cluster.local istio-system 3m
GATEWAY NAME                      HOSTS     NAMESPACE      AGE
istio-autogenerated-k8s-ingress * istio-system 3m

Do not get scared by the amount of services and deployments, everything is under the istio-system namespace. We are ready to start exploring!

Demo Time!

Istio needs to inject sidecars into the pods of your deployment. In microk8s auto-injection is supported, so the only thing you have to do is label the namespace you will be using with istio-injection=enabled:

> microk8s.kubectl label namespace default istio-injection=enabled

Let’s now grab the bookinfo example from the v1.0 Istio release and apply it:

> wget https://raw.githubusercontent.com/istio/istio/release-1.0/samples/bookinfo/platform/kube/bookinfo.yaml
> microk8s.kubectl create -f bookinfo.yaml

The following services should be available soon:

> microk8s.kubectl get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
details       ClusterIP   10.152.183.33    <none>        9080/TCP
kubernetes    ClusterIP   10.152.183.1     <none>        443/TCP
productpage   ClusterIP   10.152.183.59    <none>        9080/TCP
ratings       ClusterIP   10.152.183.124   <none>        9080/TCP
reviews       ClusterIP   10.152.183.9     <none>        9080/TCP

We can reach the services using the ClusterIP they have; we can for example get to the productpage in the above example by pointing our browser to 10.152.183.59:9080. But let’s play by the rules and follow the official instructions on exposing the services via NodePort:

> wget https://raw.githubusercontent.com/istio/istio/release-1.0/samples/bookinfo/networking/bookinfo-gateway.yaml
> microk8s.kubectl create -f bookinfo-gateway.yaml

To get to the productpage through ingress we shamelessly copy the example instructions:

> microk8s.kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
31380

And our node is the localhost so we can point our browser to http://localhost:31380/productpage
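
A quick check from the command line that the gateway is routing traffic (the port comes from the jsonpath query above):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:31380/productpage
# 200 means the productpage is being served through the ingress gateway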

Show me some graphs!

Of course graphs look nice in a blog post, so here you go.

The Grafana Service

You will need to grab the ClusterIP of the Grafana service:

microk8s.kubectl -n istio-system get svc grafana

Prometheus is also available in the same way.

microk8s.kubectl -n istio-system get svc prometheus
The Prometheus Service

And for traces you will need to look at the jaeger-query.

microk8s.kubectl -n istio-system get service/jaeger-query
The Jaeger Service

The servicegraph endpoint is available with:

microk8s.kubectl -n istio-system get svc servicegraph
The ServiceGraph
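
If you would rather not look up ClusterIPs at all, any of these dashboards can also be port-forwarded to the host; a sketch for Grafana (3000 is the port the Istio 1.0 Grafana add-on listens on):

microk8s.kubectl -n istio-system port-forward svc/grafana 3000:3000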

I should stop here. Go and checkout the Istio documentation for more details on how to take advantage of what Istio is offering.

What to keep from this post

References


Microk8s puts up its Istio and sails away was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

A friend once asked, why would one prefer microk8s over minikube?… We never spoke since. True story!

That was a hard question, especially for an engineer. The answer is not so obvious largely because it has to do with personal preferences. Let me show you why.

Microk8s-wise this is what you have to do to have a local Kubernetes cluster with a registry:

sudo snap install microk8s --edge --classic
microk8s.enable registry

How is this great?

  • It is super fast! A couple of hundred MB over the internet tubes and you are all set.
  • You skip the pain of going through the docs for setting up and configuring Kubernetes with persistent storage and the registry.

So why is this bad?

  • As a Kubernetes engineer you may want to know what happens under the hood. What got deployed? What images? Where?
  • As a Kubernetes user you may want to configure the registry. Where are the images stored? Can you change any access credentials?

Do you see why this is a matter of preference? Minikube is a mature solution for setting up a Kubernetes in a VM. It runs everywhere (even on windows) and it does only one thing, sets up a Kubernetes cluster.

On the other hand, microk8s offers Kubernetes as an application. It is opinionated and it takes a step towards automating common development workflows. Speaking of development workflows...

The full story with the registry

The registry shipped with microk8s is available on port 32000 of the localhost. It is an insecure registry because, let’s be honest, who cares about security when doing local development :) .

And it’s getting better, check this out! The docker daemon used by microk8s is configured to trust this insecure registry. It is this daemon we talk to when we want to upload images. The easiest way to do so is by using the microk8s.docker command coming with microk8s:

# Lets get a Docker file first
wget https://raw.githubusercontent.com/nginxinc/docker-nginx/ddbbbdf9c410d105f82aa1b4dbf05c0021c84fd6/mainline/stretch/Dockerfile
# And build it
microk8s.docker build -t localhost:32000/nginx:testlocal .
microk8s.docker push localhost:32000/nginx:testlocal

If you prefer to use an external docker client you should point it to the socket dockerd is listening on:

docker -H unix:///var/snap/microk8s/docker.sock ps

To use an image from the local registry just reference it in your manifests:

apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  namespace: default
spec:
  containers:
  - name: nginx
    image: localhost:32000/nginx:testlocal
  restartPolicy: Always

And deploy with:

microk8s.kubectl create -f the-above-awesome-manifest.yaml
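
A quick way to confirm that the pod came up with the image from the local registry:

microk8s.kubectl get pod my-nginx
microk8s.kubectl describe pod my-nginx | grep -i image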

Microk8s and registry

What to keep from this post?

You want Kubernetes? We deliver it as a (sn)app!

You want to see your tool-chain in microk8s? Drop us a line. Send us a PR!

We are pleased to see happy Kubernauts!

Those of you who are here for the gossip. He was not that good of a friend (obviously!). We only met in a meetup :) !

References


Microk8s Docker Registry was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

In this post we will see how to automate the deployment of an ASP.NET Core application on an on-prem Kubernetes cluster. We will base our work on the excellent blog “Deploying ASP.NET Core apps on App Engine” by Mete Atamel. Our contribution is a) showing how to target public as well as private clouds for Kubernetes deployments, and b) automating the delivery of your software through a basic CI/CD pipeline based on Jenkins.


What will we be using

  • An Ubuntu 16.04 machine
  • git command-line client (installed by sudo apt install git)
  • Internet access
  • $0, yes this is going to be a “look how much you can do for free” kind of blog post

Quick outline

  • Create an ASP.NET Core application on the Ubuntu machine and push the code to GitHub.
  • Package the application in a Docker container and upload it to Docker Hub
  • Spin-up a Kubernetes cluster, the Canonical distribution. Here we will deploy that cluster locally, but you can use any private or public cloud you might have access to.
  • Deploy your application on the Kubernetes cluster and expose it.
  • Deploy Jenkins next to Kubernetes and automate the delivery of your app.

Let’s not waste any time we have a long way ahead of us.

ASP.NET Core applications on Linux

Installing .NET on an Ubuntu 16.04 is as simple as adding a repository and getting dotnet-sdk-2.1.4:

$ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg 
$ sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
$ sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
$ sudo apt-get install apt-transport-https
$ sudo apt-get update
$ sudo apt-get install dotnet-sdk-2.1.4

We can now create our application. It is going to be the template razor application as described in Mete’s codelab and we will not even bother to change the application’s name!

$ mkdir -p ~/workspace/dotnet
$ cd ~/workspace/dotnet
$ dotnet new razor -o HelloWorldAspNetCore
$ cd HelloWorldAspNetCore
$ dotnet run

You should see the application on your browser at http://localhost:5000

Our code will be on a public github repository so go ahead and create an account if you do not already have one (https://github.com). Click the “New repository” button to create a public repository named HelloWorldAspNetCore.

Creating a repository for our code.

Let’s add a .gitignore file at the root of our project and push our code:

$ cd ~/workspace/dotnet/HelloWorldAspNetCore
$ wget https://raw.githubusercontent.com/OmniSharp/generator-aspnet/master/templates/gitignore.txt
$ mv gitignore.txt .gitignore
$ git init
$ git add .
$ git commit -m "Initial commit"
$ git remote add origin https://github.com/ktsakalozos/HelloWorldAspNetCore.git
$ git push -u origin master

You can now relax! Your code is safe with github.

Package your application in a Docker container

Installing docker would require adding the respective repository and apt getting docker-ce:

$ sudo apt-get install apt-transport-https ca-certificates curl \
software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce
$ sudo usermod -a -G docker $USER
$ newgrp docker

While we are in the process of setting up docker we might as well register with Docker Hub and create a repository to store our images. Go to https://hub.docker.com and create an account. I will be here waiting :).

After logging in to Docker Hub click on the “Create Repository” button and create a new public repository. That is where we will be pushing our docker images. For the rest of this blog the docker user is “kjackal” and the repository is named “hello-dotnet”. This will make more sense to you shortly.

New docker repository form.

To package our application we first need to ask dotnet to compile our code and gather any dependencies into a folder for later deployment. This is done with:

$ dotnet publish -c Release

Our docker container should package everything under bin/Release/netcoreapp2.0/publish/. We create a `Dockerfile` at the root of our project with the following contents:

$ cd ~/workspace/dotnet/HelloWorldAspNetCore/
$ cat ./Dockerfile
FROM gcr.io/google-appengine/aspnetcore:2.0
ADD ./bin/Release/netcoreapp2.0/publish/ /app
ENV ASPNETCORE_URLS=http://*:${PORT}
WORKDIR /app
ENTRYPOINT [ "dotnet", "HelloWorldAspNetCore.dll"]

Building the container and testing that it works:

$ docker build -t kjackal/hello-dotnet:v1 .
$ docker run -p 8080:8080 kjackal/hello-dotnet:v1

You should see the output at http://localhost:8080 .

It is time to push the first version to Docker Hub:

$ docker login
$ docker push kjackal/hello-dotnet:v1

The Dockerfile should be part of the code so we commit it and push it to the git repository:

$ git add Dockerfile
$ git commit -m "Adding dockerfile"
$ git push origin master

Deploy a Kubernetes Cluster

For the on-prem Kubernetes deployments we will go with Canonical’s solution. The reason being (apart from me being biased) that Canonical offers a seamless and effortless transition from a toy deployment running on your laptop to a full blown production grade Kubernetes deployed on private, public clouds or even on bare metal.

In this blog we show how to deploy Kubernetes and Jenkins on your localhost (laptop/desktop) — just make sure you have at least 8GB of RAM. We first need to have LXD running. LXD is a really powerful type of container based on the same technologies as Docker. Contrary to Docker, LXD containers resemble virtual machines (VMs) more closely. Let’s simplify things and assume from now on that LXD containers are VMs that boot instantly and have no performance overhead!

In the following snippet we install LXD. While initialising (/snap/bin/lxd init) go with the defaults, but do not enable IPv6: when asked “What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]?”, reply with “none”.

$ sudo snap install lxd
$ sudo usermod -a -G lxd $USER
$ newgrp lxd
$ /snap/bin/lxd init
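If you prefer a non-interactive setup (handy for CI), lxd init also accepts a preseed on stdin. A minimal sketch along these lines should give the same result; key names may vary slightly across LXD versions:

$ cat <<EOF | /snap/bin/lxd init --preseed
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      type: nic
      nictype: bridged
      parent: lxdbr0
EOF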

At this point we can use either Juju or Conjure-up to deploy Kubernetes. Conjure-up is essentially a wizard sitting on top of Juju.

As already mentioned by Tim installing Kubernetes with conjure-up is as simple as:

$ sudo snap install conjure-up --classic
$ conjure-up

Canonical Kubernetes comes in two flavours:

  1. kubernetes-core is a cut-down version installed on two machines, in our case two LXD containers running on our localhost.
  2. canonical-kubernetes is the full production grade deployment with features such as HA and monitoring.

We will go for kubernetes-core; on the next screen select localhost as the cloud provider. Follow the wizard’s steps till the end and wait for the deployment to finish. For a headless deployment you can run conjure-up kubernetes-core localhost.

To review the status of our deployment have a look at:

$ juju status

To get into one of the LXD containers/machines we use juju ssh, for example:

$ juju ssh kubernetes-master/0

Inside kubernetes-master under /home/ubuntu you will find a config file you can use for accessing your cluster. We can fetch that file with:

$ juju scp kubernetes-master/0:config .

Conjure-up has already copied the Kubernetes config locally and installed kubectl for us. How nice!
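A quick way to verify everything is wired up:

$ kubectl cluster-info
$ kubectl get nodes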

Automate the deployment process, CI/CD

We will be showing a few Jenkins jobs to automate the process of building, packaging and deploying our application. The intention here is to show everything that happens under the hood and not hide behind a flashy UI.

First things first, we need a Jenkins machine.

$ juju deploy jenkins

We deploy Jenkins next to our Kubernetes cluster. This will take some time; you can check the progress of the deployment with juju status.

Next we need to set a password for Jenkins and expose it so we can access its UI on port 8080. Exposing Jenkins is not needed in the localhost deployment, which uses LXD containers, but we show it here for completeness.

$ juju config jenkins password='your_secure_password'
$ juju expose jenkins

Before we start crafting our jobs we need to configure Jenkins a bit more. I can tell you beforehand that our jobs need to run sudo without being prompted for a password. The easiest way to achieve that is to edit /etc/sudoers on the Jenkins machine. Here is how we append a line to the sudoers file with Juju:

$ juju run --unit jenkins/0 -- 'sudo echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'

We also know that our Jenkins jobs need to talk to Kubernetes. To this end Jenkins will need the kubeconfig file. We take the file from the kubernetes-master and place it on the Jenkins machine under /var/tmp:

$ juju scp kubernetes-master/0:config .
$ juju scp config jenkins/0:/var/tmp/

Last part of the configuration, I promise! We know our application will be exposed using Kubernetes NodePort on port 31576. We need to make sure there is no firewall blocking that port and requests can reach it:

$ juju run --application kubernetes-worker -- open-port 31576

We are now ready to create our three main jobs:

Three jobs we will be creating
  • The first Jenkins job is “Install dependencies”. It is just a shell script installing all the software packages needed to a) talk to Kubernetes, b) build our ASP.NET application and c) package everything into a docker container. Place the following in a shell script job and run it once:
echo "Installing kubectl"
sudo snap install kubectl --classic
echo "Installing dotnet"
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-get install apt-transport-https -y
sudo apt-get update
sudo apt-get install dotnet-sdk-2.1.4 -y
echo "Installing docker"
sudo apt-get install apt-transport-https ca-certificates curl \
software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install docker-ce -y
  • The second Jenkins job bootstraps the service in Kubernetes. It creates a deployment using the v1 image we created above and makes sure all operations are recorded (the --record param). This deployment is exposed using NodePort 31576. Place the following in a Jenkins job and run it once; remember to update the docker user name (kjackal):
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config run hello-dotnet \
--image=kjackal/hello-dotnet:v1 --port=8080 --record
echo "apiVersion: v1
kind: Service
metadata:
name: hello-dotnet
spec:
type: NodePort
ports:
- port: 8080
nodePort: 31576
name: http
selector:
run: hello-dotnet" > /tmp/expose.yaml
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config apply -f /tmp/expose.yaml

Use juju status to find the IP of a kubernetes worker and open a browser at http://<kubernetes-worker-ip>:31576. Your application is served from Kubernetes! But we are not done yet.
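As a side note, if you would rather script the address lookup, something along these lines should work (the exact field layout may vary between Juju versions):

$ juju status kubernetes-worker --format=yaml | grep public-address
$ curl http://<kubernetes-worker-ip>:31576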

  • Let's create a third job (“Build and release”) that pulls your code from GitHub, compiles it, puts it in a container and deploys that container. Replace the repository and the docker username in the following snippet, then create a Jenkins job:
rm -rf HelloWorldAspNetCore
git clone https://github.com/ktsakalozos/HelloWorldAspNetCore.git
cd HelloWorldAspNetCore
dotnet publish -c Release
sudo docker login -u kjackal -p ${DOCKER_PASS}
sudo docker build -t kjackal/hello-dotnet:${DOCKER_TAG} .
sudo docker push kjackal/hello-dotnet:${DOCKER_TAG}
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config set image deployment/hello-dotnet hello-dotnet=kjackal/hello-dotnet:${DOCKER_TAG}

Two parameters are needed: ${DOCKER_TAG} is a string, and ${DOCKER_PASS} holds the docker password of the user (kjackal in this case). You have to tick the checkbox indicating this is a parameterized job and add the two parameters. We are ready! Trigger the job and wait for it to finish. Your code should find its way to the Kubernetes cluster.

You do not believe me? Have a look at the rollout history of our deployment.

$ kubectl rollout history deployment/hello-dotnet
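And should a release ever misbehave, rolling back is equally short (a sketch; the revision number comes from the history above, and add --kubeconfig if your config is not in the default location):

$ kubectl rollout undo deployment/hello-dotnet
$ kubectl rollout undo deployment/hello-dotnet --to-revision=2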

Release from GitHub

Everything looks great so far. Every time we want to release we will log in to Jenkins and trigger the “Build and release” job. Let's try something different. Let's trigger a release from our code by creating a tag.

Create a “Release from Github” job on Jenkins and have it running periodically every 5 minutes:

Trigger job every 5 minutes

We want this job to look for new tags and, if it detects a new one, perform the usual compile, package, deploy cycle. Here is what such a job looks like:

#!/bin/bash
REPO="https://github.com/ktsakalozos/HelloWorldAspNetCore.git"
rm -rf ./HelloWorldAspNetCore
git clone $REPO
cd HelloWorldAspNetCore
# Initialise tags list
if [ ! -f /var/tmp/known-tags ]; then
git tag > /var/tmp/known-tags
echo "Initialising. List of preexisting git tags:"
cat /var/tmp/known-tags
exit 1
fi
mv /var/tmp/known-tags /var/tmp/known-tags.old
git tag > /var/tmp/known-tags
diff /var/tmp/known-tags /var/tmp/known-tags.old
if [ $? == '0' ]; then
echo "No new git tags detected."
exit 2
fi
# We have new tags
last_tag=$(grep -v -f /var/tmp/known-tags.old /var/tmp/known-tags | tail -n 1)
git checkout tags/${last_tag}
echo "Building ${last_tag}"
dotnet publish -c Release
sudo docker login -u kjackal -p <replace_with_docker_password>
sudo docker build -t kjackal/hello-dotnet:${last_tag} .
sudo docker push kjackal/hello-dotnet:${last_tag}
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config set image deployment/hello-dotnet hello-dotnet=kjackal/hello-dotnet:${last_tag}

Make sure you run this job once so it gets initialised. Afterwards, every 5 minutes, this job will look at the available tags and fail if no new tags are present.

Let's create a new tag/release now. Go to your GitHub repository, click Releases (as shown below) and fill in the release form.

Here is where you find the releases you have.
Release form on Github
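If you prefer the command line over the GitHub UI, pushing a tag has the same effect (v2 is just an example name):

$ git tag -a v2 -m "Release v2"
$ git push origin v2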

Within 5 minutes your release will reach Kubernetes, without you needing to log in to Jenkins. Automagically!
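To confirm which image the deployment ended up running, a jsonpath query does the trick (a sketch; add --kubeconfig if needed):

$ kubectl get deployment hello-dotnet -o jsonpath='{.spec.template.spec.containers[0].image}'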

A few points to note:

  1. There might be Jenkins plugins for this. But we said we will be looking at what is under the hood without hiding behind a GUI.
  2. You should place your Jenkins jobs on GitHub along with your code.

Where to go from here

So far you have seen the parts of a basic, yet fully functional, CI/CD pipeline that delivers a .NET application onto any Kubernetes cluster. Each of the steps shown above is subject to improvement and tailoring based on your needs.

  • The ASP.NET application would normally have automated tests. You would want to run these tests in your CI. Travis is a great tool for this purpose and, depending on the size and nature of your project, it could be free. Alternatively you could set up Jenkins to run these tests as often as you please and report back the results.
  • Look into Helm if you intend to distribute your application instead of offering it as a service hosted by you.
  • Experiment with release strategies and find the one that best suits your needs. Make sure you read through Kubernetes deployment strategies.
  • You could consider using the auto scaling features of Kubernetes; see the sketch after this list.
  • The Kubernetes cluster here is on LXD containers. You should use a cloud, either private (e.g. OpenStack) or public. With Conjure-up and Juju the cluster deployment process remains the same regardless of the targeted cloud. You have no excuse.
  • Finally, as you move to a cloud make sure you deploy the Canonical Distribution of Kubernetes instead of kubernetes-core. You will get a more robust deployment with HA features, logging and monitoring.
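For instance, wiring the hello-dotnet deployment to the Horizontal Pod Autoscaler is a one-liner (a sketch; the thresholds are arbitrary and a metrics source such as heapster must be running in the cluster):

$ kubectl autoscale deployment/hello-dotnet --min=1 --max=3 --cpu-percent=80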

Resources


Automated Delivery of ASP.NET Core Apps on On-Prem Kubernetes was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

I read the Hacker News post Heptio Contour and I thought “Cool! A project from our friends at Heptio, let's see what they have got for us”. I won't lie to you: at first I was a bit disappointed because there was no special mention of the Canonical Distribution of Kubernetes (CDK), but I understand, I am asking too much :). Let me cover that gap here.

Deploy CDK

To deploy CDK on Ubuntu you just need to do:

> sudo snap install conjure-up --classic
> conjure-up kubernetes-core

Using ‘kubernetes-core’ will give you a minimal k8s cluster — perfect for our use case. For a larger, more robust cluster, try ‘canonical-kubernetes.’

Deploy Contour and Demo App

CDK already comes with an ingress solution so you need to disable it and deploy Contour. Here we also deploy the demo kuard application:

> juju config kubernetes-worker ingress=false
> kubectl --kubeconfig=/home/jackal/.kube/config apply -f http://j.hept.io/contour-deployment-norbac
> kubectl --kubeconfig=/home/jackal/.kube/config apply -f http://j.hept.io/contour-kuard-example

Get Your App

The Contour service will be on a port that (depending on the cloud you are targeting) might be closed, so you need to open it before accessing kuard:

> kubectl --kubeconfig=/home/jackal/.kube/config get service -n heptio-contour contour -o wide
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
contour   LoadBalancer   10.152.183.201   <pending>     80:31226/TCP   2m    app=contour
> juju run --application kubernetes-worker open-port 31226
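Rather than reading the NodePort off the table, you can also extract it directly and feed it to the open-port call (assuming the same namespace and service name):

> kubectl --kubeconfig=/home/jackal/.kube/config get service -n heptio-contour contour -o jsonpath='{.spec.ports[0].nodePort}'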

And here it is running on AWS:

Instead of opening the ports to the outside world you could set the right DNS entries. However, this is specific to the cloud you are deploying to.

As the Contour README says “On AWS, create a CNAME record that maps the host in your Ingress object to the ELB address.”
“If you have an IP address instead (on GCE, for example), create an A record.”
For a localhost deployment your ports should not be blocked and you can fake a DNS entry by editing /etc/hosts.
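For example, assuming kuard.example is the host in your Ingress object and 10.245.1.20 is one of your worker addresses (both made up here):

> echo "10.245.1.20 kuard.example" | sudo tee -a /etc/hosts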

Thank you Heptio. Keep it up!

Resources

Read more
K.Tsakalozos

In our last post we discussed the steps required to build the Canonical Distribution of Kubernetes (CDK). That post should give you a good picture of the components coming together to form a release. Some of these components are architecture agnostic, some not. Here we will update CDK to support IBM’s s390x architecture.

The CDK bundle is made of charms written in Python running on Ubuntu. That means we are already in pretty good shape in terms of running on multiple architectures.

However, charms deploy binaries that are architecture specific. These binaries are of two types:

  1. snaps and
  2. juju resources

Snap packages are cross-distribution but unfortunately they are not cross-architecture. We need to build snaps for the architecture we are targeting. Snapped binaries include Kubernetes with its addons as well as etcd.

There are a few other architecture specific binaries that are not snapped yet. Flannel and CNI plugins are consumed by the charms as Juju resources.

Build snaps for s390x

CDK uses snaps to deploy a) Kubernetes binaries, b) add-on services and c) etcd binaries.

Snaps with Kubernetes binaries

Kubernetes snaps are built using the branch at https://github.com/juju-solutions/release/tree/rye/snaps/snap. You will need to log in to your s390x machine, clone the repository and check out the right branch:

On your s390x machine:
> git clone https://github.com/juju-solutions/release.git
> cd release
> git checkout rye/snaps
> cd snap

At this point we can build the Kubernetes snaps:

On your s390x machine:
> make KUBE_ARCH=s390x KUBE_VERSION=v1.7.4 kubectl kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubeadm kubefed

The above will fetch the released Kubernetes binaries from upstream and package them in snaps. You should see a bunch of *_1.7.4_s390x.snap files in the snap directory.

Snap Kubernetes addons

Apart from the Kubernetes binaries CDK also packages the Kubernetes addons as snaps. The process of building the cdk-addons_1.7.4_s390x.snap is almost identical to the Kubernetes snaps. Clone the cdk-addons repository and make the addons based on the Kubernetes version:

On your s390x machine:
> git clone https://github.com/juju-solutions/cdk-addons.git
> cd cdk-addons
> make KUBE_VERSION=v1.7.4 KUBE_ARCH=s390x

Snap with etcd binaries

The last snap we will need is the snap for etcd. This time we do not have fancy Makefiles but the build process is still very simple:

> git clone https://github.com/tvansteenburgh/etcd-snaps
> cd etcd-snaps/etcd-2.3
> snapcraft --target-arch s390x

At this point you should have etcd_2.3.8_s390x.snap. This snap, as well as the addons snap and one snap for each of kubectl, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, kubeadm and kubefed, will be attached to the Kubernetes charms upon release.

Make sure you go through this great post on using Kubernetes snaps.

Build juju resources for s390x

CDK needs to deploy the binaries for CNI and Flannel. These two binaries are not packaged as snaps (they might be in the future); instead they are provided as Juju resources. As these two binaries are architecture specific we need to build them for s390x.

CNI resource

For CNI you need to clone the respective repository, check out the version you need and compile using a docker environment:

> git clone https://github.com/containernetworking/cni.git cni 
> cd cni
> git checkout -f v0.5.1
> docker run --rm -e "GOOS=linux" -e "GOARCH=s390x" -v ${PWD}:/cni golang /bin/bash -c "cd /cni && ./build"

Have a look at how CI creates the tarball needed with the CNI binaries: https://github.com/juju-solutions/kubernetes-jenkins/blob/master/resources/build-cni.sh

Flannel resource

Support for s390x was added in version v0.8.0. Here is the usual cloning and building of the repository:

> git clone https://github.com/coreos/flannel.git flannel
> cd flannel
> git checkout -f v0.8.0
> make dist/flanneld-s390x

If you look at the CI script for the Flannel resource you will see that the tarball also contains the CNI binaries and the etcd client: https://github.com/juju-solutions/kubernetes-jenkins/blob/master/resources/build-flannel.sh

Building etcd:

> git clone https://github.com/coreos/etcd.git etcd 
> cd etcd
> git checkout -f v2.3.7
> docker run --rm -e "GOOS=linux" -e "GOARCH=s390x" -v ${PWD}:/etcd golang /bin/bash -c "cd /etcd && ./build"

Update charms

As we already mentioned, charms are written in python so they run on any platform/architecture. We only need to make sure they are fed the right architecture specific binaries to deploy and manage.

Kubernetes charms have a configuration option called channel. Channel points to a fallback snap channel where snaps are fetched from. It is a fallback because snaps are also attached to the charms when releasing them, and priority is given to those attached snaps. Let me explain how this mechanism works. When releasing a Kubernetes charm you need to provide a resource file for each of the snaps the charm will deploy. If you upload a zero-sized file (.snap) the charm will consider it a dummy snap and try to snap-install the respective snap from the official snap repository. Grabbing the snaps from the official repository works towards a single multi-arch charm since the architecture-specific repository is always available.

For the non-snapped binaries we built above (cni and flannel) we need to patch the charms to make sure those binaries are available as Juju resources. In this pull request we add support for multi-arch non-snapped resources. We add an additional Juju resource for cni built for the s390x architecture.

# In the metadata.yaml file of the kubernetes-worker charm
resources:
  cni-s390x:
    type: file
    filename: cni.tgz
    description: CNI plugins for s390x

The charm will concatenate “cni-” and the architecture of the system to form the name of the right cni resource. On amd64 the resource used is cni-amd64; on s390x it is cni-s390x.

We follow the same approach for the flannel charm. Have a look at the respective pull request.

Build and release

We need to build the charms, push them to the store, attach the resources and release them. Let's trace these steps for kubernetes-worker:

> cd <path_to_kubernetes_worker_layer>
> charm build
> cd <output_directory_probably_$JUJU_REPOSITORY/builds/kubernetes-worker>
> charm push . cs:~kos.tsakalozos/kubernetes-worker-s390x

You will get a revision number for your charm. In my case it was 0. Let's list the resources we have for that charm:

> charm list-resources cs:~kos.tsakalozos/kubernetes-worker-s390x-0

And attach the resources to the uploaded charm:

> cd <where_your_resources_are>
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 kube-proxy=./kube-proxy.snap
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 kubectl=./kubectl.snap
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 kubelet=./kubelet.snap
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 cni-s390x=./cni-s390x.tgz
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 cni-amd64=./cni-amd64.tgz

Notice how we have to attach one resource per architecture for the non-snapped cni resource. You can provide a valid amd64 build for cni, but since we are not building a multi-arch charm it would not matter, as it will not be used on the targeted s390x platform.

Now let’s release and grant read visibility to everyone:

> charm release cs:~kos.tsakalozos/kubernetes-worker-s390x-0 --channel edge -r cni-s390x-0 -r cni-amd64-0 -r kube-proxy-0 -r kubectl-0 -r kubelet-0
> charm grant cs:~kos.tsakalozos/kubernetes-worker-s390x-0 everyone

We omit building and releasing the rest of the charms for brevity.

Summing up

Things have lined up really nicely for CDK to support multiple hardware architectures. The architecture-specific binaries are well contained in Juju resources and most of them are now shipped as snaps. Building each binary is probably the toughest part of the process outlined above. Each binary has its own dependencies and build process, but the scripts linked here reduce the complexity a lot.

In the future we plan to support other architectures. We should end up with a single bundle that deploys on any architecture via a juju deploy canonical-kubernetes. You can follow our progress on our trello board. Do not hesitate to reach out to suggest improvements and tell us what architecture you think we should target next.

Read more
K.Tsakalozos

Patch CDK #1: Build & Release

Happens all the time. You often come across a super cool open source project you would gladly contribute to, but setting up the development environment and learning how to patch and release your fixes puts you off. The Canonical Distribution of Kubernetes (CDK) is not an exception. This set of blog posts will shed some light on the darkest secrets of CDK.

Welcome to the CDK patch journey!

What is your Build & Release workflow? (Figure from xkcd)

Build CDK from source

Prerequisites

You need to have Juju configured and ready to build charms. We will not be covering that in this blog post; please follow the official documentation to set up your environment and build your own first charm with layers.

Build the charms

CDK is made of a few charms, namely:

To build each charm you need to spot the top-level charm layer and do a `charm build` on it. The links in the above list will take you to the GitHub repository you need to clone and build. Let's try this out for easyrsa:

> git clone https://github.com/juju-solutions/layer-easyrsa
Cloning into ‘layer-easyrsa’…
remote: Counting objects: 55, done.
remote: Total 55 (delta 0), reused 0 (delta 0), pack-reused 55
Unpacking objects: 100% (55/55), done.
Checking connectivity… done.
> cd ./layer-easyrsa/
> charm build
build: Composing into /home/jackal/workspace/charms
build: Destination charm directory: /home/jackal/workspace/charms/builds/easyrsa
build: Processing layer: layer:basic
build: Processing layer: layer:leadership
build: Processing layer: easyrsa (from .)
build: Processing interface: tls-certificates
proof: OK!

The above builds the easyrsa charm and prints the output directory (/home/jackal/workspace/charms/builds/easyrsa in this case).

Building the kubernetes-* charms is slightly different. As you might already know, the kubernetes charm layers live upstream under cluster/juju/layers. Building the respective charms requires you to clone the kubernetes repository and pass the path to each layer to your invocation of charm build. Let's build the kubernetes-worker layer here:

> git clone https://github.com/kubernetes/kubernetes
Cloning into ‘kubernetes’…
remote: Counting objects: 602553, done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 602553 (delta 18), reused 20 (delta 15), pack-reused 602481
Receiving objects: 100% (602553/602553), 456.97 MiB | 2.91 MiB/s, done.
Resolving deltas: 100% (409190/409190), done.
Checking connectivity… done.
> cd ./kubernetes/
> charm build cluster/juju/layers/kubernetes-worker/
build: Composing into /home/jackal/workspace/charms
build: Destination charm directory: /home/jackal/workspace/charms/builds/kubernetes-worker
build: Processing layer: layer:basic
build: Processing layer: layer:debug
build: Processing layer: layer:snap
build: Processing layer: layer:nagios
build: Processing layer: layer:docker (from ../../../workspace/charms/layers/layer-docker)
build: Processing layer: layer:metrics
build: Processing layer: layer:tls-client
build: Processing layer: layer:nvidia-cuda (from ../../../workspace/charms/layers/nvidia-cuda)
build: Processing layer: kubernetes-worker (from cluster/juju/layers/kubernetes-worker)
build: Processing interface: nrpe-external-master
build: Processing interface: dockerhost
build: Processing interface: sdn-plugin
build: Processing interface: tls-certificates
build: Processing interface: http
build: Processing interface: kubernetes-cni
build: Processing interface: kube-dns
build: Processing interface: kube-control
proof: OK!

During charm build all layers and interfaces referenced recursively, starting from the top charm layer, are fetched and merged to form your charm. The layers needed for building a charm are specified in a layer.yaml file at the root of the charm’s directory. For example, looking at cluster/juju/layers/kubernetes-worker/layer.yaml we see that the kubernetes-worker charm uses the following layers and interfaces:

- 'layer:basic'
- 'layer:debug'
- 'layer:snap'
- 'layer:docker'
- 'layer:metrics'
- 'layer:nagios'
- 'layer:tls-client'
- 'layer:nvidia-cuda'
- 'interface:http'
- 'interface:kubernetes-cni'
- 'interface:kube-dns'
- 'interface:kube-control'

Layers are an awesome way to share operational logic among charms. For instance, the maintainers of the nagios layer have a better understanding of the operational needs of nagios, but that does not mean the authors of the kubernetes charms cannot use it.

charm build will recursively look up each layer and interface at http://interfaces.juju.solutions/ to figure out where the source is. Each repository is fetched locally and squashed with all the other layers to form a single package, the charm. Go ahead and do a charm build with “-l debug” to see how and when a layer is fetched. It is important to know that if you already have a local copy of a layer under $JUJU_REPOSITORY/layers or an interface under $JUJU_REPOSITORY/interfaces, charm build will use those local forks instead of fetching them from the registered repositories. This enables charm authors to work on cross-layer patches. Note that you might need to rename the directory of your local copy to match exactly the name of the layer or interface.
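For example, to watch the layer resolution and have charm build pick up a local fork of a layer, something like this should do (the paths are illustrative; point $JUJU_REPOSITORY at wherever you keep your charm source):

> export JUJU_REPOSITORY=$HOME/workspace/charms
> mkdir -p $JUJU_REPOSITORY/layers
> # place or clone your fork of a layer here, e.g. $JUJU_REPOSITORY/layers/layer-docker
> charm build -l debug cluster/juju/layers/kubernetes-worker/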

Building Resources

Charms will install Kubernetes, but to do so they need to have the Kubernetes binaries. We package these binaries in snaps so that they are self-contained and can be deployed on any Linux distribution. Building them is pretty straightforward as long as you know where to find them :)

Here is the repository holding the Kubernetes snaps: https://github.com/juju-solutions/release.git. The branch we want is rye/snaps:

> git clone https://github.com/juju-solutions/release.git
Cloning into ‘release’…
remote: Counting objects: 1602, done.
remote: Total 1602 (delta 0), reused 0 (delta 0), pack-reused 1602
Receiving objects: 100% (1602/1602), 384.69 KiB | 236.00 KiB/s, done.
Resolving deltas: 100% (908/908), done.
Checking connectivity… done.
> cd release
> git checkout rye/snaps
Branch rye/snaps set up to track remote branch rye/snaps from origin.
Switched to a new branch ‘rye/snaps’

Have a look at the README.md inside the snap directory to see how to build the snaps:

> cd snap/
> ./docker-build.sh KUBE_VERSION=v1.7.4

A number of .snap files should be available after the build.

In similar fashion you can build the snap package holding Kubernetes addons. We refer to this package as cdk-addons and it can be found at: https://github.com/juju-solutions/cdk-addons.git

> git clone https://github.com/juju-solutions/cdk-addons.git
Cloning into ‘cdk-addons’…
remote: Counting objects: 408, done.
remote: Total 408 (delta 0), reused 0 (delta 0), pack-reused 408
Receiving objects: 100% (408/408), 51.16 KiB | 0 bytes/s, done.
Resolving deltas: 100% (210/210), done.
Checking connectivity… done.
> cd cdk-addons/
> make

The last resource you will need (which is not packaged as a snap) is the container network interface (cni). Let's grab the repository and get to a release tag:

> git clone https://github.com/containernetworking/cni.git
Cloning into ‘cni’…
remote: Counting objects: 4048, done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 4048 (delta 0), reused 2 (delta 0), pack-reused 4043
Receiving objects: 100% (4048/4048), 1.76 MiB | 613.00 KiB/s, done.
Resolving deltas: 100% (1978/1978), done.
Checking connectivity… done.
> cd cni
> git checkout -f v0.5.1

Build and package the cni resource:

> docker run --rm -e "GOOS=linux" -e "GOARCH=amd64" -v `pwd`:/cni golang /bin/bash -c "cd /cni && ./build"
Building API
Building reference CLI
Building plugins
flannel
tuning
bridge
ipvlan
loopback
macvlan
ptp
dhcp
host-local
noop
> cd ./bin
> tar -cvzf ../cni.tgz *
bridge
cnitool
dhcp
flannel
host-local
ipvlan
loopback
macvlan
noop
ptp
tuning

You should now have a cni.tgz in the root folder of the cni repository.

Two things to note here:
- We do have a CI for building, testing and releasing charms and bundles. In case you want to follow each step of the build process, you can find our CI scripts here: https://github.com/juju-solutions/kubernetes-jenkins
- You do not need to build all resources yourself. You can grab the resources used in CDK from the Juju store. Starting from the canonical-kubernetes bundle you can navigate to any charm shipped. Select one from the very end of the bundle page and then look for the “resources” sidebar on the right. Download any of them, rename it properly and you are ready to use it in your release.

Releasing Your Charms

After patching the charms to match your needs, please consider submitting a pull request to tell us what you have been up to. Contrary to many other projects, you do not need to wait for your PR to be accepted before you can make your work public. You can immediately release your work under your own namespace on the store. This is described in detail in the official charm authors documentation. The developer team often uses private namespaces to test PoCs and new features. The main namespace CDK is released from is “containers”.

Yet, there is one feature you need to be aware of when attaching snaps to your charms. Snaps have their own release cycle and repositories. If you want to use the officially released snaps instead of attaching them to the charms, you can use a dummy zero-sized file with the correct extension (.snap) in place of each snap resource. The snap layer will see that the resource is empty and will try to grab the snap from the official repositories. Using the official snaps is recommended; however, in network-restricted environments you might need to attach your own snaps while you deploy the charms.
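In practice this is as simple as attaching empty files in place of the real snaps (a sketch; the charm revision and namespace are placeholders, and the resource names follow the worker charm):

> touch kubectl.snap kubelet.snap kube-proxy.snap
> charm attach ~<your-namespace>/kubernetes-worker-0 kubectl=./kubectl.snap
> charm attach ~<your-namespace>/kubernetes-worker-0 kubelet=./kubelet.snap
> charm attach ~<your-namespace>/kubernetes-worker-0 kube-proxy=./kube-proxy.snap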

Why is it this way?

Building CDK is of average difficulty as long as you know where to look. It is not perfect by any standards and it will probably continue down this path. The reason is that there are opposing forces shaping the build processes. This should come as no surprise. As Kubernetes changes rapidly and constantly expands, the build and release process should be flexible enough to include any new artefacts. Consider for example the switch from the flannel cni to calico. In our case it is a resource and a charm that need to be updated. A monolithic build script would have been more “elegant” to outsiders (e.g. make CDK), but we would have been hiding a lot under the carpet. CI should be part of the culture of any team and should be owned by the team, or else you get disconnected from the end product, causing delays and friction. Our build and release process might look a bit “dirty” with a lot of moving parts, but it really is not that bad! I managed to highlight the build and release steps in a single blog post. Positive feedback also comes from our field engineers. Most of the time CDK deploys out of the box. When our field engineers are called in, it is either because our customers have a special requirement from the software or they have an “unconventional” environment in which Kubernetes needs to be deployed. Having such a simple and flexible build and release process enables our people to solve a problem on-site and release it to the Juju store within a couple of hours.

Next steps

This blog post serves as foundation work for what is coming up. The plan is to go over some easy patches so we further demystify how CDK works. Funny thing this post was originally titled “Saying goodbye to my job security”. Cheers!

Read more
K.Tsakalozos

Sometimes we miss the forest for the trees.

It’s all about portability

Take a step back and think about how much of your effort goes into your product’s portability. A lot, I’d wager. You don’t wait to see what hardware your customer is on before you start coding your app. Widely accepted architectures give you a large customer base, so you target those. You skip optimisations in order to maximize compatibility. The language you are using ensures you can move to another platform/architecture.

Packaging your application is also important if you want to broaden your audience. The latest trend here is to use containers such as docker. Essentially containers (much like VMs) are another layer of abstraction for the sake of portability. You can package your app such that it will run anywhere.

https://medium.com/media/e87190de95f6f33f07f75beb4c8e7f39/href

Things go wrong

When you first create a PoC, even a USB stick is enough to distribute it! After that you should take things more seriously. I see products “dockerised” so that they run everywhere, yet the deployment story is limited to a single cloud. You do all the work to deliver your app everywhere only to fail when you actually need to deliver it! It doesn’t make sense — or to put it more accurately it only makes sense in the short term.

The overlooked feature

Canonical Distribution of Kubernetes (CDK) will deploy exactly the same Kubernetes in all major clouds (AWS, GCE, Azure, VMware, OpenStack, Joyent), on bare metal, or even on your local machine (using LXD). When you first evaluate CDK you may think: “I do not need this”, or even “I hate these extra two lines where I decide the cloud provider because I already know I am going full AWS”. At that point you are tempted to take the easy way out and get pinned to a single Kubernetes provider. However, soon you will find yourself in need of feature F from cloud C or Kubernetes version vX.Y.Z. Worse is when your potential customer is on a different platform and you discover the extent of the dependencies you have on your provider.

Before you click off this tab thinking I am exaggerating, think about this: the best devops are done by the devs themselves. How easy is it to train all devs to use AWS for spawning containers? It is trivial, right? Suppose you get two more customers, one on Azure and one on GCE. How easy is it to train your team to perform devops against two additional clouds? You now have to deal with three different monitoring and logging mechanisms. That might still be doable since AWS, GCE, and Azure are all well behaved. What if a customer wants their own on-prem cluster? You are done — kiss your portability and velocity goodbye. Your team will be constantly firefighting, chasing after misaligned versions, and emulating features across platforms.

How is it done

Under the hood of CDK you will find Juju. Juju abstracts the concepts of IaaS. It will provision everything from bare metal machines in your rack, to VMs on any major cloud, to lxd containers on your laptop. On top of these units Juju will deploy Kubernetes thus keeping the deployment agnostic to the underlying provider. Two points here:

  1. Juju has been here for years and is used both internally and externally by Canonical to successfully deliver solutions such as Openstack and Big Data.
  2. Since your Kubernetes is built on IaaS cloud offerings you are able to tailor node properties to match your needs. You may also find it is more cost effective.

Also check out Conjure-up, an easy-to-use wizard that will get your Kubernetes cluster up-and-running in minutes.
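For reference, standing up CDK on a cloud you already have credentials for boils down to two commands (AWS is used here purely as an example):

> juju bootstrap aws
> juju deploy canonical-kubernetes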

To conclude

I am sure you have your reasons to “dockerize” your application and then deliver it on a single cloud. It is just counter-intuitive. If you want to make the most from your portability decisions you should consider CDK.

References:

Read more
Dustin Kirkland

Earlier this month, I spoke at ContainerDays, part of the excellent DevOpsDays series of conferences -- this one in lovely Portland, Oregon.

I gave a live demo of Kubernetes running directly on bare metal.  I was running it on an 11-node Ubuntu Orange Box -- but I used the exact same tools Canonical's world class consulting team uses to deploy Kubernetes onto racks of physical machines.
You see, the ability to run Kubernetes on bare metal, behind your firewall, is essential to the yin-yang duality of Cloud Native computing.  Sometimes, what you need is actually a Native Cloud.
Deploying Kubernetes into virtual machines in the cloud is rather easy and straightforward, with dozens of tools that can now handle that.

But there's only one tool today that can deploy the exact same Kubernetes to AWS, Azure, GCE, as well as VMware, OpenStack, and bare metal machines.  That tool is conjure-up, which acts as a command line front end to several essential Ubuntu tools: MAAS, LXD, and Juju.

I don't know if the presentation was recorded, but I'm happy to share with you my slides for download, and embedded here below.  There are a few screenshots within that help convey the demo.




Cheers,
Dustin

Read more
Dustin Kirkland

Thank you to Oracle Cloud for inviting me to speak at this month's CloudAustin Meetup hosted by Rackspace.

I very much enjoyed deploying Canonical Kubernetes on Ubuntu in the Oracle Cloud, and then exploring Kubernetes a bit, how it works, the architecture, and a simple workload within.  I'm happy to share my slides below, and you can download a PDF here:


If you're interested in learning more, check out:
It was a great audience, with plenty of good questions, pizza, and networking!

I'm pleased to share my slide deck here.

Cheers,
Dustin

Read more
K.Tsakalozos

Introduction

In this post we discuss our efforts to set up a federation of on-prem Kubernetes clusters using CoreDNS. The Kubernetes version used is 1.6.2. We use Juju to deliver clusters on AWS, yet the clusters should be considered on-prem since they do not integrate with any of the cloud’s features. The steps described here are repeatable on any pool of resources you may have available (cloud or bare metal). Note that this is still a work in progress and should not be used in a production environment.

What is Federation

Cluster Federation (Fig. from http://blog.kubernetes.io/2016/07/cross-cluster-services.html)

Cluster federation allows you to “connect” your Kubernetes clusters and manage certain functionality from a central point. The need for central management arises when you have more than one cluster, possibly spread across different locations. You may need a globally distributed infrastructure for performance, availability or even legal reasons. You might want to have isolated clusters satisfying the needs of different teams. Whatever the case may be, federation can assist in some of the ops tasks. You can find more on the benefits of federation in the official documentation.

Why Cluster Federation with Juju

Juju makes delivery of Kubernetes clusters dead simple! Juju builds an abstraction over the pool of resources you have and allows us to deliver Kubernetes to all the main public and private clouds (AWS, Azure, GCE, OpenStack, Oracle, etc.) and bare metal physical clusters. Juju also enables you to experiment with small-scale deployments stood up on your local machine using lxc containers.

Having the option to deploy a cluster with the ease Juju offers, federation comes as a natural next step. Deployed clusters should be considered on-prem clusters as they are exactly the same as if they were deployed on bare metal. This is simply because even if you deploy on a cloud Juju will provision machines (virtual machines in this case) and deploy Kubernetes as if it were a local on-prem deployment. You see, Juju is not magic after all!

There is an excellent blog post showing how you can place Juju clusters under a federation hosting cluster set up on GCE. Here we focus on a federation that has no dependencies on a specific cloud provider. To this end, we go with CoreDNS as the federation DNS provider.

Let’s Federate

Deploy clusters

Our setup is made of 3 clusters:

  • f1: will host the federation control plane
  • c1, c2: federated clusters

At the time of this writing (2nd of June 2017) cluster federation is in beta and is not yet integrated into the official Kubernetes charms. Therefore we will be using a patched version of charms and bundles (source of charms and cdk-addons, charms master and worker, and bundle).

We assume you already have Juju installed and a controller in place. If not have a look at the Juju installation instructions.

Let’s deploy these three clusters.

#> juju add-model f1
#> juju deploy cs:~kos.tsakalozos/bundle/federated-kubernetes-core-2
#> juju add-model c1
#> juju deploy cs:~kos.tsakalozos/bundle/federated-kubernetes-core-2
#> juju add-model c2
#> juju deploy cs:~kos.tsakalozos/bundle/federated-kubernetes-core-2

Here is what the bundle.yaml looks like: http://paste.ubuntu.com/24746868/

Let’s copy and merge the kubeconfig files.

#> juju switch f1
#> juju scp kubernetes-master/0:config ~/.kube/config
#> juju switch c1
#> juju scp kubernetes-master/0:config ~/.kube/config.c1
#> juju switch c2
#> juju scp kubernetes-master/0:config ~/.kube/config.c2

Using this script you can merge the configs

#> ./merge-configs.py config.c1 c1
#> ./merge-configs.py config.c2 c2

To enable CoreDNS on f1 you just need to:

#> juju switch f1
#> juju config kubernetes-master enable-coredns=True

Have a look at your three contexts:

#> kubectl config get-contexts
CURRENT   NAME   CLUSTER      AUTHINFO   NAMESPACE
          f1     f1           ubuntu
          c1     c1-cluster   c1-user
          c2     c2-cluster   c2-user

Create the federation

At this point you are ready to deploy the federation control plane on f1. To do so you will need a call to kubefed init. Kubefed init will deploy the required services and it will also perform a health check ping to see that everything is ready. During this last check kubefed init will try to reach a service on a port that is not open. At the time of this writing we have a patch waiting upstream to make the port configurable so that you can open it before triggering the federation creation. If you do not want to wait for the patch to be merged, you can call kubefed init with -v 9 to see what port the services were assigned to and expose them through Juju while keeping kubefed running. This will become clear shortly.

Create a coredns-provider.conf file with the etcd endpoint of f1 and your zone. You can look at the Juju status to find out the IP of the etcd machine:

#> cat ./coredns-provider.conf 
[Global]
etcd-endpoints = http://54.211.106.117:2379
zones = juju-fed.com.

Call kubefed

#> juju switch f1
#> kubefed init federation --host-cluster-context=f1 --dns-provider="coredns" --dns-zone-name="juju-fed.com." --dns-provider-config=coredns-provider.conf --etcd-persistent-storage=false --api-server-service-type="NodePort" -v 9 --api-server-advertise-address=34.224.101.147 --image=gcr.io/google_containers/hyperkube-amd64:v1.6.2

Note here that the api-server-advertise-address=34.224.101.147 is the IP of the worker on your f1 cluster. You selected NodePort so the federation service is available from all the worker nodes under a random port.

Kubefed will end up trying to do a health check. You should see on your console something similar to this: “GET https://34.224.101.147:30229/healthz”. The port 30229 is randomly selected and will differ in your case. While keeping kubefed init running, go ahead and open this port from a second terminal:

#> juju run --application kubernetes-worker "open-port 30229"
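If you missed the port in the verbose output, it should also be readable from the service kubefed created (assuming the default federation-system namespace and a service named federation-apiserver, which may differ in your setup):

#> kubectl --context=f1 get svc -n federation-system federation-apiserver -o jsonpath='{.spec.ports[0].nodePort}'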

The federation context should be present in the list of contexts you have.

#> kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO     NAMESPACE
          f1           f1           ubuntu
          federation   federation   federation
          c1           c1-cluster   c1-user
          c2           c2-cluster   c2-user

Now you are ready to have other clusters join the federation:

#> kubectl config use-context federation
#> kubefed join c1 --host-cluster-context=f1
#> kubefed join c2 --host-cluster-context=f1

Two clusters should soon appear ready:

#> kubectl get clusters
NAME   STATUS   AGE
c1     Ready    3m
c2     Ready    1m

Congratulations! You are all set, your federation is ready to use.

What works

Set up your default namespace:

#> kubectl --context=federation create ns default

Get this replica set and create it.
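The manifest the original post linked to is a plain nginx ReplicaSet; reconstructed from the describe output further below, it would look roughly like this (the type=demo label, frontend container name and single replica are taken from that output, and apiVersion extensions/v1beta1 matches the 1.6.x clusters used here):

#> cat > nginx-rs.yaml <<EOF
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
    type: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: frontend
        image: nginx
        ports:
        - containerPort: 80
EOF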

#> kubectl --context=federation create -f nginx-rs.yaml

See your replica set created:

#> kubectl --context=federation describe rs/nginx
Name:         nginx
Namespace:    default
Selector:     app=nginx
Labels:       app=nginx
              type=demo
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  error in fetching pods: the server could not find the requested resource
Pod Template:
  Labels:  app=nginx
  Containers:
   frontend:
    Image:        nginx
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  FirstSeen  LastSeen  Count  From                             SubObjectPath  Type    Reason           Message
  ---------  --------  -----  ----                             -------------  ------  ------           -------
  26s        26s       1      federated-replicaset-controller                 Normal  CreateInCluster  Creating replicaset in cluster c2

Create a service.

#> kubectl --context=federation create -f beacon.yaml

Scale and open ports so the service is available:

#> kubectl --context=federation scale rs/nginx --replicas 3

What does not work yet

Federation does not seem to handle NodePort services. To see that, you can deploy a busybox pod and try to resolve the name of the service you deployed right above.

Use this yaml.

#> kubectl --context=c2 create -f bbox-4-test.yaml
#> kubectl --context=c2 exec -it busybox -- nslookup beacon

This returns a local cluster IP, which is rather unexpected.

There are some configuration options that did not have the effect I was expecting. First, we would want to expose the CoreDNS service as described in the docs. Assigning the same NodePort for both UDP and TCP had no effect; this is rather unfortunate since the issue is already addressed. The configuration described here and here (although it should be automatically applied I thought I would give it a try) did not seem to have any effect.

Open issues:

Conclusions

Federation of on-prem clusters has come a long way and is getting significant improvements with every release. Some of the issues described here have already been addressed; I am anxiously waiting for the upcoming v1.7.0 release to see the fixes in action.

Do not hesitate to reach out to the sig-federation group, but most importantly I would be extremely happy and grateful if you pointed out any bugs in my configs/code/thoughts ;)

References and Links

  1. Official federations docs, https://kubernetes.io/docs/concepts/cluster-administration/federation/
  2. Juju docs, https://jujucharms.com/
  3. Canonical Distribution of Kubernetes, https://jujucharms.com/canonical-kubernetes/
  4. Federation with host on GCE, https://medium.com/google-cloud/experimenting-with-cross-cloud-kubernetes-cluster-federation-dfa99f913d54
  5. CDK-addons source, https://github.com/juju-solutions/cdk-addons/tree/feature/coredns
  6. Kubernetes charms, https://github.com/juju-solutions/kubernetes/tree/feature/coredns
  7. Kubernetes master charm, https://jujucharms.com/u/kos.tsakalozos/kubernetes-master/31
  8. Kubernetes worker charm, https://jujucharms.com/u/kos.tsakalozos/kubernetes-worker/17
  9. Kubernetes bundle used for federation, https://jujucharms.com/u/kos.tsakalozos/federated-kubernetes-core/bundle/0
  10. Juju getting started, https://jujucharms.com/docs/stable/getting-started
  11. Patch kubefed init, https://github.com/kubernetes/kubernetes/pull/46283
  12. CoreDNS setup instructions https://kubernetes.io/docs/tasks/federation/set-up-coredns-provider-federation/#setup-coredns-server-in-nameserver-resolvconf-chain
  13. CoreDNS yaml file https://github.com/juju-solutions/cdk-addons/blob/feature/coredns/coredns/coredns-rbac.yaml#L95
  14. Nodeport UDP/TCP issue, https://github.com/kubernetes/kubernetes/issues/20092
  15. Federation DNS on-prem configuration https://kubernetes.io/docs/admin/federation/#kubernetes-15-passing-federations-flag-via-config-map-to-kube-dns

Read more
Dustin Kirkland



Yesterday, I delivered a talk to a lively audience at ContainerWorld in Santa Clara, California.

If I measured "the most interesting slides" by counting "the number of people who took a picture of the slide", then by far "the most interesting slides" are slides 8-11, which pose and answer the question:
"Should I run my PaaS on top of my IaaS, or my IaaS on top of my PaaS"?
In the Ubuntu world, that answer is super easy -- however you like!  At Canonical, we're happy to support:
  1. Kubernetes running on top of Ubuntu OpenStack
  2. OpenStack running on top of Canonical Kubernetes
  3. Kubernetes running along side OpenStack
In all cases, the underlying substrate is perfectly consistent:
  • you've got 1 to N physical or virtual machines
  • which are dynamically provisioned by MAAS or your cloud provider
  • running stable, minimal, secure Ubuntu server image
  • carved up into fast, efficient, independently addressable LXD machine containers
With that as your base, we can easily conjure-up a Kubernetes, an OpenStack, or both.  And once you have a Kubernetes or OpenStack, we'll gladly conjure-up one inside the other.


As always, I'm happy to share my slides with you here.  You're welcome to download the PDF, or flip through the embedded slides below.



Cheers,
Dustin

Read more