Canonical Voices

Posts tagged with 'kubernetes'

K. Tsakalozos

Looking at the configuration of a Kubernetes node sounds like a simple thing, yet it is not so obvious.

The arguments kubelet takes come either as command line parameters or from a configuration file you pass with --config. It seems straightforward to do a ps -ex | grep kubelet and look in the file referenced by the --config parameter. Simple, right? But… are you sure you got all the arguments right? What if Kubernetes defaulted to a value you did not want? What if you do not have shell access to a node?

There is a way to query the Kubernetes API for the configuration a node is running with: api/v1/nodes/<node_name>/proxy/configz. Let's see this in a real deployment.

Deploy a Kubernetes Cluster

I am using the Canonical Distribution of Kubernetes (CDK) on AWS here but you can use whichever cloud and Kubernetes installation method you like.

juju bootstrap aws
juju deploy canonical-kubernetes

…and wait for the deployment to finish

watch juju status 

Change a Configuration

CDK allows for configuring both the command line arguments and the extra arguments of the config file. Here we add arguments to the config file:

juju config kubernetes-worker kubelet-extra-config='{imageGCHighThresholdPercent: 60, imageGCLowThresholdPercent: 39}'

A great question is how we got the imageGCHighThresholdPercent literal. At the time of this writing the official upstream docs point you to the type definitions, which is a rather ugly approach. For example, there is an EvictionHard property in the type definitions, yet in the example in the upstream docs the same property appears in lowercase (evictionHard).
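If you are ever unsure about a field name or its casing, one trick is to grep the live configuration of a node once you have the API proxy from the next section running; a quick sketch, with the node name left as a placeholder:

curl -sSL "http://localhost:8001/api/v1/nodes/<node_name>/proxy/configz" | python3 -m json.tool | grep -i "thresholdpercent"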

Check the Configuration

We will need two shells. On the first one we will start the API proxy and on the second we will query the API. On the first shell:

juju ssh kubernetes-master/0
kubectl proxy

Now that we have the proxy at 127.0.0.1:8001 on the kubernetes-master, use a second shell to get a node name and query the API:

juju ssh kubernetes-master/0
kubectl get no
curl -sSL "http://localhost:8001/api/v1/nodes/<node_name>/proxy/configz" | python3 -m json.tool

Here is a full run:

juju ssh kubernetes-master/0
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1023-aws x86_64)
* Documentation:  https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Mon Oct 22 10:40:40 UTC 2018
System load:  0.11               Processes:              115
Usage of /: 13.7% of 15.45GB Users logged in: 1
Memory usage: 20% IP address for ens5: 172.31.0.48
Swap usage: 0% IP address for fan-252: 252.0.48.1
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.
Last login: Mon Oct 22 10:38:14 2018 from 2.86.54.15
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@ip-172-31-0-48:~$ kubectl get no
NAME STATUS ROLES AGE VERSION
ip-172-31-14-174 Ready <none> 41m v1.12.1
ip-172-31-24-80 Ready <none> 41m v1.12.1
ip-172-31-63-34 Ready <none> 41m v1.12.1
ubuntu@ip-172-31-0-48:~$ curl -sSL "http://localhost:8001/api/v1/nodes/ip-172-31-14-174/proxy/configz" | python3 -m json.tool
{
"kubeletconfig": {
"syncFrequency": "1m0s",
"fileCheckFrequency": "20s",
"httpCheckFrequency": "20s",
"address": "0.0.0.0",
"port": 10250,
"tlsCertFile": "/root/cdk/server.crt",
"tlsPrivateKeyFile": "/root/cdk/server.key",
"authentication": {
"x509": {
"clientCAFile": "/root/cdk/ca.crt"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"registryPullQPS": 5,
"registryBurst": 10,
"eventRecordQPS": 5,
"eventBurst": 10,
"enableDebuggingHandlers": true,
"healthzPort": 10248,
"healthzBindAddress": "127.0.0.1",
"oomScoreAdj": -999,
"clusterDomain": "cluster.local",
"clusterDNS": [
"10.152.183.93"
],
"streamingConnectionIdleTimeout": "4h0m0s",
"nodeStatusUpdateFrequency": "10s",
"nodeLeaseDurationSeconds": 40,
"imageMinimumGCAge": "2m0s",
"imageGCHighThresholdPercent": 60,
"imageGCLowThresholdPercent": 39,
"volumeStatsAggPeriod": "1m0s",
"cgroupsPerQOS": true,
"cgroupDriver": "cgroupfs",
"cpuManagerPolicy": "none",
"cpuManagerReconcilePeriod": "10s",
"runtimeRequestTimeout": "2m0s",
"hairpinMode": "promiscuous-bridge",
"maxPods": 110,
"podPidsLimit": -1,
"resolvConf": "/run/systemd/resolve/resolv.conf",
"cpuCFSQuota": true,
"cpuCFSQuotaPeriod": "100ms",
"maxOpenFiles": 1000000,
"contentType": "application/vnd.kubernetes.protobuf",
"kubeAPIQPS": 5,
"kubeAPIBurst": 10,
"serializeImagePulls": true,
"evictionHard": {
"imagefs.available": "15%",
"memory.available": "100Mi",
"nodefs.available": "10%",
"nodefs.inodesFree": "5%"
},
"evictionPressureTransitionPeriod": "5m0s",
"enableControllerAttachDetach": true,
"makeIPTablesUtilChains": true,
"iptablesMasqueradeBit": 14,
"iptablesDropBit": 15,
"failSwapOn": false,
"containerLogMaxSize": "10Mi",
"containerLogMaxFiles": 5,
"configMapAndSecretChangeDetectionStrategy": "Watch",
"enforceNodeAllocatable": [
"pods"
]
}
}

Summing Up

There is a way to get the configuration of an online Kubernetes node through the Kubernetes API (api/v1/nodes/<node_name>/proxy/configz). This might be handy if you want to code against Kubernetes or you do not want to get into the intricacies of your particular cluster setup.
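If running a separate proxy feels heavy, the same endpoint can also be reached through kubectl's raw API access; a minimal sketch (the node name is a placeholder):

kubectl get --raw "/api/v1/nodes/<node_name>/proxy/configz" | python3 -m json.tool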

How to Inspect the Configuration of a Kubernetes Node was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

K. Tsakalozos

Istio almost immediately strikes you as enterprise grade software. Not so much because of the complexity it introduces, but more because of the features it adds to your service mesh. Must-have features packaged together in a coherent framework:

  • Traffic Management
  • Security Policies
  • Telemetry
  • Performance Tuning

Since microk8s positions itself as the local Kubernetes cluster developers prototype on, it is no surprise that deployment of Istio is made dead simple. Let’s start with the microk8s deployment itself:

> sudo snap install microk8s --classic

Istio deployment is available with:

> microk8s.enable istio

There is a single question we need to answer at this point: do we want to enforce mutual TLS authentication among sidecars? Istio places a proxy in front of your services so as to take control of routing, security, etc. If we know we have a mixed deployment with non-Istio and Istio enabled services, we would rather not enforce mutual TLS:

> microk8s.enable istio
Enabling Istio
Enabling DNS
Applying manifest
service/kube-dns created
serviceaccount/kube-dns created
configmap/kube-dns created
deployment.extensions/kube-dns created
Restarting kubelet
DNS is enabled
Enforce mutual TLS authentication (https://bit.ly/2KB4j04) between sidecars? If unsure, choose N. (y/N): y

Believe it or not, we are done. Istio v1.0 services are being set up; you can check the deployment progress with:

> watch microk8s.kubectl get all --all-namespaces

We have packaged istioctl in microk8s for your convenience:

> microk8s.istioctl get all --all-namespaces
NAME KIND NAMESPACE AGE
grafana-ports-mtls-disabled Policy.authentication.istio.io.v1alpha1 istio-system 2m
DESTINATION-RULE NAME   HOST                                             SUBSETS   NAMESPACE      AGE
istio-policy istio-policy.istio-system.svc.cluster.local istio-system 3m
istio-telemetry istio-telemetry.istio-system.svc.cluster.local istio-system 3m
GATEWAY NAME                      HOSTS     NAMESPACE      AGE
istio-autogenerated-k8s-ingress * istio-system 3m

Do not get scared by the number of services and deployments; everything is under the istio-system namespace. We are ready to start exploring!

Demo Time!

Istio needs to inject sidecars into the pods of your deployment. In microk8s auto-injection is supported, so the only thing you have to do is label the namespace you will be using with istio-injection=enabled:

> microk8s.kubectl label namespace default istio-injection=enabled

Let’s now grab the bookinfo example from the v1.0 Istio release and apply it:

> wget https://raw.githubusercontent.com/istio/istio/release-1.0/samples/bookinfo/platform/kube/bookinfo.yaml
> microk8s.kubectl create -f bookinfo.yaml

The following services should be available soon:

> microk8s.kubectl get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
details       ClusterIP   10.152.183.33    <none>        9080/TCP
kubernetes    ClusterIP   10.152.183.1     <none>        443/TCP
productpage   ClusterIP   10.152.183.59    <none>        9080/TCP
ratings       ClusterIP   10.152.183.124   <none>        9080/TCP
reviews       ClusterIP   10.152.183.9     <none>        9080/TCP

We can reach the services using the ClusterIP they have; we can for example get to the productpage in the above example by pointing our browser to 10.152.183.59:9080. But let’s play by the rules and follow the official instructions on exposing the services via NodePort:

> wget https://raw.githubusercontent.com/istio/istio/release-1.0/samples/bookinfo/networking/bookinfo-gateway.yaml
> microk8s.kubectl create -f bookinfo-gateway.yaml

To get to the productpage through ingress we shamelessly copy the example instructions:

> microk8s.kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
31380

And our node is the localhost so we can point our browser to http://localhost:31380/productpage
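Before heading to the dashboards below it helps to generate a bit of traffic, so the telemetry has something to show; a quick sketch using the NodePort we just discovered:

> for i in $(seq 1 100); do curl -s -o /dev/null http://localhost:31380/productpage; done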

Show me some graphs!

Of course graphs look nice in a blog post, so here you go.

The Grafana Service

You will need to grab the ClusterIP of the Grafana service:

microk8s.kubectl -n istio-system get svc grafana

Prometheus is also available in the same way.

microk8s.kubectl -n istio-system get svc prometheus
The Prometheus Service

And for traces you will need to look at the jaeger-query.

microk8s.kubectl -n istio-system get service/jaeger-query
The Jaeger Service

The servicegraph endpoint is available with:

microk8s.kubectl -n istio-system get svc servicegraph
The ServiceGraph

I should stop here. Go and check out the Istio documentation for more details on how to take advantage of what Istio is offering.


Microk8s puts up its Istio and sails away was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

K. Tsakalozos

A friend once asked, why would one prefer microk8s over minikube?… We never spoke since. True story!

That was a hard question, especially for an engineer. The answer is not so obvious largely because it has to do with personal preferences. Let me show you why.

Microk8s-wise this is what you have to do to have a local Kubernetes cluster with a registry:

sudo snap install microk8s --edge --classic
microk8s.enable registry

How is this great?

  • It is super fast! A couple of hundred MB over the internet tubes and you are all set.
  • You skip the pain of going through the docs for setting up and configuring Kubernetes with persistent storage and the registry.

So why is this bad?

  • As a Kubernetes engineer you may want to know what happens under the hood. What got deployed? What images? Where?
  • As a Kubernetes user you may want to configure the registry. Where are the images stored? Can you change any access credentials?

Do you see why this is a matter of preference? Minikube is a mature solution for setting up Kubernetes in a VM. It runs everywhere (even on Windows) and it does only one thing: it sets up a Kubernetes cluster.

On the other hand, microk8s offers Kubernetes as an application. It is opinionated and it takes a step towards automating common development workflows. Speaking of development workflows...

The full story with the registry

The registry shipped with microk8s is available on port 32000 of the localhost. It is an insecure registry because, let’s be honest, who cares about security when doing local development :) .

And it’s getting better, check this out! The docker daemon used by microk8s is configured to trust this insecure registry. It is this daemon we talk to when we want to upload images. The easiest way to do so is by using the microk8s.docker command coming with microk8s:

# Let's get a Dockerfile first
wget https://raw.githubusercontent.com/nginxinc/docker-nginx/ddbbbdf9c410d105f82aa1b4dbf05c0021c84fd6/mainline/stretch/Dockerfile
# And build and push it
microk8s.docker build -t localhost:32000/nginx:testlocal .
microk8s.docker push localhost:32000/nginx:testlocal

If you prefer to use an external docker client you should point it to the socket dockerd is listening on:

docker -H unix:///var/snap/microk8s/docker.sock ps

To use an image from the local registry just reference it in your manifests:

apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  namespace: default
spec:
  containers:
  - name: nginx
    image: localhost:32000/nginx:testlocal
  restartPolicy: Always

And deploy with:

microk8s.kubectl create -f the-above-awesome-manifest.yaml
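To convince yourself the image really came from the local registry, you can ask Kubernetes what the pod is running; a small sketch:

microk8s.kubectl get pod my-nginx -o jsonpath='{.spec.containers[0].image}'
# should print localhost:32000/nginx:testlocal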

Microk8s and registry

What to keep from this post?

You want Kubernetes? We deliver it as a (sn)app!

You want to see your tool-chain in microk8s? Drop us a line. Send us a PR!

We are pleased to see happy Kubernauts!

For those of you who are here for the gossip: he was not that good of a friend (obviously!). We only met at a meetup :)!

Microk8s Docker Registry was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

K. Tsakalozos

In this post we will see how to automate the deployment of an ASP.NET Core application on an on-prem Kubernetes cluster. We will base our work on the excellent blog "Deploying ASP.NET Core apps on App Engine" by Mete Atamel. Our contribution is a) showing how to target public as well as private clouds for Kubernetes deployments, and b) automating the delivery of your software through a basic CI/CD pipeline based on Jenkins.


What will we be using

  • An Ubuntu 16.04 machine
  • git command-line client (installed by sudo apt install git)
  • Internet access
  • $0, yes this is going to be a “look how much you can do for free” kind of blog post

Quick outline

  • Create an ASP.NET Core application on the Ubuntu machine and push the code to GitHub.
  • Package the application in a Docker container and upload it to Docker Hub
  • Spin-up a Kubernetes cluster, the Canonical distribution. Here we will deploy that cluster locally, but you can use any private or public cloud you might have access to.
  • Deploy your application on the Kubernetes cluster and expose it.
  • Deploy Jenkins next to Kubernetes and automate the delivery of your app.

Let’s not waste any time we have a long way ahead of us.

ASP.NET Core applications on Linux

Installing .NET on Ubuntu 16.04 is as simple as adding a repository and getting dotnet-sdk-2.1.4:

$ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg 
$ sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
$ sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
$ sudo apt-get install apt-transport-https
$ sudo apt-get update
$ sudo apt-get install dotnet-sdk-2.1.4

We can now create our application. It is going to be the template razor application as described in Mete’s codelab and we will not even bother to change the application’s name!

$ mkdir -p ~/workspace/dotnet
$ cd ~/workspace/dotnet
$ dotnet new razor -o HelloWorldAspNetCore
$ cd HelloWorldAspNetCore
$ dotnet run

You should see the application on your browser at http://localhost:5000

Our code will be on a public github repository so go ahead and create an account if you do not already have one (https://github.com). Click the “New repository” button to create a public repository named HelloWorldAspNetCore.

Creating a repository for our code.

Let's add a .gitignore file at the root of our project and push our code:

$ cd ~/workspace/dotnet/HelloWorldAspNetCore
$ wget https://raw.githubusercontent.com/OmniSharp/generator-aspnet/master/templates/gitignore.txt
$ mv gitignore.txt .gitignore
$ git init
$ git add .
$ git commit -m "Initial commit"
$ git remote add origin https://github.com/ktsakalozos/HelloWorldAspNetCore.git
$ git push -u origin master

You can now relax! Your code is safe with github.

Package your application in a Docker container

Installing docker requires adding the respective repository and apt-getting docker-ce:

$ sudo apt-get install apt-transport-https ca-certificates curl \
software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce
$ sudo usermod -a -G docker $USER
$ newgrp docker

While we are in the process of setting up docker we might as well register with Docker Hub and create a repository to store our images. Go to https://hub.docker.com and create an account. I will be here waiting :).

After logging in to Docker Hub click on the “Create Repository” button and create a new public repository. That is where we will be pushing our docker images. For the rest of this blog the docker user is “kjackal” and the repository is named “hello-dotnet”. This will make more sense to you shortly.

New docker repository form.

To package our application we first need to ask dotnet to compile our code and gather any dependencies into a folder for later deployment. This is done with:

$ dotnet publish -c Release

Our docker container should package everything under bin/Release/netcoreapp2.0/publish/. We create a `Dockerfile` on the root of our project with the following contents:

$ cd ~/workspace/dotnet/HelloWorldAspNetCore/
$ cat ./Dockerfile
FROM gcr.io/google-appengine/aspnetcore:2.0
ADD ./bin/Release/netcoreapp2.0/publish/ /app
ENV ASPNETCORE_URLS=http://*:${PORT}
WORKDIR /app
ENTRYPOINT [ "dotnet", "HelloWorldAspNetCore.dll"]

Building the container and testing that it works:

$ docker build -t kjackal/hello-dotnet:v1 .
$ docker run -p 8080:8080 kjackal/hello-dotnet:v1

You should see the output at http://localhost:8080 .
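While the container is running, a quick sanity check from a second shell should return the rendered page; just a sketch:

$ curl -s http://localhost:8080 | head -n 20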

It is time to push the first version to Docker Hub:

$ docker login
$ docker push kjackal/hello-dotnet:v1

The Dockerfile should be part of the code so we commit it and push it to the git repository:

$ git add Dockerfile
$ git commit -m "Adding dockerfile"
$ git push origin master

Deploy a Kubernetes Cluster

For the on-prem Kubernetes deployment we will go with Canonical's solution. The reason (apart from me being biased) is that Canonical offers a seamless and effortless transition from a toy deployment running on your laptop to a full blown production grade Kubernetes deployed on private clouds, public clouds, or even bare metal.

In this blog we show how to deploy Kubernetes and Jenkins on your localhost (laptop/desktop); just make sure you have at least 8GB of RAM. We first need to have LXD running. LXD is a really powerful type of container based on the same technologies as Docker. Contrary to Docker, LXD containers resemble virtual machines (VMs) more closely. Let's simplify things and assume from now on that LXD containers are VMs that boot instantly and have no performance overhead!

In the following snippet we install LXD. While initialising (/snap/bin/lxd init) make sure you go with the defaults, but do not enable IPv6: when asked "What IPv6 address should be used (CIDR subnet notation, "auto" or "none") [default=auto]?" reply with "none".

$ sudo snap install lxd
$ sudo usermod -a -G lxd $USER
$ newgrp lxd
$ /snap/bin/lxd init

At this point we can use either Juju or conjure-up to deploy Kubernetes. Conjure-up is essentially a wizard sitting on top of Juju.

As already mentioned by Tim installing Kubernetes with conjure-up is as simple as:

$ sudo snap install conjure-up --classic
$ conjure-up

Canonical Kubernetes comes in two flavours:

  1. kubernetes-core is a cut-down version installed on two machines, in our case two LXD containers running on our localhost.
  2. canonical-kubernetes is the full production grade deployment with features such as HA and monitoring.

We will go for kubernetes-core; on the next screen select localhost as the cloud provider. Follow the wizard’s steps till the end and wait for the deployment to finish. For a headless deployment you can do a conjure-up kubernetes-core localhost .

To review the status of our deployment have a look at:

$ juju status

To get into one of the LXD containers/machines we use juju ssh, for example:

$ juju ssh kubernetes-master/0

Inside kubernetes-master under /home/ubuntu you will find a config file you can use for accessing your cluster. We can fetch that file with:

$ juju scp kubernetes-master/0:config .

Conjure-up has already copied the Kubernetes config locally and installed kubectl for us. How nice!
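A quick way to verify that the locally installed kubectl can indeed reach the cluster is, for example:

$ kubectl cluster-info
$ kubectl get nodes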

Automate the deployment process, CI/CD

We will be showing a few Jenkins jobs to automate the process of building, packaging and deploying our application. The intention here is to show everything that happens under the hood and not hide behind a flashy UI.

First things first, we need a Jenkins machine.

$ juju deploy jenkins

We deploy Jenkins next to our Kubernetes cluster. This will take some time, you can check the progress of the deployment with juju status.

Next we need to set a password for Jenkins and expose it so we can access its UI on port 8080. Exposing Jenkins is not needed in the localhost deployment, which uses LXD containers, but we show it here for completeness.

$ juju config jenkins password='your_secure_password'
$ juju expose jenkins

Before we start crafting our jobs we need to configure Jenkins a bit more. I can tell you beforehand that our jobs need to run sudo commands without being asked for a password. The easiest way to do that is to edit /etc/sudoers on the Jenkins machine. Here is how we append a line to the sudoers file with Juju:

$ juju run --unit jenkins/0 -- 'sudo echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'

We also know that our Jenkins jobs need to talk to Kubernetes. To this end Jenkins will need the kubeconfig file. We take the file from the kubernetes-master and place it on our Jenkins machine under /var/tmp:

$ juju scp kubernetes-master/0:config .
$ juju scp config jenkins/0:/var/tmp/

Last part of the configuration, I promise! We know our application will be exposed using a Kubernetes NodePort on port 31576. We need to make sure there is no firewall blocking that port and that requests can reach it:

$ juju run --application kubernetes-worker -- open-port 31576

We are now ready to create our three main jobs:

Three jobs we will be creating
  • The first Jenkins job is "Install dependencies". This job is just a shell script installing all the software packages needed to a) talk to Kubernetes, b) build our ASP.NET application and c) package everything into a docker container. Place the following in a shell script job and run it once:
echo "Installing kubectl"
sudo snap install kubectl --classic
echo "Installing dotnet"
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-get install apt-transport-https -y
sudo apt-get update
sudo apt-get install dotnet-sdk-2.1.4 -y
echo "Installing docker"
sudo apt-get install apt-transport-https ca-certificates curl \
software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install docker-ce -y
  • The second Jenkins job bootstraps the service in Kubernetes. It creates a deployment using the v1 image we created above and makes sure all operations are recorded (the --record param). This deployment is exposed using NodePort 31576. Place the following in a Jenkins job and run it once; remember to update the docker user name (kjackal):
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config run hello-dotnet \
--image=kjackal/hello-dotnet:v1 --port=8080 --record
echo "apiVersion: v1
kind: Service
metadata:
name: hello-dotnet
spec:
type: NodePort
ports:
- port: 8080
nodePort: 31576
name: http
selector:
run: hello-dotnet" > /tmp/expose.yaml
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config apply -f /tmp/expose.yaml

Use juju status to find the IP of a kubernetes worker and open a browser at http://<kubernetes-worker-ip>:31576 . Your application is served from Kubernetes! But we are not done yet.
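If you prefer not to eyeball the juju status output, the worker addresses can be pulled out directly; a rough sketch:

$ juju status kubernetes-worker --format=yaml | grep public-address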

  • Let's create a third job ("Build and release") that pulls your code from GitHub, compiles it, puts it in a container and deploys that container. Replace the repository and the docker username in the following snippet, then create a Jenkins job:
rm -rf HelloWorldAspNetCore
git clone https://github.com/ktsakalozos/HelloWorldAspNetCore.git
cd HelloWorldAspNetCore
dotnet publish -c Release
sudo docker login -u kjackal -p ${DOCKER_PASS}
sudo docker build -t kjackal/hello-dotnet:${DOCKER_TAG} .
sudo docker push kjackal/hello-dotnet:${DOCKER_TAG}
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config set image deployment/hello-dotnet hello-dotnet=kjackal/hello-dotnet:${DOCKER_TAG}

Two parameters are needed: ${DOCKER_TAG} is a string, and ${DOCKER_PASS} holds the docker password of the user (kjackal in this case). You have to tick the checkbox indicating this is a parametrized job and add the two parameters. We are ready! Trigger the job and wait for it to finish. Your code should find its way to our Kubernetes cluster.

You do not believe me? Have a look at the rollout history of our deployment.

$ kubectl rollout history deployment/hello-dotnet

Release from GitHub

Everything looks great so far. Every time we want to release we will log in to Jenkins and trigger the "Build and release" job… Let's try something different. Let's trigger a release from our code by creating a tag.

Create a “Release from Github” job on Jenkins and have it running periodically every 5 minutes:

Trigger job every 5 minutes

We want this job to look for new tags and, if it detects a new one, to perform the usual compile, package, deploy cycle. Here is what one such job looks like:

#!/bin/bash
REPO="https://github.com/ktsakalozos/HelloWorldAspNetCore.git"
rm -rf ./HelloWorldAspNetCore
git clone $REPO
cd HelloWorldAspNetCore
# Initialise tags list
if [ ! -f /var/tmp/known-tags ]; then
  git tag > /var/tmp/known-tags
  echo "Initialising. List of preexisting git tags:"
  cat /var/tmp/known-tags
  exit 1
fi
mv /var/tmp/known-tags /var/tmp/know-tags.old
git tag > /var/tmp/known-tags
diff /var/tmp/known-tags /var/tmp/know-tags.old
if [ $? == '0' ]; then
  echo "No new git tags detected."
  exit 2
fi
# We have new tags
last_tag=$(grep -v -f /var/tmp/know-tags.old /var/tmp/known-tags | tail -n 1)
git checkout tags/${last_tag}
echo "Building ${last_tag}"
dotnet publish -c Release
sudo docker login -u kjackal -p <replace_with_docker_password>
sudo docker build -t kjackal/hello-dotnet:${last_tag} .
sudo docker push kjackal/hello-dotnet:${last_tag}
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config set image deployment/hello-dotnet hello-dotnet=kjackal/hello-dotnet:${last_tag}

Make sure you run this job once so it gets initialised. Afterwards, every 5 minutes, this job will look at the available tags and fail if no new tags are present.

Let's create a new tag/release now. Go to your GitHub repository, click releases (as shown below) and fill in the release form.

Here is where you find the releases you have.
Release form on Github

Within 5 minutes your release will reach Kubernetes, without you needing to log in to Jenkins. Automagically!

A few points to note:

  1. There might be Jenkins plugins for this. But we said we will be looking at what is under the hood without hiding behind a GUI.
  2. You should place your Jenkins jobs on GitHub along with your code.

Where to go from here

So far you have seen the parts of a basic, yet fully functional, CI/CD pipeline that delivers a .NET application onto any Kubernetes cluster. Each of the steps shown above is subject to improvements and tailoring based on your needs.

  • The ASP.NET application would normally have automated tests. You would want to run these tests in your CI. Travis is a great tool for this purpose and depending on the size and nature of your project it could be free. Alternatively you could setup Jenkins to run these tests as often as you please and report back the result.
  • Look into Helm if you intend to distribute your application instead of offering it as a service hosted by you.
  • Experiment with release strategies and find the one that best suits your needs. Make sure you read through Kubernetes deployment strategies.
  • You could consider using the auto scaling features of Kubernetes.
  • The Kubernetes cluster here is on LXD containers. You should use a cloud either private (eg Openstack) or public. With Conjure-up and Juju the cluster deployment process remains the same regardless of the targeted cloud. You have no excuse.
  • Finally, as you move to a cloud make sure you deploy the Canonical Distribution of Kubernetes instead of kubernetes-core. You will get a more robust deployment with HA features, logging and monitoring.

Automated Delivery of ASP.NET Core Apps on On-Prem Kubernetes was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

K. Tsakalozos

I read the Hacker News post on Heptio Contour and I thought "Cool! A project from our friends at Heptio, let's see what they have for us". I won't lie to you, at first I was a bit disappointed because there was no special mention of the Canonical Distribution of Kubernetes (CDK), but I understand, I am asking too much :). Let me cover this gap here.

Deploy CDK

To deploy CDK on Ubuntu you just need to do:

> sudo snap install conjure-up --classic
> conjure-up kubernetes-core

Using ‘kubernetes-core’ will give you a minimal k8s cluster — perfect for our use case. For a larger, more robust cluster, try ‘canonical-kubernetes.’

Deploy Contour and Demo App

CDK already comes with an ingress solution so you need to disable it and deploy Contour. Here we also deploy the demo kuard application:

> juju config kubernetes-worker ingress=false
> kubectl --kubeconfig=/home/jackal/.kube/config apply -f http://j.hept.io/contour-deployment-norbac
> kubectl --kubeconfig=/home/jackal/.kube/config apply -f http://j.hept.io/contour-kuard-example

Get Your App

The Contour service will be on a port that (depending on the cloud you are targeting) might be closed, so you need to open it before accessing kuard:

> kubectl --kubeconfig=/home/jackal/.kube/config get service -n heptio-contour  contour -o wide                                                                                                        
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
contour LoadBalancer 10.152.183.201 <pending> 80:31226/TCP 2m app=contour
juju run --application kubernetes-worker open-port 31226

And here it is running on AWS:

Instead of opening the ports to the outside world you could set the right DNS entries. However, this is specific to the cloud you are deploying to.

As the Contour README says “On AWS, create a CNAME record that maps the host in your Ingress object to the ELB address.”
“If you have an IP address instead (on GCE, for example), create an A record.”
For a localhost deployment your ports should not be blocked and you can fake a DNS entry by editing /etc/hosts.
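For that case, a sketch of faking the entry (both the worker IP and the hostname expected by your Ingress object are placeholders here):

> echo "<worker-ip> <your-ingress-host>" | sudo tee -a /etc/hosts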

Thank you Heptio. Keep it up!

K.Tsakalozos

In our last post we discussed the steps required to build the Canonical Distribution of Kubernetes (CDK). That post should give you a good picture of the components coming together to form a release. Some of these components are architecture agnostic, some not. Here we will update CDK to support IBM’s s390x architecture.

The CDK bundle is made of charms written in Python running on Ubuntu. That means we are already in pretty good shape in terms of running on multiple architectures.

However, charms deploy binaries that are architecture specific. These binaries are of two types:

  1. snaps and
  2. juju resources

Snap packages are cross-distribution but unfortunately they are not cross-architecture. We need to build snaps for the architecture we are targeting. Snapped binaries include Kubernetes with its addons as well as etcd.

There are a few other architecture specific binaries that are not snapped yet. Flannel and CNI plugins are consumed by the charms as Juju resources.

Build snaps for s390x

CDK uses snaps to deploy a) Kubernetes binaries, b) add-on services and c) etcd binaries.

Snaps with Kubernetes binaries

Kubernetes snaps are built using the branch at https://github.com/juju-solutions/release/tree/rye/snaps/snap. You will need to log in to your s390x machine, clone the repository and check out the right branch:

On your s390x machine:
> git clone https://github.com/juju-solutions/release.git
> cd release
> git checkout rye/snaps
> cd snap

At this point we can build the Kubernetes snaps:

On your s390x machine:
> make KUBE_ARCH=s390x KUBE_VERSION=v1.7.4 kubectl kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubeadm kubefed

The above will fetch the released Kubernetes binaries from upstream and package them in snaps. You should see a bunch of *_1.7.4_s390x.snap files in the snap directory.

Snap Kubernetes addons

Apart from the Kubernetes binaries, CDK also packages the Kubernetes addons as a snap. The process of building cdk-addons_1.7.4_s390x.snap is almost identical to that of the Kubernetes snaps. Clone the cdk-addons repository and make the addons based on the Kubernetes version:

On your s390x machine:
> git clone https://github.com/juju-solutions/cdk-addons.git
> cd cdk-addons
> make KUBE_VERSION=v1.7.4 KUBE_ARCH=s390x

Snap with etcd binaries

The last snap we will need is the snap for etcd. This time we do not have fancy Makefiles but the build process is still very simple:

> git clone https://github.com/tvansteenburgh/etcd-snaps
> cd etcd-snaps/etcd-2.3
> snapcraft --target-arch s390x

At this point you should have etcd_2.3.8_s390x.snap. This snap, as well as the addons snap and one snap for each of kubectl, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, kubeadm and kubefed, will be attached to the Kubernetes charms upon release.

Make sure you go through this great post on using Kubernetes snaps.

Build juju resources for s390x

CDK needs to deploy the binaries for CNI and Flannel. These two binaries are not packaged as snaps (they might be in the future), instead they are provided as Juju resources. As these two binaries are architecture specific we need to build them for s390x.

CNI resource

For CNI you need to clone the respective repository, checkout the version you need and compile using a docker environment:

> git clone https://github.com/containernetworking/cni.git cni 
> cd cni
> git checkout -f v0.5.1
> docker run --rm -e "GOOS=linux" -e "GOARCH=s390x" -v ${PWD}:/cni golang /bin/bash -c "cd /cni && ./build"

Have a look at how CI creates the tarball needed with the CNI binaries: https://github.com/juju-solutions/kubernetes-jenkins/blob/master/resources/build-cni.sh

Flannel resource

Support for s390x was added in Flannel version v0.8.0. Here is the usual cloning and building of the repository:

> git clone https://github.com/coreos/flannel.git flannel
> cd flannel
> git checkout -f v0.8.0
> make dist/flanneld-s390x

If you look at the CI script for Flannel resource you will see that the tarball also contains the CNI binaries and the etcd client: https://github.com/juju-solutions/kubernetes-jenkins/blob/master/resources/build-flannel.sh

Building etcd:

> git clone https://github.com/coreos/etcd.git etcd 
> cd etcd
> git checkout -f v2.3.7
> docker run --rm -e "GOOS=linux" -e "GOARCH=s390x" -v ${PWD}:/etcd golang /bin/bash -c "cd /etcd && ./build"

Update charms

As we already mentioned, charms are written in python so they run on any platform/architecture. We only need to make sure they are fed the right architecture specific binaries to deploy and manage.

Kubernetes charms have a configuration option called channel. Channel points to a fallback snap channel from which snaps are fetched. It is a fallback because snaps are also attached to the charms when releasing them, and priority is given to those attached snaps. Let me explain how this mechanism works. When releasing a Kubernetes charm you need to provide a resource file for each of the snaps the charm will deploy. If you upload a zero sized file (.snap) the charm will consider it a dummy snap and try to snap-install the respective snap from the official snap repository. Grabbing the snaps from the official repository works towards a single multi-arch charm, since the arch specific repository is always available.
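To make this concrete, releasing with dummy snap resources looks roughly like the following; a sketch only, with the namespace and revision as placeholders:

> touch kubectl.snap kubelet.snap kube-proxy.snap
> charm attach ~<your_namespace>/kubernetes-worker-<rev> kubectl=./kubectl.snap
> charm attach ~<your_namespace>/kubernetes-worker-<rev> kubelet=./kubelet.snap
> charm attach ~<your_namespace>/kubernetes-worker-<rev> kube-proxy=./kube-proxy.snap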

For the non snapped binaries we build above (cni and flannel) we need to patch the charms to make sure those binaries are available as Juju resources. In this pull request we add support for multi arch non snapped resources. We add an additional Juju resource for cni built s390x arch.

# In the metadata.yaml file of the kubernetes-worker charm
cni-s390x:
  type: file
  filename: cni.tgz
  description: CNI plugins for s390x

The charm will concatenate "cni-" and the architecture of the system to form the name of the right cni resource. On amd64 the resource to be used is cni-amd64; on s390x, cni-s390x is used.
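The lookup boils down to something like the following; a sketch of the logic, not the actual charm code:

# name the architecture the way dpkg does (amd64 on Intel, s390x on IBM Z)
ARCH=$(dpkg --print-architecture)
echo "cni-${ARCH}"   # the Juju resource the charm will look for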

We follow the same approach for the flannel charm. Have a look at the respective pull request.

Build and release

We need to build the charms, push them to the store, attach the resources and release them. Let's trace these steps for kubernetes-worker:

> cd <path_to_kubernetes_worker_layer>
> charm build
> cd <output_directory_probably_$JUJU_REPOSITORY/builds/kubernetes-worker>
> charm push . cs:~kos.tsakalozos/kubernetes-worker-s390x

You will get a revision number for your charm; in my case it was 0. Let's list the resources we have for that charm:

> charm list-resources cs:~kos.tsakalozos/kubernetes-worker-s390x-0

And attach the resources to the uploaded charm:

> cd <where_your_resources_are>
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 kube-proxy=./kube-proxy.snap
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 kubectl=./kubectl.snap
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 kubelet=./kubelet.snap
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 cni-s390x=./cni-s390x.tgz
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 cni-amd64=./cni-amd64.tgz

Notice how we have to attach one resource per architecture for the non-snapped cni resource. You could provide a valid amd64 build for cni, but since we are not building a multi-arch charm it would not matter, as it will not be used on the targeted s390x platform.

Now let’s release and grant read visibility to everyone:

> charm release cs:~kos.tsakalozos/kubernetes-worker-s390x-0 --channel edge -r cni-s390x-0 -r cni-amd64-0 -r kube-proxy-0 -r kubectl-0 -r kubelet-0
> charm grant cs:~kos.tsakalozos/kubernetes-worker-s390x-0 everyone

We omit building and releasing the rest of the charms for brevity.

Summing up

Things have lined up really nicely for CDK to support multiple hardware architectures. The architecture specific binaries are well contained in Juju resources and most of them are now shipped as snaps. Building each binary is probably the toughest part of the process outlined above. Each binary has its own dependencies and build process, but the scripts linked here reduce the complexity a lot.

In the future we plan to support other architectures. We should end up with a single bundle that deploys on any architecture via a juju deploy canonical-kubernetes. You can follow our progress on our trello board. Do not hesitate to reach out to suggest improvements and tell us what architecture you think we should target next.

K.Tsakalozos

Patch CDK #1: Build & Release

It happens all the time. You come across a super cool open source project you would gladly contribute to, but setting up the development environment and learning how to patch and release your fixes puts you off. The Canonical Distribution of Kubernetes (CDK) is no exception. This set of blog posts will shed some light on the darkest secrets of CDK.

Welcome to the CDK patch journey!

What is your Build & Release workflow? (Figure from xkcd)

Build CDK from source

Prerequisites

You will need to have Juju configured and ready to build charms. We will not be covering that in this blog post; please follow the official documentation to set up your environment and build your first charm with layers.

Build the charms

CDK is made of a few charms: easyrsa, etcd, flannel, kubeapi-load-balancer, kubernetes-master and kubernetes-worker.

To build each charm you need to find the top level charm layer and do a `charm build` on it. The repository of each charm layer is what you will need to clone and build. Let's try this out for easyrsa:

> git clone https://github.com/juju-solutions/layer-easyrsa
Cloning into ‘layer-easyrsa’…
remote: Counting objects: 55, done.
remote: Total 55 (delta 0), reused 0 (delta 0), pack-reused 55
Unpacking objects: 100% (55/55), done.
Checking connectivity… done.
> cd ./layer-easyrsa/
> charm build
build: Composing into /home/jackal/workspace/charms
build: Destination charm directory: /home/jackal/workspace/charms/builds/easyrsa
build: Processing layer: layer:basic
build: Processing layer: layer:leadership
build: Processing layer: easyrsa (from .)
build: Processing interface: tls-certificates
proof: OK!

The above builds the easyrsa charm and prints the output directory (/home/jackal/workspace/charms/builds/easyrsa in this case).

Building the kubernetes-* charms is slightly different. As you might already know, the Kubernetes charm layers live upstream in the kubernetes repository under cluster/juju/layers. Building the respective charms requires you to clone the kubernetes repository and pass the path of each layer to your invocation of charm build. Let's build the kubernetes-worker layer here:

> git clone https://github.com/kubernetes/kubernetes
Cloning into ‘kubernetes’…
remote: Counting objects: 602553, done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 602553 (delta 18), reused 20 (delta 15), pack-reused 602481
Receiving objects: 100% (602553/602553), 456.97 MiB | 2.91 MiB/s, done.
Resolving deltas: 100% (409190/409190), done.
Checking connectivity… done.
> cd ./kubernetes/
> charm build cluster/juju/layers/kubernetes-worker/
build: Composing into /home/jackal/workspace/charms
build: Destination charm directory: /home/jackal/workspace/charms/builds/kubernetes-worker
build: Processing layer: layer:basic
build: Processing layer: layer:debug
build: Processing layer: layer:snap
build: Processing layer: layer:nagios
build: Processing layer: layer:docker (from ../../../workspace/charms/layers/layer-docker)
build: Processing layer: layer:metrics
build: Processing layer: layer:tls-client
build: Processing layer: layer:nvidia-cuda (from ../../../workspace/charms/layers/nvidia-cuda)
build: Processing layer: kubernetes-worker (from cluster/juju/layers/kubernetes-worker)
build: Processing interface: nrpe-external-master
build: Processing interface: dockerhost
build: Processing interface: sdn-plugin
build: Processing interface: tls-certificates
build: Processing interface: http
build: Processing interface: kubernetes-cni
build: Processing interface: kube-dns
build: Processing interface: kube-control
proof: OK!

During charm build, all layers and interfaces referenced recursively, starting from the top charm layer, are fetched and merged to form your charm. The layers needed to build a charm are specified in a layer.yaml file at the root of the charm's directory. For example, looking at cluster/juju/layers/kubernetes-worker/layer.yaml we see that the kubernetes-worker charm uses the following layers and interfaces:

- 'layer:basic'
- 'layer:debug'
- 'layer:snap'
- 'layer:docker'
- 'layer:metrics'
- 'layer:nagios'
- 'layer:tls-client'
- 'layer:nvidia-cuda'
- 'interface:http'
- 'interface:kubernetes-cni'
- 'interface:kube-dns'
- 'interface:kube-control'

Layers are an awesome way to share operational logic among charms. For instance, the maintainers of the nagios layer have a better understanding of the operational needs of nagios, but that does not mean the authors of the kubernetes charms cannot use it.

charm build will recursively look up each layer and interface at http://interfaces.juju.solutions/ to figure out where the source is. Each repository is fetched locally and squashed with all the other layers to form a single package, the charm. Go ahead and do a charm build with "-l debug" to see how and when a layer is fetched. It is important to know that if you already have a local copy of a layer under $JUJU_REPOSITORY/layers or an interface under $JUJU_REPOSITORY/interfaces, charm build will use those local forks instead of fetching them from the registered repositories. This enables charm authors to work on cross-layer patches. Note that you might need to rename the directory of your local copy to match exactly the name of the layer or interface.
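As an example of working on a local fork, something along these lines should make charm build pick up your own copy of the docker layer; the paths are assumptions, adjust them to your setup:

> export JUJU_REPOSITORY=$HOME/workspace/charms
> mkdir -p $JUJU_REPOSITORY/layers
> git clone https://github.com/juju-solutions/layer-docker $JUJU_REPOSITORY/layers/docker
> charm build -l debug cluster/juju/layers/kubernetes-worker/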

Building Resources

Charms will install Kubernetes, but to do so they need to have the Kubernetes binaries. We package these binaries in snaps so that they are self-contained and can be deployed on any Linux distribution. Building these binaries is pretty straightforward as long as you know where to find them :)

Here is the repository holding the Kubernetes snaps: https://github.com/juju-solutions/release.git. The branch we want is rye/snaps:

> git clone https://github.com/juju-solutions/release.git
Cloning into ‘release’…
remote: Counting objects: 1602, done.
remote: Total 1602 (delta 0), reused 0 (delta 0), pack-reused 1602
Receiving objects: 100% (1602/1602), 384.69 KiB | 236.00 KiB/s, done.
Resolving deltas: 100% (908/908), done.
Checking connectivity… done.
> cd release
> git checkout rye/snaps
Branch rye/snaps set up to track remote branch rye/snaps from origin.
Switched to a new branch ‘rye/snaps’

Have a look at the README.md inside the snap directory to see how to build the snaps:

> cd snap/
> ./docker-build.sh KUBE_VERSION=v1.7.4

A number of .snap files should be available after the build.

In similar fashion you can build the snap package holding Kubernetes addons. We refer to this package as cdk-addons and it can be found at: https://github.com/juju-solutions/cdk-addons.git

> git clone https://github.com/juju-solutions/cdk-addons.git
Cloning into ‘cdk-addons’…
remote: Counting objects: 408, done.
remote: Total 408 (delta 0), reused 0 (delta 0), pack-reused 408
Receiving objects: 100% (408/408), 51.16 KiB | 0 bytes/s, done.
Resolving deltas: 100% (210/210), done.
Checking connectivity… done.
> cd cdk-addons/
> make

The last resource you will need (which is not packaged as a snap) is the container network interface (CNI). Let's grab the repository and check out a release tag:

> git clone https://github.com/containernetworking/cni.git
Cloning into ‘cni’…
remote: Counting objects: 4048, done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 4048 (delta 0), reused 2 (delta 0), pack-reused 4043
Receiving objects: 100% (4048/4048), 1.76 MiB | 613.00 KiB/s, done.
Resolving deltas: 100% (1978/1978), done.
Checking connectivity… done.
> cd cni
> git checkout -f v0.5.1

Build and package the cni resource:

> docker run --rm -e "GOOS=linux" -e "GOARCH=amd64" -v `pwd`:/cni golang /bin/bash -c "cd /cni && ./build"
Building API
Building reference CLI
Building plugins
flannel
tuning
bridge
ipvlan
loopback
macvlan
ptp
dhcp
host-local
noop
> cd ./bin
> tar -cvzf ../cni.tgz *
bridge
cnitool
dhcp
flannel
host-local
ipvlan
loopback
macvlan
noop
ptp
tuning

You should now have a cni.tgz in the root folder of the cni repository.

Two things to note here:
- We do have a CI for building, testing and releasing charms and bundles. In case you want to follow each step of the build process, you can find our CI scripts here: https://github.com/juju-solutions/kubernetes-jenkins
- You do not need to build all resources yourself. You can grab the resources used in CDK from the Juju store. Starting from the canonical-kubernetes bundle you can navigate to any charm shipped. Select one from the very end of the bundle page and then look for the “resources” sidebar on the right. Download any of them, rename it properly and you are ready to use it in your release.

Releasing Your Charms

After patching the charms to match your needs, please consider submitting a pull request to tell us what you have been up to. Contrary to many other projects, you do not need to wait for your PR to get accepted before you can make your work public. You can immediately release your work under your own namespace on the store; this is described in detail in the official charm authors documentation. The developer team often uses private namespaces to test PoCs and new features. The main namespace CDK is released from is "containers".

Yet, there is one feature you need to be aware of when attaching snaps to your charms. Snaps have their own release cycle and repositories. If you want to use the officially released snaps instead of attaching them to the charms, you can use a dummy zero sized file with the correct extension (.snap) in place of each snap resource. The snap layer will see that the resource is empty and will try to grab the snap from the official repositories. Using the official snaps is recommended; however, in network restricted environments you might need to attach your own snaps when you deploy the charms.

Why is it this way?

Building CDK is of average difficulty as long as you know where to look. It is not perfect by any standards and it will probably continue down this path. The reason is that there are opposing forces shaping the build process. This should come as no surprise. As Kubernetes changes rapidly and constantly expands, the build and release process should be flexible enough to include any new artefacts. Consider for example the switch from the flannel cni to calico: in our case it is a resource and a charm that need to be updated. A monolithic build script would have looked more "elegant" to outsiders (e.g., make CDK), but we would have been hiding a lot under the carpet.

CI should be part of the culture of any team and should be owned by the team, or else you get disconnected from the end product, causing delays and friction. Our build and release process might look a bit "dirty" with a lot of moving parts, but it really is not that bad! I managed to highlight the build and release steps in a single blog post. Positive feedback also comes from our field engineers. Most of the time CDK deploys out of the box. When our field engineers are called in, it is either because our customers have a special requirement from the software or because they have an "unconventional" environment in which Kubernetes needs to be deployed. Having such a simple and flexible build and release process enables our people to solve a problem on-site and release it to the Juju store within a couple of hours.

Next steps

This blog post serves as foundation work for what is coming up. The plan is to go over some easy patches so we can further demystify how CDK works. Funny thing: this post was originally titled "Saying goodbye to my job security". Cheers!

K.Tsakalozos

Sometimes we miss the forest for the trees.

It’s all about portability

Take a step back and think about how much of your effort goes into your product’s portability. A lot, I’d wager. You don’t wait to see what hardware your customer is on before you start coding your app. Widely accepted architectures give you a large customer base, so you target those. You skip optimisations in order to maximize compatibility. The language you are using ensures you can move to another platform/architecture.

Packaging your application is also important if you want to broaden your audience. The latest trend here is to use containers such as docker. Essentially containers (much like VMs) are another layer of abstraction for the sake of portability. You can package your app such that it will run anywhere.


Things go wrong

When you first create a PoC, even a USB stick is enough to distribute it! After that you should take things more seriously. I see products “dockerised” so that they run everywhere, yet the deployment story is limited to a single cloud. You do all the work to deliver your app everywhere only to fail when you actually need to deliver it! It doesn’t make sense — or to put it more accurately it only makes sense in the short term.

The overlooked feature

The Canonical Distribution of Kubernetes (CDK) will deploy exactly the same Kubernetes on all major clouds (AWS, GCE, Azure, VMware, OpenStack, Joyent), on bare metal, or even on your local machine (using LXD). When you first evaluate CDK you may think: "I do not need this", or even "I hate these extra two lines where I decide the cloud provider because I already know I am going full AWS". At that point you are tempted to take the easy way out and get pinned to a single Kubernetes provider. However, soon you will find yourself in need of feature F from cloud C or Kubernetes version vX.Y.Z. Worse is when your potential customer is on a different platform and you discover the extent of the dependencies you have on your provider.

Before you click off this tab thinking I am exaggerating, think about this: the best devops are done by the devs themselves. How easy is it to train all devs to use AWS for spawning containers? It is trivial, right? Suppose you get two more customers, one on Azure and one on GCE. How easy is it to train your team to perform devops against two additional clouds? You now have to deal with three different monitoring and logging mechanisms. That might still be doable since AWS, GCE, and Azure are all well behaved. What if a customer wants their own on-prem cluster? You are done — kiss your portability and velocity goodbye. Your team will be constantly firefighting, chasing after misaligned versions, and emulating features across platforms.

How is it done

Under the hood of CDK you will find Juju. Juju abstracts the concepts of IaaS. It will provision everything from bare metal machines in your rack, to VMs on any major cloud, to lxd containers on your laptop. On top of these units Juju will deploy Kubernetes thus keeping the deployment agnostic to the underlying provider. Two points here:

  1. Juju has been here for years and is used both internally and externally by Canonical to successfully deliver solutions such as Openstack and Big Data.
  2. Since your Kubernetes is built on IaaS cloud offerings you are able to tailor node properties to match your needs. You may also find it is more cost effective.
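To make the point concrete, the deployment commands stay the same no matter which substrate you bootstrap; a sketch, using whichever cloud you have credentials for:

juju bootstrap aws          # or: juju bootstrap azure / google / localhost
juju deploy canonical-kubernetes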

Also check out Conjure-up, an easy-to-use wizard that will get your Kubernetes cluster up-and-running in minutes.

To conclude

I am sure you have your reasons to “dockerize” your application and then deliver it on a single cloud. It is just counter-intuitive. If you want to make the most from your portability decisions you should consider CDK.

Dustin Kirkland

Earlier this month, I spoke at ContainerDays, part of the excellent DevOpsDays series of conferences -- this one in lovely Portland, Oregon.

I gave a live demo of Kubernetes running directly on bare metal.  I was running it on an 11-node Ubuntu Orange Box -- but I used the exact same tools Canonical's world class consulting team uses to deploy Kubernetes onto racks of physical machines.
You see, the ability to run Kubernetes on bare metal, behind your firewall is essential to the yin-yang duality of Cloud Native computing.  Sometimes, what you need is actually a Native Cloud.
Deploying Kubernetes into virtual machines in the cloud is rather easy, straightforward, with dozens of tools now that can handle that.

But there's only one tool today that can deploy the exact same Kubernetes to AWS, Azure, GCE, as well as VMware, OpenStack, and bare metal machines. That tool is conjure-up, which acts as a command line front end to several essential Ubuntu tools: MAAS, LXD, and Juju.

I don't know if the presentation was recorded, but I'm happy to share with you my slides for download, and embedded here below.  There are a few screenshots within that help convey the demo.




Cheers,
Dustin

Read more
Dustin Kirkland

Thank you to Oracle Cloud for inviting me to speak at this month's CloudAustin Meetup hosted by Rackspace.

I very much enjoyed deploying Canonical Kubernetes on Ubuntu in the Oracle Cloud, and then exploring Kubernetes a bit: how it works, its architecture, and a simple workload within.  I'm happy to share my slides below, and you can download a PDF here:


If you're interested in learning more, check out:
It was a great audience, with plenty of good questions, pizza, and networking!

I'm pleased to share my slide deck here.

Cheers,
Dustin

Read more
K.Tsakalozos

Introduction

In this post we discuss our efforts to set up a federation of on-prem Kubernetes clusters using CoreDNS. The Kubernetes version used is 1.6.2. We use Juju to deliver the clusters on AWS, yet they should be considered on-prem since they do not integrate with any of the cloud's features. The steps described here are repeatable on any pool of resources you may have available (cloud or bare metal). Note that this is still a work in progress and should not be used in a production environment.

What is Federation

Cluster Federation (Fig. from http://blog.kubernetes.io/2016/07/cross-cluster-services.html)

Cluster federation allows you to “connect” your Kubernetes clusters and manage certain functionality from a central point. The need for central management arises when you have more than one cluster, possibly spread across different locations. You may need a globally distributed infrastructure for performance, availability, or even legal reasons. You might want isolated clusters satisfying the needs of different teams. Whatever the case may be, federation can assist with some of the ops tasks. You can find more on the benefits of federation in the official documentation.

Why Cluster Federation with Juju

Juju makes delivery of Kubernetes clusters dead simple! Juju builds an abstraction over the pool of resources you have and lets you deliver Kubernetes to all the main public and private clouds (AWS, Azure, GCE, OpenStack, Oracle, etc.) and to bare metal clusters. Juju also enables you to experiment with small-scale deployments stood up on your local machine using lxc containers.

With Juju making cluster delivery this easy, federation comes as a natural next step. The deployed clusters should be considered on-prem clusters, as they are exactly the same as if they had been deployed on bare metal. This is simply because, even if you deploy on a cloud, Juju will provision machines (virtual machines in this case) and deploy Kubernetes as if it were a local on-prem deployment. You see, Juju is not magic after all!

There is an excellent blog post showing how you can place Juju clusters under a federation hosting cluster set up on GCE. Here we focus on a federation that has no dependencies on a specific cloud provider. To this end, we go with CoreDNS as the federation DNS provider.

Let’s Federate

Deploy clusters

Our setup is made of 3 clusters:

  • f1: will host the federation control plane
  • c1, c2: federated clusters

At the time of this writing (2nd of June 2017) cluster federation is in beta and is not yet integrated into the official Kubernetes charms. Therefore we will be using a patched version of charms and bundles (source of charms and cdk-addons, charms master and worker, and bundle).

We assume you already have Juju installed and a controller in place. If not, have a look at the Juju installation instructions.

Let’s deploy these three clusters.

#> juju add-model f1
#> juju deploy cs:~kos.tsakalozos/bundle/federated-kubernetes-core-2
#> juju add-model c1
#> juju deploy cs:~kos.tsakalozos/bundle/federated-kubernetes-core-2
#> juju add-model c2
#> juju deploy cs:~kos.tsakalozos/bundle/federated-kubernetes-core-2

Here is what the bundle.yaml looks like: http://paste.ubuntu.com/24746868/

Let’s copy and merge the kubeconfig files.

#> juju switch f1
#> juju scp kubernetes-master/0:config ~/.kube/config
#> juju switch c1
#> juju scp kubernetes-master/0:config ~/.kube/config.c1
#> juju switch c2
#> juju scp kubernetes-master/0:config ~/.kube/config.c2

Using this script, you can merge the configs:

#> ./merge-configs.py config.c1 c1
#> ./merge-configs.py config.c2 c2
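
If you do not have the merge script at hand, kubectl itself can flatten several kubeconfig files into one; a rough equivalent (note that it keeps each file's original context names instead of renaming them to c1 and c2) would be:

#> KUBECONFIG=~/.kube/config:~/.kube/config.c1:~/.kube/config.c2 kubectl config view --flatten > /tmp/merged
#> mv /tmp/merged ~/.kube/config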

To enable CoreDNS on f1 you just need to:

#> juju switch f1
#> juju config kubernetes-master enable-coredns=True

Have a look at your three contexts:

#> kubectl config get-contexts
CURRENT   NAME   CLUSTER      AUTHINFO   NAMESPACE
          f1     f1           ubuntu
          c1     c1-cluster   c1-user
          c2     c2-cluster   c2-user

Create the federation

At this point you are ready to deploy the federation control plane on f1. To do so you will need a call to kubefed init. Kubefed init will deploy the required services and it will also perform a health check to see that everything is ready. During this last check kubefed init will try to reach a service on a port that is not open. At the time of this writing we have a patch waiting upstream to make the port configurable, so you can open it before triggering the federation creation. If you do not want to wait for the patch to be merged, you can call kubefed init with -v 9 to see what port the services were assigned, and expose them through Juju while keeping kubefed running. This will become clear shortly.

Create a coredns-provider.conf file with the etcd endpoint of f1 and your zone. You can look at the Juju status to find out the IP of the etcd machine:

#> cat ./coredns-provider.conf 
[Global]
etcd-endpoints = http://54.211.106.117:2379
zones = juju-fed.com.
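
If you are unsure which address to use, a quick way to see the public address of the etcd unit in the f1 model is:

#> juju switch f1
#> juju status etcd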

Call kubefed

#> juju switch f1
#> kubefed init federation --host-cluster-context=f1 --dns-provider="coredns" --dns-zone-name="juju-fed.com." --dns-provider-config=coredns-provider.conf --etcd-persistent-storage=false --api-server-service-type="NodePort" -v 9 --api-server-advertise-address=34.224.101.147 --image=gcr.io/google_containers/hyperkube-amd64:v1.6.2

Note here that the api-server-advertise-address=34.224.101.147 is the IP of the worker on your f1 cluster. You selected NodePort so the federation service is available from all the worker nodes under a random port.

Kubefed will end up trying to do a health check. You should see on your console something similar to this: “GET https://34.224.101.147:30229/healthz” . The port 30229 is randomly selected and will differ in your case. While keeping kubefed init running, go ahead and open this port from a second terminal:

#> juju run --application kubernetes-worker "open-port 30229"
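
Instead of fishing the port out of the kubefed output, you can also read it from the federation API server service itself; assuming kubefed placed its control plane in the default federation-system namespace, something like this should show the assigned NodePort:

#> kubectl --context=f1 get svc -n federation-system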

The federation context should be present in the list of contexts you have.

#> kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO     NAMESPACE
          f1           f1           ubuntu
          federation   federation   federation
          c1           c1-cluster   c1-user
          c2           c2-cluster   c2-user

Now you are ready to have other clusters join the federation:

#> kubectl config use-context federation
#> kubefed join c1 --host-cluster-context=f1
#> kubefed join c2 --host-cluster-context=f1

Two clusters should soon appear ready:

#> kubectl get clusters
NAME      STATUS    AGE
c1        Ready     3m
c2        Ready     1m

Congratulations! You are all set, your federation is ready to use.

What works

Set up your default namespace:

#> kubectl --context=federation create ns default

Get this replica set and create it.
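
A minimal sketch of such a replica set, consistent with the describe output further down, could look like this (extensions/v1beta1 was the ReplicaSet API group in Kubernetes 1.6; the original file may differ slightly):

#> cat ./nginx-rs.yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
    type: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: frontend        # container name matches the describe output below
        image: nginx
        ports:
        - containerPort: 80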

#> kubectl --context=federation create -f nginx-rs.yaml

See your replica set created:

#> kubectl --context=federation describe rs/nginx
Name:           nginx
Namespace:      default
Selector:       app=nginx
Labels:         app=nginx
                type=demo
Annotations:    <none>
Replicas:       1 current / 1 desired
Pods Status:    error in fetching pods: the server could not find the requested resource
Pod Template:
  Labels:       app=nginx
  Containers:
   frontend:
    Image:        nginx
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  FirstSeen  LastSeen  Count  From                             SubObjectPath  Type    Reason           Message
  ---------  --------  -----  ----                             -------------  ------  ------           -------
  26s        26s       1      federated-replicaset-controller                 Normal  CreateInCluster  Creating replicaset in cluster c2

Create a service.
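
The service yaml is not reproduced here; a hypothetical NodePort service named beacon, fronting the nginx replica set above, could look like this:

#> cat ./beacon.yaml
apiVersion: v1
kind: Service
metadata:
  name: beacon
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80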

#> kubectl --context=federation create -f beacon.yaml

Scale the replica set, and open ports so the service is reachable (a sketch of the port-opening step follows the scale command):

#> kubectl --context=federation scale rs/nginx --replicas=3
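
The port-opening half mirrors what we did for the federation API server: look up the NodePort the beacon service was given in each federated cluster and open it on that cluster's workers. The port below is illustrative; yours will differ:

#> kubectl --context=c2 get svc beacon
#> juju switch c2
#> juju run --application kubernetes-worker "open-port 30080"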

What does not work yet

Federation does not seem to handle NodePort services. To see this, deploy a busybox pod and try to resolve the name of the service you created right above.

Use this yaml.
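
The busybox yaml is just a throwaway test pod; a sketch along these lines should do (the original file may differ slightly):

#> cat ./bbox-4-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]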

#> kubectl --context=c2 create -f bbox-4-test.yaml
#> kubectl --context=c2 exec -it busybox -- nslookup beacon

This returns a local cluster IP, which is rather unexpected.

There are some configuration options that did not have the effect I was expecting. First, we would want to expose the CoreDNS service as described in the docs. Assigning the same NodePort to both the UDP and the TCP port had no effect; this is rather unfortunate, since that issue has already been addressed. The configuration described here and here (although it should be applied automatically, I thought I would give it a try) did not seem to have any effect either.
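
For reference, “assigning the same NodePort to both the UDP and the TCP port” means declaring the CoreDNS service roughly as below (the service name, selector, and port number are illustrative), which, as noted, did not behave as hoped:

apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: coredns
  ports:
  - name: dns
    protocol: UDP
    port: 53
    nodePort: 30053
  - name: dns-tcp
    protocol: TCP
    port: 53
    nodePort: 30053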


Conclusions

Federation of on-prem clusters has come a long way and is getting significant improvements with every release. Some of the issues described here have already been addressed, and I am eagerly waiting for the upcoming v1.7.0 release to see the fixes in action.

Do not hesitate to reach out to the sig-federation group, but most importantly I would be extremely happy and grateful if you pointed out any bugs in my configs/code/thoughts ;)

References and Links

  1. Official federations docs, https://kubernetes.io/docs/concepts/cluster-administration/federation/
  2. Juju docs, https://jujucharms.com/
  3. Canonical Distribution of Kubernetes, https://jujucharms.com/canonical-kubernetes/
  4. Federation with host on GCE, https://medium.com/google-cloud/experimenting-with-cross-cloud-kubernetes-cluster-federation-dfa99f913d54
  5. CDK-addons source, https://github.com/juju-solutions/cdk-addons/tree/feature/coredns
  6. Kubernetes charms, https://github.com/juju-solutions/kubernetes/tree/feature/coredns
  7. Kubernetes master charm, https://jujucharms.com/u/kos.tsakalozos/kubernetes-master/31
  8. Kubernetes worker charm, https://jujucharms.com/u/kos.tsakalozos/kubernetes-worker/17
  9. Kubernetes bundle used for federation, https://jujucharms.com/u/kos.tsakalozos/federated-kubernetes-core/bundle/0
  10. Juju getting started, https://jujucharms.com/docs/stable/getting-started
  11. Patch kubefed init, https://github.com/kubernetes/kubernetes/pull/46283
  12. CoreDNS setup instructions, https://kubernetes.io/docs/tasks/federation/set-up-coredns-provider-federation/#setup-coredns-server-in-nameserver-resolvconf-chain
  13. CoreDNS yaml file, https://github.com/juju-solutions/cdk-addons/blob/feature/coredns/coredns/coredns-rbac.yaml#L95
  14. NodePort UDP/TCP issue, https://github.com/kubernetes/kubernetes/issues/20092
  15. Federation DNS on-prem configuration, https://kubernetes.io/docs/admin/federation/#kubernetes-15-passing-federations-flag-via-config-map-to-kube-dns

Read more
Dustin Kirkland



Yesterday, I delivered a talk to a lively audience at ContainerWorld in Santa Clara, California.

If I measured "the most interesting slides" by counting "the number of people who took a picture of the slide", then by far "the most interesting slides" are slides 8-11, which pose an answer to the question:
"Should I run my PaaS on top of my IaaS, or my IaaS on top of my PaaS"?
In the Ubuntu world, that answer is super easy -- however you like!  At Canonical, we're happy to support:
  1. Kubernetes running on top of Ubuntu OpenStack
  2. OpenStack running on top of Canonical Kubernetes
  3. Kubernetes running along side OpenStack
In all cases, the underlying substrate is perfectly consistent:
  • you've got 1 to N physical or virtual machines
  • which are dynamically provisioned by MAAS or your cloud provider
  • running a stable, minimal, secure Ubuntu server image
  • carved up into fast, efficient, independently addressable LXD machine containers
With that as your base, we'll easily conjure-up a Kubernetes, an OpenStack, or both.  And once you have a Kubernetes or OpenStack, we'll gladly conjure-up one inside the other.


As always, I'm happy to share my slides with you here.  You're welcome to download the PDF, or flip through the embedded slides below.



Cheers,
Dustin

Read more