Canonical Voices

Posts tagged with 'canonical'

Dustin Kirkland

One of the many excellent suggestions from last year's HackerNews thread, Ask HN: What do you want to see in Ubuntu 17.10?, was to refresh the Ubuntu server's command line installer:



We're pleased to introduce this new installer, which will be the default Server installer for 18.04 LTS, and solicit your feedback.

Follow the instructions below to download the current daily image and install it into a KVM virtual machine.  Alternatively, you could write it to a flash drive and install a physical machine, or try it in the virtual machine of your choice (VMware, VirtualBox, etc.).

$ wget http://cdimage.ubuntu.com/ubuntu-server/daily-live/current/bionic-live-server-amd64.iso
$ qemu-img create -f raw target.img 10G
$ kvm -m 1024 -boot d -cdrom bionic-live-server-amd64.iso -hda target.img
...
$ kvm -m 1024 target.img

For those too busy to try it themselves at the moment, I've taken a series of screenshots below, for your review.

Finally, you can provide feedback, bugs, patches, and feature requests against the Subiquity project in Launchpad:



Cheers,
Dustin

Read more
Dustin Kirkland

February 2008, Canonical's office in Lexington, MA
10 years ago today, I joined Canonical, on the very earliest version of the Ubuntu Server Team!

And in the decade since, I've had the tremendous privilege to work with so many amazing people, and the opportunity to contribute so much open source software to the Ubuntu ecosystem.

To mark the occasion, I've reflected on much of my work over that time period and thought I'd put down in writing a few of the things I'm most proud of (in chronological order)...  Maybe one day, my daughters will read this and think their daddy was a real geek :-)

1. update-motd / motd.ubuntu.com (September 2008)

Throughout the history of UNIX, the "message of the day" was always manually edited and updated by the local system administrator.  Until Ubuntu's message-of-the-day.  In fact, I received an email from Dennis Ritchie and Jon "maddog" Hall, confirming this, in April 2010.  This started as a feature request for the Landscape team, but has turned out to be tremendously useful and informative to all Ubuntu users.  Just last year, we launched motd.ubuntu.com, which provides even more dynamic information about important security vulnerabilities and general news from the Ubuntu ecosystem.  Mathias Gug helped me with the design and publication.
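
To give a flavour of how the framework works (a minimal sketch, not one of the stock scripts): update-motd runs the executable scripts under /etc/update-motd.d/ in lexical order at login, via pam_motd, and concatenates their output.  The filename and message below are just examples:

$ cat /etc/update-motd.d/60-local-notice
#!/bin/sh
# Executable scripts in /etc/update-motd.d/ run in lexical order at login;
# anything they print is appended to the dynamic MOTD.
echo ""
echo " * Reminder: maintenance window every Sunday, 02:00-04:00 UTC"
$ sudo chmod +x /etc/update-motd.d/60-local-notice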

2. manpages.ubuntu.com (September 2008)

This was the first public open source project I worked on, in my spare time at Canonical.  I had a local copy of the Ubuntu archive and I was thinking about what sorts of automated jobs I could run on it.  So I wrote some scripts that extracted the manpages out of each package, formatted them as HTML, and published them into a structured set of web directories.  10 years later, it's still up and running, serving thousands of hits per day.  In fact, this was one of the ways we were able to shrink the Ubuntu minimal image -- by removing the manpages, since they're readable online.  Colin Watson and Kees Cook helped me with the initial implementation, and Matthew Nuzum helped with the CSS and Ubuntu theme in the HTML.
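
The original scripts aren't reproduced here, but the general shape is easy to sketch in shell; dpkg-deb and groff are my assumptions for illustration, not necessarily what the production pipeline uses:

#!/bin/sh
# Rough sketch: extract the manpages from a single .deb and render them as HTML.
deb="$1"
outdir="html"
mkdir -p "$outdir" extracted
dpkg-deb -x "$deb" extracted
for page in extracted/usr/share/man/man?/*.gz; do
    [ -e "$page" ] || continue
    name=$(basename "$page" .gz)
    zcat "$page" | groff -mandoc -Thtml > "$outdir/$name.html"
done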

3. Byobu (December 2008)

If you know me at all, you know my passion for the command line UI/UX that is "Byobu".  Byobu was born as the "screen-profiles" project, over lunch at Google in Mountain View, in December of 2008, at the Ubuntu Developer Summit.  Around the lunch table, several of us (including Nick Barcet, Dave Walker, Michael Halcrow, and others), shared our tips and tricks from our own ~/.screenrc configuration files.  In Cape Town, February 2010, at the suggestion of Gustavo Niemeyer, I ported Byobu from Screen to Tmux.  Since Ubuntu Servers don't generally have GUIs, Byobu is designed to be a really nice interface to the Ubuntu command line environment.

4. eCryptfs / Ubuntu Encrypted Home Directories (October 2009)

I was familiar with eCryptfs from its inception in 2005, in the IBM Linux Technology Center's Security Team, sitting next to Michael Halcrow, who was the original author.  When I moved to Canonical, I helped Michael maintain the userspace portion of eCryptfs (ecryptfs-utils) and I shepherded it into Ubuntu.  eCryptfs was super powerful, with hundreds of options and supported configurations, but all of that proved far too difficult for users at large.  So I set out to simplify it drastically, with an opinionated set of basic defaults.  I started with a simple command to mount a "Private" directory inside of your home directory, where you could stash your secrets.  A few months later, on a long flight to Paris, I managed to hack a new PAM module, pam_ecryptfs.c, that actually encrypted your entire home directory!  This was pretty revolutionary at the time -- predating Apple's FileVault or Microsoft's Bitlocker, even.  Today, tens of millions of Ubuntu users have used eCryptfs to secure their personal data.  I worked closely with Tyler Hicks, Kees Cook, Jamie Strandboge, Michael Halcrow, Colin Watson, and Martin Pitt on this project over the years.
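
For anyone curious what those opinionated defaults look like in practice today, the flow is roughly the following (the username is hypothetical, and the exact commands are worth checking against the ecryptfs-utils documentation):

$ sudo apt install ecryptfs-utils
$ ecryptfs-setup-private                 # create an encrypted ~/Private directory
$ sudo ecryptfs-migrate-home -u alice    # encrypt an existing user's entire home directory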

5. ssh-import-id (March 2010)

With the explosion of virtual machines and cloud instances in 2009 / 2010, I found myself constantly copying public SSH keys around.  Moreover, given Canonical's globally distributed nature, I also regularly found myself asking someone for their public SSH keys, so that I could give them access to an instance, perhaps for some pair programming or assistance debugging.  As it turns out, everyone I worked with had a Launchpad.net account, and had their public SSH keys available there.  So I created (at first) a simple shell script to securely fetch and install those keys.  Scott Moser helped clean up that earliest implementation.  Eventually, I met Casey Marshall, who helped rewrite it entirely in Python.  Moreover, we contacted the maintainers of GitHub, and asked them to expose users' public SSH keys via the API -- which they did!  Now, ssh-import-id is integrated directly into Ubuntu's new subiquity installer and used by many other tools, such as cloud-init and MAAS.
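
Usage is a one-liner; the Launchpad and GitHub usernames below are only examples:

$ sudo apt install ssh-import-id
$ ssh-import-id lp:kirkland    # fetch and install public keys from Launchpad
$ ssh-import-id gh:octocat     # fetch and install public keys from GitHub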

6. Orchestra / MAAS (August 2011)

In 2009, Canonical purchased 5 Dell laptops, which was the Ubuntu Server team's first "cloud".  These laptops were our very first lab for deploying and testing Eucalyptus clouds.  I was responsible for those machines at my house for a while, and I automated their installation with PXE, TFTP, DHCP, DNS, and a ton of nasty debian-installer preseed data.  That said -- it worked!  As it turned out, Scott Moser and Mathias Gug had both created similar setups at their houses for the same reason.  I was mentoring a new hire at Canonical at the time, named Andres Rodriguez, and he took over our part-time hacks and we worked together to create the Orchestra project.  Orchestra itself was short-lived.  It was severely limited by Cobbler as a foundation technology.  So the Orchestra project was killed by Canonical.  But, six months later, a new project was created, based on the same general concept -- physical machine provisioning at scale -- with an entire squad of engineers led by...Andres Rodriguez :-)  MAAS today is easily one of the most important projects in the Ubuntu ecosystem and one of the most successful products in Canonical's portfolio.

7. pollinate / pollen / entropy.ubuntu.com (February 2014)

In 2013, I set out to secure Ubuntu at large from a set of attacks stemming from insufficient entropy at first boot.  This was especially problematic in virtual machine instances, in public clouds, where every instance is, by design, exactly identical to many others.  Moreover, the first thing that instance does is usually ... generate SSH keys.  This isn't hypothetical -- it's quite real.  Raspberry Pi's running Debian were found susceptible to this exact problem in November 2015.  So I designed and implemented a client (a shell script that runs at boot, and fetches some entropy from one to many sources), as well as a high-performance server (written in golang).  The client is the 'pollinate' script, which runs on the first boot of every Ubuntu server, and the server is the cluster of physical machines processing hundreds of requests per minute at entropy.ubuntu.com.  Many people helped review the design and implementation, including Kees Cook, Jamie Strandboge, Seth Arnold, Tyler Hicks, James Troup, Scott Moser, Steve Langasek, Gustavo Niemeyer, and others.
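
On a modern Ubuntu server you can see the client side of this in action with something like the following (the unit name is from memory):

$ systemctl status pollinate.service          # the client runs once, early in boot
$ cat /proc/sys/kernel/random/entropy_avail   # the kernel's current entropy estimate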

8. The Orange Box (May 2014)

In December of 2011, in my regular 1:1 with my manager, Mark Shuttleworth, I told him about these new "Intel NUCs", which I had bought and placed around my house.  I had 3, each of which was running Ubuntu and attached to a TV as a media player (music, videos, pictures, etc).  In their spare time, though, they were OpenStack Nova nodes, capable of running a couple of virtual machines.  Mark immediately asked, "How many of those could you fit into a suitcase?"  Within 24 hours, Mark had reached out to the good folks at TranquilPC and introduced me to my new mission -- designing the Orange Box.  I worked with the Tranquil folks through Christmas, and we took our first delivery of 5 of these boxes in January of 2014.  Each chassis held 10 little Intel NUC servers and a switch, as well as a few peripherals.  Effectively, it's a small data center that travels.  We spent the next 4 months working on the hardware under wraps and then unveiled them at the OpenStack Summit in Atlanta in May 2014.  We've gone through a couple of iterations on the hardware and software over the last 4 years, and these machines continue to deliver tremendous value, from live demos at the booth, to customer workshops on premises, or simply accelerating our own developer productivity by "shipping them a lab in a suitcase".  I worked extensively with Dan Poler on this project, over the course of a couple of years.

9. Hollywood (December 2014)

Perhaps the highlight of my professional career came in October of 2016.  Watching Saturday Night Live with my wife Kim, we were laughing at a skit that poked fun at another of my favorite shows, Mr. Robot.  On the computer screen behind the main character, I clearly spotted Hollywood!  Hollywood is just a silly, fun little project I created on a plane one day, mostly to amuse Kim.  But now, it's been used on Saturday Night Live, NBC Dateline News, and in an Experian TV commercial!  Even Jess Frazelle created a Docker container for it.

10. petname / golang-petname / python-petname (January 2015)

From "warty warthog" to "bionic beaver", we've always had a focus on fun, and user experience here in Ubuntu.  How hard is it to talk to your colleague about your Amazon EC2 instance, "i-83ab39f93e"?  Or your container "adfxkenw"?  We set out to make something a little more user-friendly with our "petnames".  Petnames are randomly generated "adjective-animal" names, which are easy to pronounce, spell, and remember.  I curated and created libraries that are easily usable in Shell, Golang, and Python.  With the help of colleagues like Stephane Graber and Andres Rodriguez, we now use these in many places in the Ubuntu ecosystem, such as LXD and MAAS.

If you've read this post, thank you for indulging me in a nostalgic little trip down memory lane!  I've had an amazing time designing, implementing, creating, and innovating with some of the most amazing people in the entire technology industry.  And here's to a productive, fun future!

Cheers,
:-Dustin

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 alpha 1 and python-libmaas 0.6.0 have now been released and are available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 alpha 1 is available in the Bionic -proposed archive or in the following PPA:
ppa:maas/next
 
Python-libmaas Availability
Libmaas is available in the Ubuntu Bionic archive or you can download the source from:

MAAS 2.4.0 (alpha1)

Important announcements

Dependency on tgt (iSCSI) has now been dropped

Starting from MAAS 2.3, the way MAAS runs ephemeral environments and performs deployments was changed away from using iSCSI. Instead, we introduced the ability to do the same using a squashfs image. With that, we completely removed the requirement for having tgt at all, but we didn’t drop the dependency in 2.3. As of 2.4, however, tgt has now been completely removed.

Dependency on apache2 has now been dropped in the debian packages

Starting with MAAS 2.0, MAAS made the UI available on port 5240 and deprecated the use of port 80. However, to avoid breaking users upgrading from the previous LTS, MAAS continued to depend on apache2 to provide a reverse proxy allowing users to connect via port 80.

The MAAS snap, however, changed that behavior and no longer provides access to MAAS via port 80. In order to keep MAAS consistent with the snap, starting from MAAS 2.4 the debian package no longer depends on apache2 to provide a reverse proxy capability on port 80.

Python libmaas (0.6.0) now available in the Ubuntu Archive

I’m happy to announce that the new MAAS Client Library is now available in the Ubuntu Archives for Bionic. Libmaas is an asyncio based client library that provides a nice interface to interact with MAAS. More details below.

New Features & Improvements

Machine Locking

MAAS now adds the ability to lock machines, which prevents users from performing actions that could change a machine’s state. This gives MAAS a mechanism to prevent potentially catastrophic actions, such as mistakenly powering off machines or mistakenly releasing machines, which could bring workloads down.
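
Assuming the CLI exposes the new lock/unlock actions (the profile name and system ID below are hypothetical), locking a machine would look something like:

$ maas admin machine lock pk3b6x
$ maas admin machine unlock pk3b6x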

Audit logging

MAAS 2.4 now allows administrators to audit users’ actions, with the introduction of audit logging. The audit logs are available to administrators via the MAAS CLI/API, giving them a centralized location to access these logs.
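
As a sketch of how to pull these logs from the CLI (assuming the events endpoint filters on an AUDIT level; the profile name is hypothetical):

$ maas admin events query level=AUDIT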

Documentation is in the process of being published. For raw access please refer to the following link:

https://github.com/CanonicalLtd/maas-docs/pull/766/commits/eb05fb5efa42ba850446a21ca0d55cf34ced2f5d

Commissioning Harness – Supporting firmware upgrade and hardware specific scripts

The commissioning harness has been expanded with various improvements to help administrators write their own firmware upgrade and hardware-specific scripts. These improvements address several of the challenges administrators face when performing such tasks at scale. The improvements include:

  • Ability to auto-select all the firmware upgrade/storage hardware changes (API only, UI will be available soon)

  • Ability to run scripts only for the hardware they are intended to run on.

  • Ability to reboot the machine while on the commissioning environment without disrupting the commissioning process.

This allows administrators to:

  • Create a hardware-specific script by declaring which machines it needs to run on, specifying the PCI ID, modalias, vendor or model of the machine or device.

  • Create firmware upgrade scripts that require a reboot before the machine finishes the commissioning process, by describing this in the script’s metadata (a sketch of such a script follows this list).

  • Define where the script can obtain proprietary firmware and/or proprietary tools needed to perform the required operations.
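
As a sketch of what such a script might look like (the metadata field names and for_hardware syntax are from memory and should be checked against the MAAS documentation; the firmware tool is hypothetical):

#!/bin/bash
# --- Start MAAS 1.0 script metadata ---
# name: firmware_upgrade_superwidget
# title: Upgrade SuperWidget NIC firmware
# description: Flash the latest vendor firmware, then reboot.
# script_type: commissioning
# for_hardware: pci:8086/1521
# may_reboot: True
# packages: {apt: [ethtool]}
# --- End MAAS 1.0 script metadata ---

echo "Running the (hypothetical) vendor firmware tool..."
/usr/sbin/superwidget-flash --latest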

Minor improvements – Gather information about BIOS & firmware

MAAS now gathers more information about the underlying system, such as the model, serial number, BIOS and firmware information of a machine (where available). It also gathers this information for storage devices and network interfaces.

MAAS Client Library (python-libmaas)

New upstream release – 0.6.0

A new upstream release is now available in the Ubuntu Archive for Bionic. The new release includes the following changes:

  • Add/read/update/delete storage devices attached to machines.

  • Configure partitions and mount points

  • Configure Bcache

  • Configure RAID

  • Configure LVM

Known issues & work arounds

LP: #1748712  – 2.4.0a1 upgrade failed with old node event data

It has been reported that an upgrade to MAAS 2.4.0a1 failed due to old data from a non-existent node stored in the database. This could have been caused by an older development version of MAAS, which would have left an entry in the node event table. A workaround is provided in the bug report.

If you hit this issue, please update the bug report immediately so MAAS developers can investigate.

Bug fixes

Please refer to the following for all bug fixes in this release.

https://launchpad.net/maas/+milestone/2.4.0alpha1

Read more
K. Tsakalozos

In this post we will see how to automate the deployment of an ASP.NET Core application on an on-prem Kubernetes cluster. We will base our work on the excellent blog “Deploying ASP.NET Core apps on App Engine” by Mete Atamel. Our contribution is to a) show how to target public as well as private clouds for Kubernetes deployments, and b) automate the delivery of your software through a basic CI/CD pipeline based on Jenkins.


What will we be using

  • An Ubuntu 16.04 machine
  • git command-line client (installed by sudo apt install git)
  • Internet access
  • $0, yes this is going to be a “look how much you can do for free” kind of blog post

Quick outline

  • Create an ASP.NET Core application on the Ubuntu machine and push the code to GitHub.
  • Package the application in a Docker container and upload it to Docker Hub
  • Spin up a Kubernetes cluster using the Canonical distribution. Here we will deploy that cluster locally, but you can use any private or public cloud you have access to.
  • Deploy your application on the Kubernetes cluster and expose it.
  • Deploy Jenkins next to Kubernetes and automate the delivery of your app.

Let’s not waste any time; we have a long way ahead of us.

ASP.NET Core applications on Linux

Installing .NET on Ubuntu 16.04 is as simple as adding a repository and installing dotnet-sdk-2.1.4:

$ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg 
$ sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
$ sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
$ sudo apt-get install apt-transport-https
$ sudo apt-get update
$ sudo apt-get install dotnet-sdk-2.1.4

We can now create our application. It is going to be the template razor application as described in Mete’s codelab and we will not even bother to change the application’s name!

$ mkdir -p ~/workspace/dotnet
$ cd ~/workspace/dotnet
$ dotnet new razor -o HelloWorldAspNetCore
$ cd HelloWorldAspNetCore
$ dotnet run

You should see the application on your browser at http://localhost:5000

Our code will live in a public GitHub repository, so go ahead and create an account if you do not already have one (https://github.com). Click the “New repository” button to create a public repository named HelloWorldAspNetCore.

Creating a repository for our code.

Let’s add a .gitignore file at the root of our project and push our code:

$ cd ~/workspace/dotnet/HelloWorldAspNetCore
$ wget https://raw.githubusercontent.com/OmniSharp/generator-aspnet/master/templates/gitignore.txt
$ mv gitignore.txt .gitignore
$ git init
$ git add .
$ git commit -m "Initial commit"
$ git remote add origin https://github.com/ktsakalozos/HelloWorldAspNetCore.git
$ git push -u origin master

You can now relax! Your code is safe with GitHub.

Package your application in a Docker container

Installing Docker requires adding the respective repository and installing docker-ce via apt:

$ sudo apt-get install apt-transport-https ca-certificates curl \
software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce
$ sudo usermod -a -G docker $USER
$ newgrp docker

While we are in the process of setting up docker we might as well register with Docker Hub and create a repository to store our images. Go to https://hub.docker.com and create an account. I will be here waiting :).

After logging in to Docker Hub click on the “Create Repository” button and create a new public repository. That is where we will be pushing our docker images. For the rest of this blog the docker user is “kjackal” and the repository is named “hello-dotnet”. This will make more sense to you shortly.

New docker repository form.

To package our application we first need to ask dotnet to compile our code and gather any dependencies into a folder for later deployment. This is done with:

$ dotnet publish -c Release

Our docker container should package everything under bin/Release/netcoreapp2.0/publish/. We create a `Dockerfile` at the root of our project with the following contents:

$ cd ~/workspace/dotnet/HelloWorldAspNetCore/
$ cat ./Dockerfile
FROM gcr.io/google-appengine/aspnetcore:2.0
ADD ./bin/Release/netcoreapp2.0/publish/ /app
ENV ASPNETCORE_URLS=http://*:${PORT}
WORKDIR /app
ENTRYPOINT [ "dotnet", "HelloWorldAspNetCore.dll"]

Building the container and testing that it works:

$ docker build -t kjackal/hello-dotnet:v1 .
$ docker run -p 8080:8080 kjackal/hello-dotnet:v1

You should see the output at http://localhost:8080 .

It is time to push the first version to Docker Hub:

$ docker login
$ docker push kjackal/hello-dotnet:v1

The Dockerfile should be part of the code so we commit it and push it to the git repository:

$ git add Dockerfile
$ git commit -m "Adding dockerfile"
$ git push origin master

Deploy a Kubernetes Cluster

For the on-prem Kubernetes deployments we will go with Canonical’s solution. The reason (apart from me being biased) is that Canonical offers a seamless and effortless transition from a toy deployment running on your laptop to a full-blown, production-grade Kubernetes deployed on private clouds, public clouds, or even bare metal.

In this blog we show how to deploy Kubernetes and Jenkins on your localhost (laptop/desktop) — just make sure you have at least 8GB of RAM. We first need to have LXD running. LXD is a really powerful container technology based on the same kernel features as Docker. Unlike Docker containers, LXD containers resemble virtual machines (VMs) much more closely. Let’s simplify things and assume from now on that LXD containers are VMs that boot instantly and have no performance overhead!

In the following snippet we install LXD. While initialising (/snap/bin/lxd init), go with the defaults but do not enable IPv6. When asked “What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]?”, reply with “none”.

$ sudo snap install lxd
$ sudo usermod -a -G lxd $USER
$ newgrp lxd
$ /snap/bin/lxd init

At this point we can use either Juju or Conjure-up to deploy Kubernetes. Conjure-up is essentially a wizard sitting on-top of Juju.

As already mentioned by Tim, installing Kubernetes with conjure-up is as simple as:

$ sudo snap install conjure-up --classic
$ conjure-up

Canonical Kubernetes comes in two flavours:

  1. kubernetes-core is a cut-down version installed on two machines, in our case two LXD containers running on our localhost.
  2. canonical-kubernetes is the full production grade deployment with features such as HA and monitoring.

We will go for kubernetes-core; on the next screen select localhost as the cloud provider. Follow the wizard’s steps till the end and wait for the deployment to finish. For a headless deployment you can run conjure-up kubernetes-core localhost.

To review the status of our deployment have a look at:

$ juju status

To get into one of the LXD containers/machines we use juju ssh, for example:

$ juju ssh kubernetes-master/0

Inside kubernetes-master under /home/ubuntu you will find a config file you can use for accessing your cluster. We can fetch that file with:

$ juju scp kubernetes-master/0:config .

Conjure-up has already copied the Kubernetes config locally and installed kubectl for us. How nice!
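
A quick sanity check (not part of the original walkthrough) confirms kubectl can reach the new cluster:

$ kubectl cluster-info
$ kubectl get nodes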

Automate the deployment process, CI/CD

We will be showing a few Jenkins jobs to automate the process of building, packaging and deploying our application. The intention here is to show everything that happens under the hood and not hide behind a flashy UI.

First things first, we need a Jenkins machine.

$ juju deploy jenkins

We deploy Jenkins next to our Kubernetes cluster. This will take some time; you can check the progress of the deployment with juju status.

Next we need to set a password for Jenkins and expose it so we can access its UI on port 8080. Exposing Jenkins is not needed in the localhost deployment, which uses LXD containers, but we show it here for completeness.

$ juju config jenkins password='your_secure_password'
$ juju expose jenkins

Before we start crafting our jobs we need to configure Jenkins a bit more. I can tell you beforehand that our jobs need to run sudo without being asked for a password. The easiest way to do that is to edit /etc/sudoers on the Jenkins machine. Here is how we append a line to the sudoers file with Juju:

$ juju run --unit jenkins/0 -- 'sudo echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'

We also know that our Jenkins jobs need to talk to Kubernetes. To this end Jenkins will need the kubeconfig file. We take the file from the kubernetes-master and place it on our Jenkins machine under /var/tmp:

$ juju scp kubernetes-master/0:config .
$ juju scp config jenkins/0:/var/tmp/

Last part of the configuration, I promise! We know our application will be exposed using Kubernetes NodePort on port 31576. We need to make sure there is no firewall blocking that port and requests can reach it:

$ juju run --application kubernetes-worker -- open-port 31576

We are now ready to create our three main jobs:

Three jobs we will be creating
  • The first Jenkins job is “Install dependencies”. This job is just a shell script installing all the software packages needed to a) talk to Kubernetes, b) build our ASP.NET application and c) package everything into a Docker container. Place the following in a shell script job and run it once:
echo "Installing kubectl"
sudo snap install kubectl --classic
echo "Installing dotnet"
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-get install apt-transport-https -y
sudo apt-get update
sudo apt-get install dotnet-sdk-2.1.4 -y
echo "Installing docker"
sudo apt-get install apt-transport-https ca-certificates curl \
software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install docker-ce -y
  • The second Jenkins job bootstraps the service in Kubernetes. It creates a deployment using the v1 image we created above and makes sure all operations are recorded (the --record parameter). This deployment is exposed using NodePort 31576. Place the following in a Jenkins job and run it once; remember to update the Docker user name (kjackal):
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config run hello-dotnet \
  --image=kjackal/hello-dotnet:v1 --port=8080 --record
echo "apiVersion: v1
kind: Service
metadata:
  name: hello-dotnet
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 31576
    name: http
  selector:
    run: hello-dotnet" > /tmp/expose.yaml
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config apply -f /tmp/expose.yaml

Use juju status to find the IP of a kubernetes worker and open a browser at http://<kubernetes-worker-ip>:31576. Your application is served from Kubernetes! But we are not done yet.

  • Let’s create a third job (“Build and release”) that pulls your code from GitHub, compiles it, puts it in a container and deploys that container. Replace the repository and the Docker username in the following snippet. Then create a Jenkins job:
rm -rf HelloWorldAspNetCore
git clone https://github.com/ktsakalozos/HelloWorldAspNetCore.git
cd HelloWorldAspNetCore
dotnet publish -c Release
sudo docker login -u kjackal -p ${DOCKER_PASS}
sudo docker build -t kjackal/hello-dotnet:${DOCKER_TAG} .
sudo docker push kjackal/hello-dotnet:${DOCKER_TAG}
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config set image deployment/hello-dotnet hello-dotnet=kjackal/hello-dotnet:${DOCKER_TAG}

Two parameters are needed: ${DOCKER_TAG} is a string, and ${DOCKER_PASS} holds the Docker password of the user (kjackal in this case). You have to tick the checkbox indicating this is a parameterized job and add the two parameters. We are ready! Trigger the job and wait for it to finish. Your code should find its way to our Kubernetes cluster.

Don’t believe me? Have a look at the rollout history of our deployment.

$ kubectl rollout history deployment/hello-dotnet

Release from GitHub

Everything looks great so far. Every time we want to release we will log in to Jenkins and trigger the “Build and release” job… Let’s try something different: let’s trigger a release from our code by creating a tag.

Create a “Release from Github” job on Jenkins and have it running periodically every 5 minutes:

Trigger job every 5 minutes

We want this job to look for new tags and, if it detects a new one, perform the usual compile, package, deploy cycle. Here is what such a job looks like:

#!/bin/bash
REPO="https://github.com/ktsakalozos/HelloWorldAspNetCore.git"
rm -rf ./HelloWorldAspNetCore
git clone $REPO
cd HelloWorldAspNetCore
# Initialise tags list
if [ ! -f /var/tmp/known-tags ]; then
  git tag > /var/tmp/known-tags
  echo "Initialising. List of preexisting git tags:"
  cat /var/tmp/known-tags
  exit 1
fi
mv /var/tmp/known-tags /var/tmp/know-tags.old
git tag > /var/tmp/known-tags
diff /var/tmp/known-tags /var/tmp/know-tags.old
if [ $? == '0' ]; then
  echo "No new git tags detected."
  exit 2
fi
# We have new tags
last_tag=$(grep -v -f /var/tmp/know-tags.old /var/tmp/known-tags | tail -n 1)
git checkout tags/${last_tag}
echo "Building ${last_tag}"
dotnet publish -c Release
sudo docker login -u kjackal -p <replace_with_docker_password>
sudo docker build -t kjackal/hello-dotnet:${last_tag} .
sudo docker push kjackal/hello-dotnet:${last_tag}
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config set image deployment/hello-dotnet hello-dotnet=kjackal/hello-dotnet:${last_tag}

Make sure you run this job once so it gets initialised. Afterwards, every 5 minutes, this job will look at the available tags and fail if no new tags are present.

Let’s create a new tag/release now. Go to your GitHub repository, click releases (as shown below) and fill in the release form.

Here is where you find the releases you have.
Release form on Github

Within 5 minutes your release will reach Kubernetes, without you needing to log in to Jenkins. Automagically!

A few points to note:

  1. There might be Jenkins plugins for this. But we said we will be looking at what is under the hood without hiding behind a GUI.
  2. You should place your Jenkins jobs on GitHub along with your code.

Where to go from here

So far you have seen the parts of a basic, yet fully functional CI/CD that delivers a .NET application onto any Kubernetes cluster. Each of the steps shown above are subject to improvements and tailoring based on your needs.

  • The ASP.NET application would normally have automated tests. You would want to run these tests in your CI. Travis is a great tool for this purpose and, depending on the size and nature of your project, it could be free. Alternatively you could set up Jenkins to run these tests as often as you please and report back the results.
  • Look into Helm if you intend to distribute your application instead of offering it as a service hosted by you.
  • Experiment with release strategies and find the one that best suits your needs. Make sure you read through Kubernetes deployment strategies.
  • You could consider using the auto scaling features of Kubernetes.
  • The Kubernetes cluster here is on LXD containers. You should use a cloud, either private (e.g. OpenStack) or public. With Conjure-up and Juju the cluster deployment process remains the same regardless of the targeted cloud. You have no excuse.
  • Finally, as you move to a cloud make sure you deploy the Canonical Distribution of Kubernetes instead of kubernetes-core. You will get a more robust deployment with HA features, logging and monitoring.

Resources


Automated Delivery of ASP.NET Core Apps on On-Prem Kubernetes was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
Dustin Kirkland

  • To date, we've shaved the Bionic (18.04 LTS) minimal images down by over 53%, since Ubuntu 14.04 LTS, and trimmed nearly 100 packages and thousands of files.
  • Feedback welcome here: https://ubu.one/imgSurvey
In last year's HackerNews post, "Ask HN: What do you want to see in Ubuntu 17.10?", and the subsequent treatment of the data, we noticed a recurring request for "lighter, smaller, more minimal" Ubuntu images.

This is particularly useful for container images (Docker, LXD, Kubernetes, etc.), embedded device environments, and anywhere a developer wants to bootstrap an Ubuntu system from the smallest possible starting point.  Smaller images generally:
  • are subject to fewer security vulnerabilities and subsequent updates
  • reduce overall network bandwidth consumption
  • and require less on-disk storage
First, a definition...
"The Ubuntu Minimal Image is the smallest base upon which a user can apt install any package in the Ubuntu archive."
By design, Ubuntu Minimal Images specifically lack the creature comforts, user interfaces and user design experience that have come to define the Ubuntu Desktop and Ubuntu Cloud images.
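
For example, assuming you use the ubuntu:18.04 tag on Docker Hub (which, to my understanding, is built from these minimal/base images), you can still install anything from the archive on top of it:

$ docker pull ubuntu:18.04
$ docker run --rm ubuntu:18.04 \
    sh -c "apt-get update -qq && apt-get install -y --no-install-recommends htop && htop --version"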

To date, we've shaved the Bionic (18.04 LTS) minimal images down by over 53%, since Ubuntu 14.04 LTS, and trimmed nearly 100 packages and thousands of files.

Read more
Dustin Kirkland


I'm the proud owner of a new Dell XPS 13 Developer Edition (9360) laptop, pre-loaded from the Dell factory with Ubuntu 16.04 LTS Desktop.

Kudos to the Dell and the Canonical teams that have engineered a truly remarkable developer desktop experience.  You should also check out the post from Dell's senior architect behind the XPS 13, Barton George.

As it happens, I'm also the proud owner of a long loved, heavily used, 1st Generation Dell XPS 13 Developer Edition laptop :-)  See this post from May 7, 2012.  You'll be happy to know that machine is still going strong.  It's now my wife's daily driver.  And I use it almost every day, for any and all hacking that I do from the couch, after hours, after I leave the office ;-)

Now, this latest XPS edition is a real dream of a machine!

From a hardware perspective, this newer XPS 13 sports an Intel i7-7660U 2.5GHz processor and 16GB of memory.  While that's mildly exciting to me (as I've long used i7's and 16GB), here's what I am excited about...

The 500GB NVME storage and a whopping 1239 MB/sec I/O throughput!

kirkland@xps13:~$ sudo hdparm -tT /dev/nvme0n1
/dev/nvme0n1:
Timing cached reads: 25230 MB in 2.00 seconds = 12627.16 MB/sec
Timing buffered disk reads: 3718 MB in 3.00 seconds = 1239.08 MB/sec

And on top of that, this is my first QHD+ touch screen laptop display, sporting a magnificent 3200x1800 resolution.  The graphics are nothing short of spectacular.  Here's nearly 4K of Hollywood hard "at work" :-)


The keyboard is super comfortable.  I like it a bit better than the 1st generation.  Unlike your Apple friends, we still have our F-keys, which is important to me as a Byobu user :-)  The placement of the PgUp, PgDn, Home, and End keys (as Fn + Up/Down/Left/Right) takes a while to get used to.


The speakers are decent for a laptop, and the microphone is excellent.  The webcam is placed in an odd location (lower left of the screen), but it has quite nice resolution and focus quality.


And Bluetooth and WiFi, well, they "just work".  I got 98.2 Mbits/sec of throughput over WiFi.

kirkland@xps:~$ iperf -c 10.0.0.45
------------------------------------------------------------
Client connecting to 10.0.0.45, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.149 port 40568 connected with 10.0.0.45 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.1 sec 118 MBytes 98.2 Mbits/sec

There's no external display port, so you'll need something like this USB-C-to-HDMI adapter to project to a TV or monitor.


There's 1x USB-C port, 2x USB-3 ports, and an SD-Card reader.


One of the USB-3 ports can be used to charge your phone or other devices, even while your laptop is suspended.  I use this all the time, to keep my phone topped up while I'm aboard planes, trains, and cars.  To do so, you'll need to enable "USB PowerShare" in the BIOS.  Here's an article from Dell's KnowledgeBase explaining how.


Honestly, I have only one complaint...  And that's that there is no Trackstick mouse (which is available on some Dell models).  I'm not a huge fan of the Touchpad.  It's too sensitive, and my palms are always touching it inadvertently.  So I need to use an external mouse to be effective.  I'll continue to provide this feedback to the Dell team, in the hopes that one day I'll have my perfect developer laptop!  Otherwise, this machine is a beauty.  I'm sure you'll love it too.

Cheers,
Dustin

Read more
Dustin Kirkland


For up-to-date patch, package, and USN links, please refer to: https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown

This is cross-posted on Canonical's official Ubuntu Insights blog:
https://insights.ubuntu.com/2018/01/04/ubuntu-updates-for-the-meltdown-spectre-vulnerabilities/


Unfortunately, you’ve probably already read about one of the most widespread security issues in modern computing history -- colloquially known as “Meltdown” (CVE-2017-5754) and “Spectre” (CVE-2017-5753 and CVE-2017-5715) -- affecting practically every computer built in the last 10 years, running any operating system. That includes Ubuntu.

I say “unfortunately”, in part because there was a coordinated release date of January 9, 2018, agreed upon by essentially every operating system, hardware, and cloud vendor in the world. By design, operating system updates would be available at the same time as the public disclosure of the security vulnerability. While it happens rarely, this is an industry standard best practice, which has broken down in this case.

At its heart, this vulnerability is a CPU hardware architecture design issue. But there are billions of affected hardware devices, and replacing CPUs is simply unreasonable. As a result, operating system kernels -- Windows, MacOS, Linux, and many others -- are being patched to mitigate the critical security vulnerability.

Canonical engineers have been working on this since we were made aware under the embargoed disclosure (November 2017) and have worked through the Christmas and New Year's holidays, testing and integrating an incredibly complex patch set into a broad set of Ubuntu kernels and CPU architectures.

Ubuntu users of the 64-bit x86 architecture (aka, amd64) can expect updated kernels by the original January 9, 2018 coordinated release date, and sooner if possible. Updates will be available for:

  • Ubuntu 17.10 (Artful) -- Linux 4.13 HWE
  • Ubuntu 16.04 LTS (Xenial) -- Linux 4.4 (and 4.4 HWE)
  • Ubuntu 14.04 LTS (Trusty) -- Linux 3.13
  • Ubuntu 12.04 ESM** (Precise) -- Linux 3.2
    • Note that an Ubuntu Advantage license is required for the 12.04 ESM kernel update, as Ubuntu 12.04 LTS is past its end-of-life
Ubuntu 18.04 LTS (Bionic) will release in April of 2018, and will ship a 4.15 kernel, which includes the KPTI patchset as integrated upstream.

Ubuntu optimized kernels for the Amazon, Google, and Microsoft public clouds are also covered by these updates, as well as the rest of Canonical's Certified Public Clouds including Oracle, OVH, Rackspace, IBM Cloud, Joyent, and Dimension Data.

These kernel fixes will not be Livepatch-able. The source code changes required to address this problem comprise hundreds of independent patches, touching hundreds of files and thousands of lines of code. The sheer complexity of this patchset is not compatible with the Linux kernel Livepatch mechanism. An update and a reboot will be required to activate this update.
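
After rebooting into the updated kernel, one way to confirm that the page table isolation (KPTI) mitigation is active is to check the kernel log; the exact message may vary by kernel version:

$ uname -r
$ dmesg | grep -i 'page tables isolation'
[    0.000000] Kernel/User page tables isolation: enabled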

Furthermore, you can expect Ubuntu security updates for a number of other related packages, including CPU microcode, GCC and QEMU in the coming days.

We don't have a performance analysis to share at this time, but please do stay tuned here, as we'll follow up with that as soon as possible.

Thanks,
@DustinKirkland
VP of Product
Canonical / Ubuntu

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 (final) is now available!
This new MAAS release introduces a set of exciting features and improvements to the overall user experience. It now becomes the focus of maintenance, as it fully replaces MAAS 2.2.
In order to provide sufficient notice, please be aware that 2.3.0 will replace MAAS 2.2 in the Ubuntu Archive in the coming weeks. In the meantime, MAAS 2.3 is available in PPA and as a Snap.
PPA’s Availability
MAAS 2.3.0 is currently available in ppa:maas/next for the coming week.
sudo add-apt-repository ppa:maas/next
sudo apt-get update
sudo apt-get install maas
Please be aware that MAAS 2.3 will replace MAAS 2.2 in ppa:maas/stable within a week.
Snap Availability
For those wanting to use the snap, you can obtain it from the stable channel:
sudo snap install maas --devmode --stable

MAAS 2.3.0 (final)

Important announcements

Machine network configuration now deferred to cloud-init.

Starting from MAAS 2.3, machine network configuration is now handled by cloud-init. In previous MAAS (and curtin) releases, the network configuration was performed by curtin during the installation process. In an effort to improve robustness, network configuration has now been consolidated in cloud-init. MAAS will continue to pass network configuration to curtin, which in turn, will delegate the configuration to cloud-init.

Ephemeral images over HTTP

As part of the effort to reduce dependencies and improve reliability, MAAS ephemeral (network boot) images are no longer loaded using iSCSI (tgt). By default, the ephemeral images are now obtained using HTTP requests to the rack controller.

After upgrading to MAAS 2.3, please ensure you have the latest available images. For more information please refer to the section below (New features & improvements).

Advanced network configuration for CentOS & Windows

MAAS 2.3 now supports the ability to perform network configuration for CentOS and Windows. The network configuration is performed via cloud-init. MAAS CentOS images now use the latest available version of cloud-init that includes these features.

New features & improvements

CentOS network configuration

MAAS can now perform machine network configuration for CentOS 6 and 7, providing networking feature parity with Ubuntu for those operating systems. The following can now be configured for MAAS deployed CentOS images:

  • Bonds, VLAN and bridge interfaces.
  • Static network configuration.

Our thanks to the cloud-init team for improving the network configuration support for CentOS.

Windows network configuration

MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows deployments. This uses the native NetLBFO in Windows 2008+. Contact us for more information (https://maas.io/contact-us).

Improved Hardware Testing

MAAS 2.3 introduces a new and improved hardware testing framework that significantly improves the granularity and provision of hardware testing feedback. These improvements include:

  • An improved testing framework that allows MAAS to run each component individually. This allows MAAS to run tests against storage devices for example, and capture results individually.
  • The ability to describe custom hardware tests with a YAML definition:
    • This provides MAAS with information about the tests themselves, such as script name, description, required packages, and other metadata about what information the script will gather. All of which will be used by MAAS to render in the UI.
    • Determines whether the test supports a parameter, such as storage, allowing the test to be run against individual storage devices.
    • Provides the ability to run tests in parallel by setting this in the YAML definition.
  • Capture performance metrics for tests that can provide it.
    • CPU performance tests now offer a new ‘7z’ test, providing metrics.
    • Storage performance tests now include a new ‘fio’ test providing metrics.
    • Storage test ‘badblocks’ has been improved to provide the number of badblocks found as a metric.
  • The ability to override a machine that has been marked ‘Failed testing’. This allows administrators to acknowledge that a machine is usable despite it having failed testing.

Hardware testing improvements include the following UI changes:

  • Machine Listing page
    • Displays whether a test is pending, running or failed for the machine components (CPU, Memory or Storage.)
    • Displays whether a test not related to CPU, Memory or Storage has failed.
    • Displays a warning when the machine has been overridden and has failed tests, but is in a ‘Ready’ or ‘Deployed’ state.
  • Machine Details page
    • Summary tab – Provides hardware testing information about the different components (CPU, Memory, Storage).
    • Hardware Tests /Commission tab – Provides an improved view of the latest test run, its runtime as well as an improved view of previous results. It also adds more detailed information about specific tests, such as status, exit code, tags, runtime and logs/output (such as stdout and stderr).
    • Storage tab – Displays the status of specific disks, including whether a test is OK or failed after running hardware tests.

For more information please refer to https://docs.ubuntu.com/maas/2.3/en/nodes-hw-testing.

Network discovery & beaconing

In order to confirm network connectivity and aid with the discovery of VLANs, fabrics and subnets, MAAS 2.3 introduces network beaconing.

MAAS now sends out encrypted beacons, facilitating network discovery and monitoring. Beacons are sent using IPv4 and IPv6 multicast (and unicast) to UDP port 5240. When registering a new controller, MAAS uses the information gathered from the beaconing protocol to ensure that newly registered interfaces on each controller are associated with existing known networks in MAAS. This aids MAAS by providing better information on determining the network topology.

Using network beaconing, MAAS can better correlate which networks are connected to its controllers, even if interfaces on those controllers are not configured with IP addresses. Future uses for beaconing could include validation of networks from commissioning nodes, MTU verification, and a better user experience for registering new controllers.

Upstream Proxy

MAAS 2.3 now enables an upstream HTTP proxy to be used while allowing MAAS deployed machines to continue to use the caching proxy for the repositories. Doing so provides greater flexibility for closed environments, including:

  • Enabling MAAS itself to use a corporate proxy while allowing machines to continue to use the MAAS proxy.
  • Allowing machines that don’t have access to a corporate proxy to gain network access using the MAAS proxy.

Adding upstream proxy support also includes an improved configuration on the settings page. Please refer to Settings > Proxy for more details.

Ephemeral Images over HTTP

Historically, MAAS has used ‘tgt’ to provide images over iSCSI for the ephemeral environments (e.g. commissioning, deployment environment, rescue mode, etc.). MAAS 2.3 changes the default behaviour by now providing images over HTTP.

These images are now downloaded directly by the initrd. The change means that the initrd loaded on PXE will contact the rack controller to download the image to load in the ephemeral environment. Support for using ‘tgt’ is being phased out in MAAS 2.3, and will no longer be supported from MAAS 2.4 onwards.

For users who would like to continue to use & load their ephemeral images via ‘tgt’, they can disable http boot with the following command.

  maas <user> maas set-config name=http_boot value=False

UI Improvements

Machines, Devices, Controllers

MAAS 2.3 introduces an improved design for the machines, devices and controllers detail pages that include the following changes.

  • “Summary” tab now only provides information about the specific node (machine, device or controller), organised across cards.
  • “Configuration” has been introduced, which includes all editable settings for the specific node (machine, device or controllers).
  • “Logs” consolidates the commissioning output and the installation log output.

Other UI improvements

Other UI improvements that have been made for MAAS 2.3 include:

  • Added DHCP status column on the ‘Subnets’ tab.
  • Added architecture filters
  • Updated VLAN and Space details page to no longer allow inline editing.
  • Updated VLAN page to include the IP ranges tables.
  • Zones page converted to AngularJS (away from YUI).
  • Added warnings when changing a Subnet’s mode (Unmanaged or Managed).
  • Renamed “Device Discovery” to “Network Discovery”.
  • Discovered devices where MAAS cannot determine the hostname now show the hostname as “unknown” and greyed out instead of using the MAC address manufacturer as the hostname.

Rack Controller Deployment

MAAS 2.3 can now automatically deploy rack controllers when deploying a machine. This is done by providing cloud-init user data, and once a machine is deployed, cloud-init will install and configure the rack controller. Upon rack controller registration, MAAS will automatically detect the machine is now a rack controller and it will be transitioned automatically. To deploy a rack controller, users can do so via the API (or CLI), e.g:

maas <user> machine deploy <system_id> install_rackd=True

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, this will require the machine to have access to the internet to be able to install the MAAS snap.

Controller Versions & Notifications

MAAS now surfaces the version of each running controller and notifies the users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading their MAAS on a multi-node MAAS cluster, such as within a HA setup.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

API Improvements

The machines API endpoint now provides more information on the configured storage and provides additional output that includes volume_groups, raids, cache_sets, and bcaches fields.

Django 1.11 support

MAAS 2.3 now supports the latest Django LTS version, Django 1.11. This allows MAAS to work with the newer Django version in Ubuntu Artful, which serves as a preparation for the next Ubuntu LTS release.

  • Users running MAAS in Ubuntu Artful will use Django 1.11.
  • Users running MAAS in Ubuntu Xenial will continue to use Django 1.9.

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 RC1 has now been released and it is currently available in PPA and as a snap.
PPA Availability
For those running Ubuntu Xenial and would like to use RC1, please use the following PPA:
ppa:maas/next
Snap Availability
For those running from the snap, or would like to test the snap, please use the Beta channel on the default track:
sudo snap install maas --devmode --beta
 

MAAS 2.3.0 (RC1)

Issues fixed in this release

For more information, visit: https://launchpad.net/maas/+milestone/2.3.0rc1

  • LP: #1727576    [2.3, HWTv2] When specific tests timesout there’s no log/output
  • LP: #1728300    [2.3, HWTv2] smartctl interval time checking is too short
  • LP: #1721887    [2.3, HWTv2] No way to override a machine that Failed Testing
  • LP: #1728302    [2.3, HWTv2, UI] Overall health status is redundant
  • LP: #1721827    [2.3, HWTv2] Logging when and why a machine failed testing (due to missing heartbeats/locked/hanged) not available in maas.log
  • LP: #1722665    [2.3, HWTv2] MAAS stores a limited amount of test results
  • LP: #1718779    [2.3] 00-maas-06-get-fruid-api-data fails to run on controller
  • LP: #1729857    [2.3, UI] Whitespace after checkbox on node listing page
  • LP: #1696122    [2.2] Failed to get virsh pod storage: cryptic message if no pools are defined
  • LP: #1716328    [2.2] VM creation with pod accepts the same hostname and push out the original VM
  • LP: #1718044    [2.2] Failed to process node status messages – twisted.internet.defer.QueueOverflow
  • LP: #1723944    [2.x, UI] Node auto-assigned address is not always shown while in rescue mode
  • LP: #1718776    [UI] Tooltips missing from the machines listing page
  • LP: #1724402    no output for failing test
  • LP: #1724627    00-maas-06-get-fruid-api-data fails relentlessly, causes commissioning to fail
  • LP: #1727962    Intermittent failure: TestDeviceHandler.test_list_num_queries_is_the_expected_number
  • LP: #1727360    Make partition size field optional in the API (CLI)
  • LP: #1418044    Avoid picking the wrong IP for MAAS_URL and DEFAULT_MAAS_URL
  • LP: #1729902    When commissioning don’t show message that user has overridden testing

Read more
K. Tsakalozos

I read the Hacker News post Heptio Contour and I thought “Cool! A project from our friends at Heptio, let’s see what they’ve got for us”. I won’t lie to you: at first I was a bit disappointed because there was no special mention of the Canonical Distribution of Kubernetes (CDK), but I understand, I am asking too much :). Let me cover this gap here.

Deploy CDK

To deploy CDK on Ubuntu you need to just do a:

> sudo snap install conjure-up --classic
> conjure-up kubernetes-core

Using ‘kubernetes-core’ will give you a minimal k8s cluster — perfect for our use case. For a larger, more robust cluster, try ‘canonical-kubernetes.’

Deploy Contour and Demo App

CDK already comes with an ingress solution so you need to disable it and deploy Contour. Here we also deploy the demo kuard application:

> juju config kubernetes-worker ingress=false
> kubectl --kubeconfig=/home/jackal/.kube/config apply -f http://j.hept.io/contour-deployment-norbac
> kubectl --kubeconfig=/home/jackal/.kube/config apply -f http://j.hept.io/contour-kuard-example

Get Your App

The Contour service will be on a port that (depending on the cloud you are targeting) might be closed, so you need to open it before accessing kuard:

> kubectl --kubeconfig=/home/jackal/.kube/config get service -n heptio-contour  contour -o wide                                                                                                        
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
contour LoadBalancer 10.152.183.201 <pending> 80:31226/TCP 2m app=contour
juju run --application kubernetes-worker open-port 31226

And here it is running on AWS:

Instead of opening the ports to the outside world you could set the right DNS entries. However, this is specific to the cloud you are deploying to.

As the Contour README says “On AWS, create a CNAME record that maps the host in your Ingress object to the ELB address.”
“If you have an IP address instead (on GCE, for example), create an A record.”
For a localhost deployment your ports should not be blocked and you can fake a DNS entry by editing /etc/hosts.

Thank you Heptio. Keep it up!

Resources

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 Beta 3 has now been released and it is currently available in PPA and as a snap.
PPA Availability
For those running Ubuntu Xenial and would like to use beta 3, please use the following PPA:
ppa:maas/next
Snap Availability
For those running from the snap, or would like to test the snap, please use the Beta channel on the default track:
sudo snap install maas --devmode --beta
 

MAAS 2.3.0 (beta3)

Issues fixed in this release

For more information, visit: https://launchpad.net/maas/+milestone/2.3.0beta3
  • LP: #1727551    [2.3] Commissioning shows results from script that no longer exists
  • LP: #1696485    [2.2, HA] MAAS dhcp does not offer up multiple domains to search
  • LP: #1696661    [2.2, HA] MAAS should offer multiple DNS servers in HA case
  • LP: #1724235    [2.3, HWTv2] Aborted test should not show as failure
  • LP: #1721824    [2.3, HWTv2] Overall health status is missing
  • LP: #1727547    [2.3, HWTv2] Aborting testing goes back into the incorrect state
  • LP: #1722848    [2.3, HWTv2] Memtester test is not robust
  • LP: #1727568    [2.3, HWTv2, regression] Hardware Tests tab does not show what tests are running
  • LP: #1721268    [2.3, UI, HWTv2] Metrics table (e.g. from fio test) is not padded to MAAS’ standard
  • LP: #1721823    [2.3, UI, HWTv2] No way to surface a failed test that’s non CPU, Mem, Storage in machine listing page
  • LP: #1721886    [2.3, UI, HWTv2] Hardware Test tab doesn’t auto-update
  • LP: #1559353    [2.0a3] “Add Hardware > Chassis” cannot find off-subnet chassis BMCs
  • LP: #1705594    [2.2] rackd errors after fresh install
  • LP: #1718517    [2.3] Exceptions while processing commissioning output cause timeouts rather than being appropriately surfaced
  • LP: #1722406    [2.3] API allows “deploying” a machine that’s already deployed
  • LP: #1724677    [2.x] [critical] TFTP back-end failed right after node repeatedly requests same file via tftp
  • LP: #1726474    [2.x] psycopg2.IntegrityError: update or delete on table “maasserver_node” violates foreign key constraint
  • LP: #1727073    [2.3] rackd — 12% connected to region controllers.
  • LP: #1722671    [2.3, pod] Unable to delete a machine or a pod if the pod no longer exists
  • LP: #1680819    [2.x, UI] Tooltips go off screen
  • LP: #1725908    [2.x] deleting user with static ip mappings throws 500
  • LP: #1726865    [snap,2.3beta3] maas init uses the default gateway in the default region URL
  • LP: #1724181    maas-cli missing dependencies: netifaces, tempita
  • LP: #1724904    Changing PXE lease in DHCP snippets global sections does not work

Read more
admin

Hello MAASters!

It’s been two weeks since my last update, when the MAAS 2.3 beta 2 release was announced! Since then, the MAAS team has split its time between participating in internal events (and recovering from travel) and continuing to focus on the stabilization of MAAS 2.3. As such, I’m happy to provide the updates of the past couple of weeks.

MAAS 2.3
In the past couple of weeks the team has been focused on stabilizing MAAS 2.3 and has fixed the following issues:

  • LP: #1705594    [2.2, HA] rackd errors after fresh install
  • LP: #1722848    [2.3, HWTv2] Memtester test is not robust
  • LP: #1724677    [2.x] [critical] TFTP back-end failed right after node repeatedly requests same file via tftp
  • LP: #1727073    [2.3, HA] rackd — 12% connected to region controllers.
  • LP: #1696485    [2.2, HA] MAAS dhcp does not offer up multiple domains to search
  • LP: #1696661    [2.2, HA] MAAS should offer multiple DNS servers in HA case
  • LP: #1721268    [2.3, UI, HWTv2] Metrics table (e.g. from fio test) is not padded to MAAS’ standard
  • LP: #1721886    [2.3, UI, HWTv2] Hardware Test tab doesn’t auto-update
  • LP: #1722671    [2.3, pod] Unable to delete a machine or a pod if the pod no longer exists
  • LP: #1724181    maas-cli missing dependencies: netifaces, tempita
  • LP: #1724235    [2.3, HWTv2] Aborted test should not show as failure
  • LP: #1726865    [snap,2.3beta3] maas init uses the default gateway in the default region URL
  • LP: #1724904    Changing PXE lease in DHCP snippets global sections does not work
  • LP: #1680819    [2.x, UI] Tooltips go off screen

 

MAAS 2.4
I’m happy to announce that the roadmap for MAAS 2.4 has now been defined, and it is targeted for April 2018. However, I’ll keep a bit of suspense: we will announce the upcoming features once MAAS 2.3 final has been released. Stay tuned!

Read more
K. Tsakalozos

When was the last time you had one of those “I wish I knew about that <time period> ago” moments? One of mine was when I saw MAAS (Metal as a Service). Back at the University, as a member of the MADgIK lab, I had to administer a couple of racks. These were the days of Xen paravirtualization, Eucalyptus and the Grid. Interesting times, but I now realise that if I’d had MAAS and Juju back then, I would have finished with that adventure a couple of years sooner!

I was told blogs with figures are more attractive. There you go!

From a high level, MAAS turns your physical cluster into an IaaS cloud that provisions physical machines instead of virtual ones. As you might expect, a PXE server serves the images you ask for, driven by a good-looking web UI. MAAS will discover what is on your network and make it dead simple to set up VLANs and network spaces. Juju can use your MAAS deployment as if it were a cloud, which enables you to deploy OpenStack, Kubernetes and big data solutions directly onto your hardware. It is a really powerful combination that can save you time (years, in my case).

Let’s do it!

Do not waste any more time on this blog; go and try MAAS on QEMU instances on your own machine. This playlist will get you started. At the end you will have a (virtual) cluster on QEMU. One of the machines will act as the MAAS controller and the rest will be the cluster nodes.

Your virtual cluster.
MAAS nodes

The next step is to use MAAS from Juju.
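
Here is a minimal sketch of what that looks like, assuming your MAAS controller answers at http://192.168.100.2:5240/MAAS (a placeholder address) and that you have copied an API key from the MAAS UI. First describe the cloud to Juju in a small YAML file, say maas-cloud.yaml:

clouds:
  my-maas:
    type: maas
    auth-types: [oauth1]
    endpoint: http://192.168.100.2:5240/MAAS

Then register it and bootstrap as you would with any other cloud:

> juju add-cloud my-maas maas-cloud.yaml
> juju add-credential my-maas      # paste the MAAS API key when prompted
> juju bootstrap my-maas
> juju deploy kubernetes-core

From there, Juju treats the MAAS nodes just like cloud instances.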

Just to show off :)

Conclusion

I am off to the shop to get more RAM. No wonder this is such a short blog.

Resources

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 Beta 2 has now been released and it is currently available in PPA and as a snap.
PPA Availability
For those running Ubuntu Xenial who would like to use beta 2, please use the following PPA:
ppa:maas/next
Snap Availability
For those running from the snap, or who would like to test the snap, please use the Beta channel on the default track:
sudo snap install maas --devmode --beta
 

MAAS 2.3.0 (beta 2)

Issues fixed in this release

https://launchpad.net/maas/+milestone/2.3.0beta2

  • LP: #1711760    [2.3] resolv.conf is not set (during commissioning or testing)
  • LP: #1721108    [2.3, UI, HWTv2] Machine details cards – Don’t show “see results” when no tests have been run on a machine
  • LP: #1721111    [2.3, UI, HWTv2] Machine details cards – Storage card doesn’t match CPU/Memory one
  • LP: #1721548    [2.3] Failure on controller refresh seem to be causing version to not get updated
  • LP: #1710092    [2.3, HWTv2] Hardware Tests have a short timeout
  • LP: #1721113    [2.3, UI, HWTv2] Machine details cards – Storage – If multiple disks, condense the card instead of showing all disks
  • LP: #1721524    [2.3, UI, HWTv2] When upgrading from older MAAS, Storage HW tests are not mapped to the disks
  • LP: #1721587    [2.3, UI, HWTv2] Commissioning logs (and those of v2 HW Tests) are not being shown
  • LP: #1719015    $TTL in zone definition is not updated
  • LP: #1721276    [2.3, UI, HWTv2] Hardware Test tab – Table alignment for the results doesn’t align with titles
  • LP: #1721525    [2.3, UI, HWTv2] Storage card on machine details page missing red bar on top if there are failed tests
  • LP: #1722589    syslog full of “topology hint” logs
  • LP: #1719353    [2.3a3, Machine listing] Improve the information presentation of the exact tasks MAAS is running when running hardware testing
  • LP: #1719361    [2.3 alpha 3, HWTv2] On machine listing page, remove success icons for components that passed the tests
  • LP: #1721105    [2.3, UI, HWTv2] Remove green success icon from Machine listing page
  • LP: #1721273    [2.3, UI, HWTv2] Storage section on Hardware Test tab does not describe each disk to match the design

Read more
admin

MAAS 2.3.0 (beta1)

New Features & Improvements

Hardware Testing

MAAS 2.3 beta overhauls and improves the visibility of hardware test results and information. This includes various changes across MAAS:

  • Machine Listing page
    • Surface progress and failures of hardware tests, actively showing when a test is pending, running, successful or failed.
  • Machine Details page
    • Summary tab – Provide hardware testing information about the different components (CPU, Memory, Storage)
    • Hardware Tests tab – Complete re-design of the Hardware Tests tab. It now shows a list of test results per component and adds the ability to view more details about each test.
    • Storage tab – Ability to view test results per storage component.

UI Improvements

Machines, Devices, Controllers

MAAS 2.3 beta 1 introduces a new design for the node summary pages:

  • “Summary tab” now only shows information about the machine, in a completely new design.
  • “Settings tab” has been introduced. It includes the ability to edit the node.
  • “Logs tab” now consolidates the commissioning output and the installation log output.

Other UI improvements

Other UI improvements that have been made for MAAS 2.3 beta 1 include:

  • Add a DHCP status column on the ‘Subnets’ tab.
  • Add architecture filters.
  • Update VLAN and Space details page to no longer allow inline editing.
  • Update VLAN page to include the IP ranges tables.
  • Convert the Zones page into AngularJS (away from YUI).
  • Add warnings when changing a Subnet’s mode (Unmanaged or Managed).

Rack Controller Deployment

MAAS 2.3 beta 1 adds the ability to deploy any machine as a rack controller; this is currently only available via the API.
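
A rough sketch of how this looks with the API CLI, assuming a logged-in profile ($PROFILE), a machine’s system ID ($SYSTEM_ID), and that the deploy operation’s install_rackd parameter is what toggles this (check the API docs on your install):

maas $PROFILE machine deploy $SYSTEM_ID install_rackd=true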

API Improvements

MAAS 2.3 beta 1 adds volume_groups, raids, cache_sets, and bcaches fields to the machines endpoint API output.
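
If you want to see the new fields, a minimal check with the API CLI (same $PROFILE and $SYSTEM_ID assumptions as above) is simply:

maas $PROFILE machine read $SYSTEM_ID    # the JSON output now includes volume_groups, raids, cache_sets and bcaches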

Known issues:

The following is a list of known UI issues affecting hardware testing:

Issues fixed in this release

https://launchpad.net/maas/+milestone/2.3.0beta1

  • #1711320    [2.3, UI] Can’t ‘Save changes’ and ‘Cancel’ on machine/device details page
  • #1696270    [2.3] Toggling Subnet from Managed to Unmanaged doesn’t warn the user that behavior changes
  • #1717287    maas-enlist doesn’t work when provided with serverurl with IPv6 address
  • #1718209    PXE configuration for dhcpv6 is wrong
  • #1718270    [2.3] MAAS improperly determines the version of some installs
  • #1718686    [2.3, master] Machine lists shows green checks on components even when no tests have been run
  • #1507712    cli: maas logout causes KeyError for other profiles
  • #1684085    [2.x, Accessibility] Inconsistent save states for fabric/subnet/vlan/space editing
  • #1718294    [packaging] dpkg-reconfigure for region controller refers to an incorrect network topology assumption

Read more
admin

Hello MAASters!

This past week, the MAAS team met face to face in NYC! The week was focused on finalizing the improvements that users will be able to enjoy in MAAS 2.3 and on preparing for the first beta release. While MAAS 2.3.0 beta 1 will be announced separately, we wanted to bring you an update on the work the team has been doing over the past couple of weeks.

MAAS 2.3 (current development release)

  • Hardware Testing Phase 2
    • Backend work to support the new UX changes for hardware testing. This includes websockets handlers, directives and triggers.
    • UI – Add ability to upload custom hardware test scripts.
    • UI – Update the machine listing page to show hardware status. This shows the status of hardware testing: pending, running, failed, degraded, timed out, etc.
    • UI – Implement new designs for Hardware Testing:
      • Add cards (new design) on node details pages that include metrics (if tests have been run) and hardware test information.
      • Add a new Hardware Test tab that better surfaces status of hardware tests per component
      • Add a more detailed log view of hardware test results.
      • Surface hardware test results per storage device on each of the block devices (on the machines details page).
      • Add ability to view all test results performed on each of the components over time.
  • Switch Support
    • Add actions to switch listing page (still under a feature flag)
    • Fetch Wedge 100 switch metadata using the FRUID API endpoint on the BMC.
    • UI – Add websockets and triggers to support the UI changes for switches.
    • UI – Update the UI to display the vendor and model on the switch listing page (behind feature flag)

  • UI improvements
    • Add a DHCP status column on the ‘Subnets’ tab.
    • Add architecture filters.
    • Implement a new design for node details page:
      • Consolidate the Summary tab for machines, devices, controllers and switches into cards.
      • Add a new Settings tab, combined with the Power tab to allow editing different components of machines, devices, controllers, etc.
      • Consolidate commissioning output and installation logs in a “Log” tab.
    • Update VLAN and Space details page to no longer allow inline editing.
    • Update VLAN page to include the IP ranges tables.
    • Convert the Zones page into AngularJS (away from YUI).
    • Add warnings when changing a Subnet’s mode (Unmanaged or Managed).

  • Rack controller deployment
    • Add ability to deploy any machine as a rack controller via the API.

  • API changes:
    • Add volume_groups, raids, cache_sets, and bcaches field to the Machine API output.

  • Issues fixed:
    • #1711320    [2.3, UI] Can’t ‘Save changes’ and ‘Cancel’ on machine/device details page
    • #1696270    [2.3] Toggling Subnet from Managed to Unmanaged doesn’t warn the user that behavior changes
    • #1717287    maas-enlist doesn’t work when provided with serverurl with IPv6 address
    • #1718209    PXE configuration for dhcpv6 is wrong
    • #1718270    [2.3] MAAS improperly determines the version of some installs
    • #1718686    [2.3, master] Machine lists shows green checks on components even when no tests have been run
    • #1507712    cli: maas logout causes KeyError for other profiles
    • #1684085    [2.x, Accessibility] Inconsistent save states for fabric/subnet/vlan/space editing
    • #1718294    [packaging] dpkg-reconfigure for region controller refers to an incorrect network topology assumption

Libmaas

We have improved the library to allow managing block devices and partitions.

  • Add ability to list machine’s block devices.
  • Add ability to update, create and delete block devices.
  • Add ability to list machine’s partitions.
  • Add ability to update, create and delete partitions.
  • Add ability to format/unformat partitions and block devices.
  • Add ability to mount/unmount partitions and block devices.

The release of a new version of libmaas will be announced separately.

CLI

MAAS has been working on a new CLI that is based on (and uses) MAAS’ Python client library. The work that has been done includes:

  • Add ability to log in/log out via user and password.
  • Add ability to switch between profiles.
  • Add support for interactive login.
  • Add help command.
  • Ability to list nodes, machines, devices, controllers.
  • Ability to list all components in the networking model (subnets, vlans, spaces, fabrics).
  • Ability to obtain details on machines, devices and controllers.
  • Ability to obtain details on subnets, vlans, spaces, fabrics.
  • Ability to perform actions on machines (with the exception of testing and rescue mode).
  • Add ability to perform actions for multiple nodes
  • Add a ‘maas ssh’ command.
  • When listing, add support for automatic paging.
  • Add ability to view output in different formats (pretty, plain, json, yaml, csv).
  • Show progress indication on actions that are synchronous or blocking.

The release of the new CLI will be announced separately.

Read more
jdstrand

Finding your VMs and containers via DNS resolution so you can ssh into them can be tricky. I was talking with Stéphane Graber today about this and he reminded me of his excellent article: Easily ssh to your containers and VMs on Ubuntu 12.04.

These days, libvirt has the `virsh domifaddr` command and LXD has a slightly different way of finding the IP address.

Here is an updated `~/.ssh/config` that I’m now using (thank you Stéphane for the update for LXD):

Host *.lxd
    #User ubuntu
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    # Look up the container's IPv4 address with 'lxc list' and proxy the
    # ssh connection to it with nc.
    ProxyCommand nc $(lxc list -c s4 $(echo %h | sed "s/\.lxd//g") %h | grep RUNNING | cut -d' ' -f4) %p

Host *.vm
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    # Ask libvirt for the VM's IP address with 'virsh domifaddr' and proxy
    # the ssh connection to it with nc.
    ProxyCommand nc $(virsh domifaddr $(echo %h | sed "s/\.vm//g") | awk -F'[ /]+' '{if (NR>2 && $5) print $5}') %p

You may want to uncomment `StrictHostKeyChecking` and `UserKnownHostsFile` depending on your environment (see `man ssh_config` for details).

With the above, I can ssh in with:

$ ssh foo.vm uptime
16:37:26 up 50 min, 0 users, load average: 0.00, 0.00, 0.00
$ ssh bar.lxd uptime
21:37:35 up 12:39, 2 users, load average: 0.55, 0.73, 0.66

Enjoy!


Filed under: canonical, ubuntu, ubuntu-server

Read more
admin

MAAS 2.3.0 (alpha3)

New Features & Improvements

Hardware Testing (backend only)

MAAS has now introduced an improved hardware testing framework. This new framework allows MAAS to test individual components of a single machine, as well as providing better feedback to the user for each of those tests. This feature has introduced:

  • Ability to define a custom testing script with a YAML definition – Each custom test can be defined with YAML that provides information about the test. This information includes the script name, description, required packages, and other metadata about what information the script will gather. This information can then be displayed in the UI (see the sketch after this list).

  • Ability to pass parameters – Adds the ability to pass specific parameters to the scripts. For example, in upcoming beta releases, users would be able to select which disks they want to test if they don’t want to test all disks.

  • Running tests individually – Improves the way hardware tests are run per component. This allows MAAS to run tests against any individual component (such as a single disk).

  • Adding additional performance tests

    • Added a CPU performance test with 7z.

    • Added a storage performance test with fio.
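
As a rough, hypothetical illustration of such a definition, here is what a minimal custom test script with an embedded YAML metadata header could look like. The delimiter and field names follow the pattern described in the MAAS documentation for script metadata, but treat them as assumptions and check the docs for your release:

#!/bin/bash
# --- Start MAAS 1.0 script metadata ---
# name: quick-disk-health
# title: Quick disk health check (example)
# description: Hypothetical custom test, used only to illustrate the metadata format.
# script_type: test
# hardware_type: storage
# packages: {apt: [smartmontools]}
# timeout: 00:05:00
# --- End MAAS 1.0 script metadata ---
smartctl --health /dev/sda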

Please note that individual results for each of the components are currently only available over the API. Upcoming beta releases will include various UI improvements that will allow the user to better surface and interact with these new features.

Rack Controller Deployment in Whitebox Switches

MAAS now has the ability to install and configure a MAAS rack controller once a machine has been deployed. As of today, this feature is only available when MAAS detects that the machine is a whitebox switch. As such, all MAAS certified whitebox switches will be deployed with a MAAS rack controller. Currently certified switches include the Wedge 100 and the Wedge 40.

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, the machine will need access to the internet to be able to install the MAAS snap.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

UI – Controller Versions & Notifications

MAAS now surfaces the version of each running controller, and notifies the users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading their MAAS on a multi-node MAAS cluster, such as an HA setup.

Issues fixed in this release

  • #1702703    Cannot run maas-regiond without /bin/maas-rack
  • #1711414    [2.3, snap] Cannot delete a rack controller running from the snap
  • #1712450    [2.3] 500 error when uploading a new commissioning script
  • #1714273    [2.3, snap] Rack Controller from the snap fails to power manage on IPMI
  • #1715634    ‘tags machines’ takes 30+ seconds to respond with list of 9 nodes
  • #1676992    [2.2] Zesty ISO install fails on region controller due to postgresql not running
  • #1703035    MAAS should warn on version skew between controllers
  • #1708512    [2.3, UI] DNS and Description Labels misaligned on subnet details page
  • #1711700    [2.x] MAAS should avoid updating DNS if nothing changed
  • #1712422    [2.3] MAAS does not report form errors on script upload
  • #1712423    [2.3] 500 error when clicking the ‘Upload’ button with no script selected.
  • #1684094    [2.2.0rc2, UI, Subnets] Make the contextual menu language consistent across MAAS
  • #1688066    [2.2] VNC/SPICE graphical console for debugging purpose on libvirt pod created VMs
  • #1707850    [2.2] MAAS doesn’t report cloud-init failures post-deployment
  • #1711714    [2.3] cloud-init reporting not configured for deployed ubuntu core systems
  • #1681801    [2.2, UI] Device discovery – Tooltip misspelled
  • #1686246    [CLI help] set-storage-layout says Allocated when it should say Ready
  • #1621175    BMC acc setup during auto-enlistment fails on Huawei model RH1288 V3

For full details please visit:

https://launchpad.net/maas/+milestone/2.3.0alpha3

Read more
admin

Hello MAASters! This is the development summary for the past couple of weeks:

MAAS 2.3 (current development release)

  • Hardware Testing Phase 2
    • Added a parameters form for script parameter validation.
    • Accept and validate results from nodes.
    • Added a built-in 7zip CPU benchmarking script for hardware testing.
    • WIP – ability to send parameters to test scripts and process results of individual components. (e.g. will provide the ability for users to select which disk they want to test, and capture results accordingly)
    • WIP – disk benchmark test via Fio.
  • Network beaconing & better network discovery
    • MAAS controllers now send out beacon advertisements every 30 seconds, regardless of whether or not any solicitations were received.
  • Switch Support
    • Backend changes to automatically detect switches (during commissioning) and make use of the new switch model.
    • Introduce base infrastructure for NOS drivers, similar to the power management one.
    • Install the Rack Controller when deploying a supported Switch (Wedge 40, Wedge 100)
    • UI – Add a switch listing tab behind a feature flag.
  • Minor UI improvements
    • The version of MAAS installed on each controller is now reported on the controller details page.
  • python-libmaas
    • Added ability to power on, power off, and query the power state of a machine.
    • Added PowerState enum to make it easy to check the current power state of a machine.
    • Added ability to reference the children and parent interfaces of an interface.
    • Added ability to reference the owner of a node.
    • Added base level `Node` object that `Machine`, `Device`, `RackController`, and `RegionController` extend from.
    • Added `as_machine`, `as_device`, `as_rack_controller`, and `as_region_controller` to the Node object. Allowing the ability to convert a `Node` into the type you need to perform an action on.
  • Bug fixes:
    • LP: #1676992 – force Postgresql restart on maas-region-controller installation.
    • LP: #1708512 – Fix DNS & Description misalignment
    • LP: #1711714 – Add cloud-init reporting for deployed Ubuntu Core systems
    • LP: #1684094 – Make context menu language consistent for IP ranges.
    • LP: #1686246 – Fix docstring for set-storage-layout operation
    • LP: #1681801 – Device discovery – Tooltip misspelled
    • LP: #1688066 – Add Spice graphical console to pod-created VMs
    • LP: #1711700 – Improve DNS reloading so it happens only when required.
    • LP: #1712423, #1712450, #1712422 – Properly handle a ScriptForm being sent an empty file.
    • LP: #1621175 – Generate password for BMCs with non-spec compliant password policy
    • LP: #1711414 – Fix deleting a rack when it is installed via the snap
    • LP: #1702703 – Can’t run region controller without a rack controller installed.

Read more
K. Tsakalozos

Sometimes we miss the forest for the trees.

It’s all about portability

Take a step back and think about how much of your effort goes into your product’s portability. A lot, I’d wager. You don’t wait to see what hardware your customer is on before you start coding your app. Widely accepted architectures give you a large customer base, so you target those. You skip optimisations in order to maximize compatibility. The language you are using ensures you can move to another platform/architecture.

Packaging your application is also important if you want to broaden your audience. The latest trend here is to use containers such as Docker. Essentially, containers (much like VMs) are another layer of abstraction for the sake of portability. You can package your app such that it will run anywhere.


Things go wrong

When you first create a PoC, even a USB stick is enough to distribute it! After that you should take things more seriously. I see products “dockerised” so that they run everywhere, yet the deployment story is limited to a single cloud. You do all the work to deliver your app everywhere only to fail when you actually need to deliver it! It doesn’t make sense — or to put it more accurately it only makes sense in the short term.

The overlooked feature

Canonical Distribution of Kubernetes (CDK) will deploy exactly the same Kubernetes in all major clouds (AWS, GCE, Azure, VMware, OpenStack, Joyent), on bare metal, or even on your local machine (using LXD). When you first evaluate CDK you may think: “I do not need this”, or even “I hate these extra two lines where I decide the cloud provider because I already know I am going full AWS”. At that point you are tempted to take the easy way out and get pinned to a single Kubernetes provider. However, soon you will find yourself in need of feature F from cloud C or Kubernetes version vX.Y.Z. Worse is when your potential customer is on a different platform and you discover the extent of the dependencies you have on your provider.

Before you click off this tab thinking I am exaggerating, think about this: the best devops are done by the devs themselves. How easy is it to train all devs to use AWS for spawning containers? It is trivial, right? Suppose you get two more customers, one on Azure and one on GCE. How easy is it to train your team to perform devops against two additional clouds? You now have to deal with three different monitoring and logging mechanisms. That might still be doable since AWS, GCE, and Azure are all well behaved. What if a customer wants their own on-prem cluster? You are done — kiss your portability and velocity goodbye. Your team will be constantly firefighting, chasing after misaligned versions, and emulating features across platforms.

How is it done

Under the hood of CDK you will find Juju. Juju abstracts the concepts of IaaS: it will provision everything from bare metal machines in your rack, to VMs on any major cloud, to LXD containers on your laptop. On top of these units Juju deploys Kubernetes, thus keeping the deployment agnostic to the underlying provider. Two points here (a short deployment sketch follows the list):

  1. Juju has been here for years and is used both internally and externally by Canonical to successfully deliver solutions such as OpenStack and big data.
  2. Since your Kubernetes is built on IaaS cloud offerings you are able to tailor node properties to match your needs. You may also find it is more cost effective.
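
To make the portability point concrete, here is a minimal sketch. The cloud names are Juju’s built-in ones and you would pick whichever substrate you actually have credentials for, but the deployment step itself never changes:

> juju bootstrap aws          # or: juju bootstrap localhost   (LXD on your own machine)
> juju deploy canonical-kubernetes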

Also check out Conjure-up, an easy-to-use wizard that will get your Kubernetes cluster up-and-running in minutes.

To conclude

I am sure you have your reasons to “dockerise” your application and then deliver it on a single cloud, but it is counter-intuitive. If you want to make the most of your portability decisions, you should consider CDK.

References:

Read more