Canonical Voices

Posts tagged with 'ubuntu'

Alex Cattle

How to build a lightweight system container cluster

LXD, the system container manager developed by Canonical and shipped by default with Ubuntu, makes it possible to create many containers of various Linux distributions and manage them much like virtual machines (VMs), but with far lower overhead.

Unlike VMs, containers share the host kernel, which brings benefits such as kernel security updates in Ubuntu, Livepatch support, a minimal memory footprint, easy sharing of resources, and extremely low CPU usage and wakeups at idle.

This whitepaper explores the use of LXD containers as part of a team development environment, effectively setting up a shared lab on physical hardware or in the cloud.
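
As a taste of how such a shared lab starts out, here is a minimal LXD sketch (the container name and Ubuntu release are illustrative):

sudo snap install lxd
lxd init --auto                      # one-time host setup with sensible defaults
lxc launch ubuntu:18.04 alice-dev    # one container per team member
lxc snapshot alice-dev clean-state   # cheap snapshot to roll back to after a mistake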

You will learn how working in this environment:

  • Reduces the time spent by team members getting a functional work environment
  • Makes it easy to collaborate with colleagues, accessing their containers if needed
  • Makes it possible to access the work environment of team members who are on leave
  • Enables better use and control of resources through shared systems
  • Makes it easy to implement snapshots and backups – huge time savers when a mistake happens

To view the whitepaper, complete the form below:

The post How to build a lightweight system container cluster appeared first on Ubuntu Blog.

Read more
Alex Cattle

Traditional development methods do not scale into the IoT sphere. Strong inter-dependencies and blurred boundaries among components in the edge device stack result in fragmentation, slow updates, security issues, increased cost, and reduced reliability of platforms.

This reality places a major strain on IoT players who need to contend with varying cycles and priorities in the development stack, limiting their flexibility to innovate and introduce changes into their products, both on the hardware and software sides.

One notable way to reduce the complexity in multi-component stacks is through the use of DevOps – a paradigm that combines development with operations in a single, streamlined process. However, DevOps alone cannot solve the complexity and dependency that exist in monolithic IoT products.

Highlights of this whitepaper include:

  • How decoupling components in a reliable and predictable fashion reduces inter-dependency, improves security and allows for faster development cycles.
  • A look at the technical architecture of Ubuntu Core and snaps in the context of an IoT DevOps model.
  • Real-life case studies of accelerated Linux development cycles as an alternative to the existing model.

To download the whitepaper, complete the form below:

In submitting this form, I confirm that I have read and agree to Canonical’s Privacy Notice and Privacy Policy.

The post The DevOps guide to IoT projects appeared first on Ubuntu Blog.

Read more
Alex Cattle

An image displaying a range of devices connected to a mobile network.

Mobile operators face a range of challenges today, from saturation, competition and regulation – all of which are having a negative impact on revenues. The introduction of 5G offers new customer segments and services to offset this decline. However, unlike the introduction of 4G, which was dominated by consumer benefits, 5G is expected to be driven by enterprise use. According to IDC, enterprises will generate 60 percent of the world’s data by 2025.

Rather than rely on costly proprietary hardware and operating models, the use of open source technologies offers the ability to commoditise and democratise the wireless network infrastructure. Major operators such as Vodafone, Telefonica and China Mobile have already adopted such practices.

Shifting to open source technology and taking a software defined approach enables mobile operators to differentiate based on the services they offer, rather than network coverage or subscription costs.

This whitepaper will explain how mobile operators can break the proprietary stranglehold and adopt an open approach, including:

  • The open source initiatives and technologies available today and being adopted by major operators.
  • How a combination of software-defined radio, mobile base stations and third-party app development can provide a way for mobile operators to differentiate and drive down CAPEX.
  • Use cases from Vodafone and EE on successful implementations adopting an open source approach.

To view the whitepaper, sign up using the form below:

The post The future of mobile connectivity appeared first on Ubuntu Blog.

Read more
Canonical

Thanks to the huge amount of feedback this weekend from gamers, Ubuntu Studio, and the WINE community, we will change our plan and build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS.

We will put in place a community process to determine which 32-bit packages are needed to support legacy software, and can add to that list post-release if we miss something that is needed.

Community discussions can sometimes take unexpected turns, and this is one of those. The question of support for 32-bit x86 has been raised and seriously discussed in Ubuntu developer and community forums since 2014. That’s how we make decisions.

After the Ubuntu 18.04 LTS release we had extensive threads on the ubuntu-devel list and also consulted Valve in detail on the topic. None of those discussions raised the passions we’ve seen here, so we felt we had sufficient consensus for the move in Ubuntu 20.04 LTS. We do think it’s reasonable to expect the community to participate and to find the right balance between enabling the next wave of capabilities and maintaining the long tail. Nevertheless, in this case it’s relatively easy for us to change plan and enable natively in Ubuntu 20.04 LTS the applications for which there is a specific need.

We will also work with the WINE, Ubuntu Studio and gaming communities to use container technology to address the ultimate end of life of 32-bit libraries; it should stay possible to run old applications on newer versions of Ubuntu. Snaps and LXD enable us both to have complete 32-bit environments, and bundled libraries, to solve these issues in the long term.
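
As a rough illustration of that container-based approach, LXD can already run a complete 32-bit userspace on a 64-bit host (a sketch, assuming an i386 Ubuntu image is published on the image server you use):

lxc launch ubuntu:18.04/i386 legacy32   # full 32-bit environment on a 64-bit kernel
lxc exec legacy32 -- uname -m           # typically reports i686 inside the container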

There is real risk to anybody who is running a body of software that gets little testing. The facts are that most 32-bit x86 packages are hardly used at all. That means fewer eyeballs, and more bugs. Software continues to grow in size at the high end, making it very difficult to even build new applications in 32-bit environments. You’ve heard about Spectre and Meltdown – many of the mitigations for those attacks are unavailable to 32-bit systems.

This led us to stop creating Ubuntu install media for i386 last year and to consider dropping the port altogether at a future date.  It has always been our intention to maintain users’ ability to run 32-bit applications on 64-bit Ubuntu – our kernels specifically support that.

The Ubuntu developers remain committed as always to the principle of making Ubuntu the best open source operating system across desktop, server, cloud, and IoT.  We look forward to the ongoing engagement of our users in continuing to make this principle a reality.

The post Statement on 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS appeared first on Ubuntu Blog.

Read more
Canonical

Drones, and their wide-ranging uses, have been a constant topic of conversation for some years now, but we’re only just beginning to move away from the hypothetical and into reality. The FAA estimates that there will be 2 million drones in the United States alone in 2019, as adoption within the likes of distribution, construction, healthcare and other industries accelerates.

Driven by this demand, Ubuntu – the most popular Linux operating system for the Internet of Things (IoT) – is now available on the Manifold 2, a high-performance embedded computer offered by leading drone manufacturer DJI. The Manifold 2 is designed to fit seamlessly onto DJI’s drone platforms via the onboard SDK and enables developers to transform aerial platforms into truly smarter drones, performing complex computing tasks and advanced image processing, which in turn creates rapid flexibility for enterprise usage.

As part of the offering, the Manifold 2 is set to feature snaps. Snaps are containerised software packages designed to work across cloud, desktop, and IoT devices – and this is the first instance of the technology’s availability on drones. The ability to add multiple snaps means a drone’s functionality can be altered, updated, and expanded over time, so the form in which a drone ships need not represent its final iteration or future worth.

Snaps also feature enhanced security and greater flexibility for developers. Drones can receive automatic updates in the field, which will become vital as enterprises begin to deploy large-scale fleets. Snaps also support rollback functionality in the event of failure, meaning developers can innovate with more confidence across this growing field.

Designed for developers, the Manifold 2 comes pre-installed with Ubuntu, bringing support for Linux, CUDA, OpenCV, and ROS. It is ideal for the research and development of professional applications, and can access flight data and perform intelligent control and data analysis. It can be easily mounted to the expansion bay of DJI’s Matrice 100, Matrice 200 Series V2 and Matrice 600, and is also compatible with the A3 and N3 flight controllers.

DJI counts at least 230 people rescued with the help of a drone since 2013. As well as being used by emergency services, drones are helping to protect lives by removing the dangerous elements of certain occupations. Apellix is one such example, supplying drones which run on Ubuntu to alleviate the need for humans to work in elevated, hazardous environments, such as aircraft carriers and oil rigs.

It is exciting to see how developers utilise the freedom brought by snaps to drive the drone industry forward. Software is allowing the industrial world to move from analog to digital, and mission-critical industries will continue to evolve based on its capabilities.

The post Customisable for the enterprise: the next-generation of drones appeared first on Ubuntu Blog.

Read more
Anthony Dillon

These were a fairly busy two weeks for the Web & design team at Canonical. Here are some of the highlights of our completed work.

Web

Web is the squad that develops and maintains most of the brochure websites across Canonical.

Integrating the blog into www.ubuntu.com

We have been working on integrating the blog into www.ubuntu.com, building new templates and an improved blog module that will serve pages more quickly.

Takeovers and engage pages

We built and launched a few new homepage takeovers and landing pages, including:

– Small Robot Company case study

– 451 Research webinar takeover and engage page

– Whitepaper takeover and engage page for Getting started with AI

– Northstar robotics case study engage page  

– Whitepaper about Active Directory

Verifying checksum on download thank you pages

We have added steps to verify your Ubuntu download to the website. To see these steps, download Ubuntu and check the thank-you page.
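
For reference, the verification itself boils down to fetching the checksum file published next to the image and comparing (a sketch; exact file names depend on the release):

wget https://releases.ubuntu.com/18.04/SHA256SUMS
sha256sum -c SHA256SUMS 2>&1 | grep ubuntu-18.04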

Mir Server homepage refresh

A new homepage hero section was designed and built for www.mir-server.io.

The goal was to update that section with an image related to digital signage/kiosk and also to give a more branded look and feel by using our Canonical brand colours and Suru folds.

Brand

The Brand squad champions a consistent look and feel across all media, from web to social to print and logos.

Usage guide for the company slide deck

The team has been working on storyboarding a video that guides people through using the new company slide deck correctly and highlights best practices for creating great slides.

Illustration and UI work

We have been working hard on breaking down the illustrations we have into multiple levels. We have identified three levels of illustration in use and are in the process of gathering them from across all our websites and reproducing them in a consistent style.

Alongside this we have started to look at the UI icons we use in all our products with the intention of creating a single master set that will be used across all products to create a consistent user experience.

Marketing support

We created multiple documents for the Marketing team, including two whitepapers and three case studies for the Small Robot Company, Northstar and Yahoo Japan.

We also created an animated screen for the stand back wall at Kubecon in Barcelona.

MAAS

The MAAS squad develops the UI for the MAAS project.

Renamed Pod to KVM in the MAAS UI

MAAS has been using the label “pod” for any KVM (libvirt) or RSD host – a label that is not industry standard and can be confused with pods in Kubernetes. To avoid this, we went through the MAAS app, renamed all instances of pod to KVM, and separated the interfaces for KVM and RSD hosts.

Replaced Karma tests with Jest

The development team working on MAAS has been focusing on modernising areas of the application. This led to moving from the Karma test framework to Jest.

Absolute import paths to modules

Another area the development team would like to tackle is migrating from AngularJS to React. To decouple the code from Angular, we moved to loading modules via absolute import paths.

KVM/RSD: In-table graphs for CPU, RAM and storage

MAAS CPU, RAM and Storage mini charts
MAAS usage tooltip
MAAS storage tooltip

JAAS

The JAAS squad develops the UI for the JAAS store and Juju GUI projects.

Design update for JAAS.ai

We have worked on a design update for jaas.ai which includes new colours and page backgrounds. The aim is to bring the website in line with recent updates carried out by the brand team.

Top of the new JAAS homepage

Design refresh of the top of the store page

We have also updated the design of the top section of the Store page, to make it clearer and more attractive, and again including new brand assets.

Top of jaas store page

UX review of the CLI between snap and juju

Our team has also carried out some research as a first step towards more closely aligning the commands used in the CLI for Juju and snaps. This will help make the experience of using our products more consistent.

Vanilla

The Vanilla squad designs and maintains the design system and the Vanilla framework library, ensuring a consistent style throughout web assets.

Vanilla 2.0.0 release

Since our last major release, v1.8.0 back in July last year, we’ve been working hard to bring you new features, improve the framework and make it the most stable version of Vanilla yet. These changes have been released in v2.0.0.

Over the past two weeks, we’ve been running QA tests across our marketing sites and web applications using our pre-release 2.0.0-alpha version. During testing, we’ve been filing and fixing bugs against this version, and have pushed out a pre-release 2.0.0-beta version.

Vanilla framework v2.0.0 banner

We plan to launch Vanilla 2.0.0 today, once we have finalised our release notes and completed our upgrade document, which will guide users through upgrading their sites.

Look out for our press release posts on Vanilla 2.0.0 and our Upgrade document to go along with it.

Snapcraft

The Snapcraft team works closely with the Snap Store team to develop and maintain the Snap Store website.

Install any snap on any platform

We’re pleased to announce we’ll be releasing distribution install pages for all snaps. They’re one-stop shops for any combination of snap and supported distro. Simply visit https://snapcraft.io/install/spotify/debian or, say, https://snapcraft.io/install/vlc/arch. The combinations are endless, and not only do the pages give you that comfy at-home feeling when it comes to branding, they’re also pretty useful. If you’ve never installed snaps before, we provide some easy step-by-step instructions to get the snap running, and we suggest some other snaps you might like.

How to install the VSC snap

The post Design and Web team summary – 10 June 2019 appeared first on Ubuntu Blog.

Read more
K. Tsakalozos

If you take a look at MicroK8s’ channel information with snap info microk8s you will see all available Kubernetes releases:

channels:
stable: v1.14.1 2019-04-18 (522) 214MB classic
candidate: v1.14.1 2019-04-15 (522) 214MB classic
beta: v1.14.1 2019-04-15 (522) 214MB classic
edge: v1.14.1 2019-05-10 (587) 217MB classic
1.15/stable: –
1.15/candidate: –
1.15/beta: –
1.15/edge: v1.15.0-alpha.3 2019-05-08 (578) 215MB classic
1.14/stable: v1.14.1 2019-04-18 (521) 214MB classic
1.14/candidate: v1.14.1 2019-04-15 (521) 214MB classic
1.14/beta: v1.14.1 2019-04-15 (521) 214MB classic
1.14/edge: v1.14.1 2019-05-11 (590) 217MB classic
1.13/stable: v1.13.5 2019-04-22 (526) 237MB classic
1.13/candidate: v1.13.6 2019-05-09 (581) 237MB classic
1.13/beta: v1.13.6 2019-05-09 (581) 237MB classic
1.13/edge: v1.13.6 2019-05-08 (581) 237MB classic
1.12/stable: v1.12.8 2019-05-02 (547) 259MB classic
1.12/candidate: v1.12.8 2019-05-01 (547) 259MB classic
1.12/beta: v1.12.8 2019-05-01 (547) 259MB classic
1.12/edge: v1.12.8 2019-04-24 (547) 259MB classic
1.11/stable: v1.11.10 2019-05-10 (557) 258MB classic
1.11/candidate: v1.11.10 2019-05-02 (557) 258MB classic
1.11/beta: v1.11.10 2019-05-02 (557) 258MB classic
1.11/edge: v1.11.10 2019-05-01 (557) 258MB classic
1.10/stable: v1.10.13 2019-04-22 (546) 222MB classic
1.10/candidate: v1.10.13 2019-04-22 (546) 222MB classic
1.10/beta: v1.10.13 2019-04-22 (546) 222MB classic
1.10/edge: v1.10.13 2019-04-22 (546) 222MB classic

If you want to follow the v1.14 Kubernetes releases you would:

sudo snap install microk8s --classic --channel=1.14/stable

Whereas if you always want to be on the latest stable release you would:

sudo snap install microk8s --classic

What is new in the channels list above is the pre-stable releases found under the 1.15 track (at the time of this writing the latest stable release is v1.14).

Following the pre-stable releases

We are committed to shipping MicroK8s with pre-stable releases under the following scheme.

  • The edge channel (eg 1.15/edge) holds the alpha upstream releases.
  • The beta channel (eg 1.15/beta) holds the beta upstream releases.
  • The candidate channel (eg 1.15/candidate) holds the release candidate of upstream releases.

Pre-stable releases will be available the same day they are released upstream.

If you want to test your work against the alpha 1.15 release simply do:

sudo snap install microk8s --classic --channel=1.15/edge

However, be aware that pre-stable releases may change before the stable release. Be sure to test any work against the stable release once it becomes available.

Tracks with stable releases

Tracks are meant to serve specific Kubernetes releases. For example the 1.15 track with its four channels, 1.15/edge, 1.15/beta, 1.15/candidate, 1.15/stable, serves the v1.15 K8s release. As soon as a new K8s stable release is made, all channels of the corresponding track are updated. In our example, as soon as v1.15 stable is released the corresponding track channels are updated in the following way:

  • The 1.15/edge channel is updated on every commit merged on the MicroK8s repository paired with the v1.15 stable K8s release.
  • The 1.15/beta and 1.15/candidate channels are updated on every upstream patch release. They hold whatever the 1.15/edge channel has at the time of the patch release.
  • The 1.15/stable channel gets updated with what 1.15/candidate holds a week after a new revision is put into 1.15/candidate.
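
If you are already following one channel and want to move to another, a snap refresh is all it takes. A quick sketch (channel names as listed above):

snap list microk8s                                   # the Tracking column shows the channel you follow
sudo snap refresh microk8s --channel=1.15/candidate  # switch channels in place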

I am confused. Which channel is right for me?

The single question you need to answer is what to put in the channel argument below:

sudo snap install microk8s --classic --channel=<What_to_use_here?>

Here are some suggestions for the channel to use based on your needs:

  • I want to always be on the latest stable Kubernetes.
    Use --channel=latest
  • I want to always be on the latest release in a specific upstream K8s release.
    Use --channel=<release>/stable eg --channel=1.14/stable.
  • I want to test-drive a pre-stable release.
    Use --channel=<next_release>/edge for alpha releases
    Use --channel=<next_release>/beta for beta releases
    Use --channel=<next_release>/candidate for candidate releases
  • I am waiting for a bug fix on MicroK8s:
    Use --channel=<release>/edge
  • I am waiting for a bug fix on upstream Kubernetes:
    Use --channel=<release>/candidate

Developing K8s core services with MicroK8s

One of the purposes of pre-stable releases is to assist K8s core service developers in their task. Let’s see how we can hook a local build of kubelet to a MicroK8s deployment.

Following the build instructions for Kubernetes we:

git clone https://github.com/kubernetes/kubernetes
cd kubernetes
build/run.sh make kubelet

The kubelet binary should be available under:

_output/dockerized/bin/linux/amd64/kubelet

Let’s grab a MicroK8s deployment:

sudo snap install microk8s --classic --channel=1.15/edge

To see what arguments the kubelet is running with we:

> ps -ef | grep kubelet
root 24184 1 2 17:28 ? 00:00:54 /snap/microk8s/578/kubelet
--kubeconfig=/snap/microk8s/578/configs/kubelet.config
--cert-dir=/var/snap/microk8s/578/certs
--client-ca-file=/var/snap/microk8s/578/certs/ca.crt
--anonymous-auth=false
--network-plugin=kubenet
--root-dir=/var/snap/microk8s/common/var/lib/kubelet
--fail-swap-on=false
--pod-cidr=10.1.1.0/24
--non-masquerade-cidr=10.152.183.0/24
--cni-bin-dir=/snap/microk8s/578/opt/cni/bin/
--feature-gates=DevicePlugins=true
--eviction-hard=memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi
--container-runtime=remote
--container-runtime-endpoint=/var/snap/microk8s/common/run/containerd.sock
--node-labels=microk8s.io/cluster=true

We now need to stop the kubelet that comes with MicroK8s and start our own build:

sudo systemctl stop snap.microk8s.daemon-kubelet.service
sudo _output/dockerized/bin/linux/amd64/kubelet \
  --kubeconfig=/snap/microk8s/578/configs/kubelet.config \
  --cert-dir=/var/snap/microk8s/578/certs \
  --client-ca-file=/var/snap/microk8s/578/certs/ca.crt \
  --anonymous-auth=false --network-plugin=kubenet \
  --root-dir=/var/snap/microk8s/common/var/lib/kubelet \
  --fail-swap-on=false --pod-cidr=10.1.1.0/24 \
  --container-runtime=remote \
  --container-runtime-endpoint=/var/snap/microk8s/common/run/containerd.sock \
  --node-labels=microk8s.io/cluster=true --eviction-hard='memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi'

That’s it! Your kubelet now runs in place of the one in MicroK8s! You have to admit it is as simple as it gets.

One thing to be aware of is that some microk8s commands will restart services through systemd. For example, microk8s.enable dns will initiate a services restart, including the kubelet shipped with MicroK8s.
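
When that happens, or when you are done testing, you can stop your own kubelet and bring back the one shipped with MicroK8s by restarting the same systemd unit we stopped earlier:

sudo systemctl start snap.microk8s.daemon-kubelet.service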

Happy coding!

Kubernetes pre-stable releases now available with MicroK8s was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
jdstrand

For testing, it is useful to work with official cloud images as local VMs. Eg, when I work on snapd, I like to have different images available to work with its spread tests.

The autopkgtest package makes working with Ubuntu images quite easy:

$ sudo apt-get install qemu-kvm autopkgtest
$ autopkgtest-buildvm-ubuntu-cloud -r bionic # -a i386
Downloading https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img...
# and to integrate into spread
$ mkdir -p ~/.spread/qemu
$ mv ./autopkgtest-bionic-amd64.img ~/.spread/qemu/ubuntu-18.04-64.img
# now can run any test from 'spread -list' starting with
# 'qemu:ubuntu-18.04-64:'

This post isn’t really about autopkgtest, snapd or spread specifically, though…

I found myself wanting an official Debian unstable cloud image so I could use it in spread while testing snapd. I learned it is easy enough to create the images yourself, but then I found that Debian has started providing raw and qcow2 cloud images for use in OpenStack, so I started exploring how to use them and how to generalize the procedure to arbitrary cloud images.

General procedure

The basic steps are:

  1. obtain a cloud image
  2. make copy of the cloud image for safekeeping
  3. resize the copy
  4. create a seed.img with cloud-init to set the username/password
  5. boot with networking and the seed file
  6. login, update, etc
  7. cleanly shutdown
  8. use normally (ie, without seed file)

In this case, I grabbed the ‘debian-testing-openstack-amd64.qcow2’ image from http://cdimage.debian.org/cdimage/openstack/testing/ and verified it. Since this is based on Debian ‘testing’ (current stable images are also available), when I copied it I named it accordingly. Eg, I knew for spread it needed to be ‘debian-sid-64.img’ so I did:

$ cp ./debian-testing-openstack-amd64.qcow2 ./debian-sid-64.img

I then resized it. I picked 20G since I recalled that is what autopkgtest uses:

$ qemu-img resize debian-sid-64.img 20G
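
You can confirm the new virtual size (and the qcow2 format) with qemu-img:

$ qemu-img info debian-sid-64.img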

These images are already set up for cloud-init, so I created a cloud-init data file (note, the ‘#cloud-config’ comment at the top is important):

$ cat ./debian-data
#cloud-config
password: debian
chpasswd: { expire: false }
ssh_pwauth: true

and a cloud-init meta-data file:

$ cat ./debian-meta-data
instance-id: i-debian-sid-64
local-hostname: debian-sid-64

and fed that into cloud-localds to create a seed file:

$ cloud-localds -v ./debian-seed.img ./debian-data ./debian-meta-data

Then start the image with:

$ kvm -M pc -m 1024 -smp 1 -monitor pty -nographic -hda ./debian-sid-64.img -drive "file=./debian-seed.img,if=virtio,format=raw" -net nic -net user,hostfwd=tcp:127.0.0.1:59355-:22

(I’m using the invocation that is reminiscent of how spread invokes it; feel free to use a virtio invocation as described by Scott Moser if that better suits your environment.)

Here, the “59355” can be any unused high port. The idea is after the image boots, you can login with ssh using:

$ ssh -p 59355 debian@127.0.0.1

Once logged in, perform any updates, etc that you want in place when tests are run, then disable cloud-init for the next boot and cleanly shutdown with:

$ sudo touch /etc/cloud/cloud-init.disabled
$ sudo shutdown -h now

The above is the generalized procedure which can hopefully be adapted for other distros that provide cloud images, etc.

For integrating into spread, just copy the image to ‘~/.spread/qemu’, naming it how spread expects. spread will use ‘-snapshot’ with the VM as part of its tests, so if you want to update the images later since they might be out of date, omit the seed file (and optionally ‘-net nic -net user,hostfwd=tcp:127.0.0.1:59355-:22’ if you don’t need port forwarding), and use:

$ kvm -M pc -m 1024 -smp 1 -monitor pty -nographic -hda ./debian-sid-64.img

UPDATE 2019-04-23: the above is confirmed to work with Fedora 28 and 29 (though, if using the resulting image to test snapd, be sure to configure the password as ‘fedora’ and then be sure to ‘yum update ; yum install kernel-modules nc strace’ in the image).

UPDATE 2019-04-22: the above is confirmed to work with CentOS 7 (though, if using the resulting image to test snapd, be sure to configure the password as ‘centos’ and then be sure to ‘yum update ; yum install epel-release ; yum install golang nc strace’ in the image).

Extra steps for Debian cloud images without default e1000 networking

Unfortunately, for the Debian cloud images there were additional steps, because spread doesn’t use virtio but instead the default e1000 driver, and the Debian cloud kernel doesn’t include it:

$ grep E1000 /boot/config-4.19.0-4-cloud-amd64
# CONFIG_E1000 is not set
# CONFIG_E1000E is not set

So… when the machine booted, there was no networking. To adjust for this, I blew away the image, copied from the safely kept downloaded image, resized it and started it with:

$ kvm -M pc -m 1024 -smp 1 -monitor pty -nographic -hda $HOME/.spread/qemu/debian-sid-64.img -drive "file=$HOME/.spread/qemu/debian-seed.img,if=virtio,format=raw" -device virtio-net-pci,netdev=eth0 -netdev type=user,id=eth0

This allowed the VM to start with networking, at which point I adjusted /etc/apt/sources.list to refer to ‘sid’ instead of ‘buster’, then ran apt-get update and apt-get dist-upgrade to upgrade to sid. I then installed the Debian distro kernel with:

$ sudo apt-get install linux-image-amd64

Then uninstalled the currently running kernel with:

$ sudo apt-get remove --purge linux-image-cloud-amd64 linux-image-4.19.0-4-cloud-amd64

(I used ‘dpkg -l | grep linux-image’ to see the cloud kernels I wanted to remove). Removing the package that provides the currently running kernel is a dangerous operation for most systems, so there is a scary message to abort the operation. In our case, it isn’t so scary (we can just try again ;) and this is exactly what we want to do.

Next I cleanly shutdown the VM with:

$ sudo shutdown -h now

and tried to start it again as in the ‘general procedure’ above (I’m keeping the seed file here because I want cloud-init to be re-run with the e1000 driver):

$ kvm -M pc -m 1024 -smp 1 -monitor pty -nographic -hda ./debian-sid-64.img -drive "file=./debian-seed.img,if=virtio,format=raw" -net nic -net user,hostfwd=tcp:127.0.0.1:59355-:22

Now I try to log in via ssh:
$ ssh -p 59355 debian@127.0.0.1
...
debian@127.0.0.1's password:
...
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Apr 16 16:13:15 2019
debian@debian:~$ sudo touch /etc/cloud/cloud-init.disabled
debian@debian:~$ sudo shutdown -h now
Connection to 127.0.0.1 closed.

While this VM is no longer the official cloud image, it is still using the Debian distro kernel and Debian archive, which is good enough for my purposes and at this point I’m ready to use this VM in my testing (eg, for me, copy ‘debian-sid-64.img’ to ‘~/.spread/qemu’).

Read more
James Henstridge

Last week I gave a talk at Perth Linux Users Group about building IoT projects using Ubuntu Core and Snapcraft. The video is now available online. Unfortunately there were some problems with the audio setup leading to some background noise in the video, but it is still intelligible:

The slides used in the talk can be found here.

The talk was focused on how Ubuntu Core could be used to help with the ongoing security and maintenance of IoT projects. While it might be easy to buy a Raspberry Pi, install Linux and your application, how do you make sure the device remains up to date with security updates? How do you push out updates to your application in a reliable fashion?

I outlined a way of deploying a project using Ubuntu Core, including:

  1. Packaging a simple web server app using the snapcraft tool.
  2. Configuring automatic builds from git, published to the edge channel on the Snap Store. This is also an easy way to get ARM builds for a package, rather than trying to deal with cross compilation tool chains.
  3. Using the ubuntu-image command to create an Ubuntu Core image with the application preinstalled. (The first two steps are sketched below.)
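
For flavour, the packaging and publishing steps condense to something like this (a sketch; the snap name and file are illustrative, using the snapcraft CLI of the time):

snapcraft                                                 # build the snap from snapcraft.yaml
snapcraft push my-webserver_0.1_amd64.snap --release=edge # upload and release to the edge channel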

I gave a demo booting such an image in a virtual machine. This showed the application up and running ready to use. I also demonstrated how promoting a build from the edge channel to stable on the store would make it available to the system wide automatic update mechanism on the device.

Read more
K. Tsakalozos

We have been quiet for a few months just because we have been busy. We were working mainly on two features that we intend to ship in the v1.14 release: the transition from dockerd to containerd, and security hardening.

The entailed changes will affect backwards compatibility and the user experience of MicroK8s, which is why we are timing them with the upcoming upstream Kubernetes release. Here we will provide a) a short description of these features, b) a way for you to test drive the new MicroK8s, and c) the steps to hold back on the release in case this is a major show stopper for you.

The transition to Containerd

We replace Dockerd with Containerd mainly for two reasons.

  • The setup of having two dockerds on the same host has proven problematic. MicroK8s brings its own dockerd, which may clash with a local dockerd users may want to have. By moving to containerd, users can apt-get install docker.io without affecting MicroK8s (see the sketch after this list). This switch also means that microk8s.docker will not be available anymore; you will have to use the docker client shipped with your distribution.
  • Performance. There is a measurable performance benefit to using containerd. This should not be a surprise, since dockerd itself uses containerd internally. With the switch to containerd we are essentially removing a layer that is docker-specific.
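
To illustrate the first point, after the switch the host’s docker and MicroK8s no longer step on each other (a sketch):

sudo apt-get install docker.io                 # the distro's docker, untouched by MicroK8s
docker ps                                      # talks to the host dockerd only
microk8s.kubectl get pods --all-namespaces     # MicroK8s workloads now run under containerd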

Hardening MicroK8s security

MicroK8s is a developer’s tool. It is not meant to be deployed in production or in hostile environments. Having said that, we tried to make MicroK8s more secure by:

  • Exposing as few services as we can, with access restrictions on each service we have to leave open.
  • A CA and certificates are created once at deployment time.

Test drive the upcoming patches

We have prepared a temporary branch you could use to evaluate the above changes:

snap install microk8s --classic --channel=1.13/edge/secure-containerd

If you have MicroK8s already installed you can switch the channel your MicroK8s is following:

snap refresh --channel=1.13/edge/secure-containerd microk8s

Try it out and let us know if we missed anything.

“Thanks, I’ll pass”

Release series up until now will not be affected by this change. This means you can have your MicroK8s deployment follow the 1.13 track:

snap refresh --channel=1.13/stable microk8s

Summing up

An important update is coming. Make sure you give it a try with:

snap install microk8s --classic --channel=1.13/edge/secure-containerd

If you do not like what you see tell us what breaks by filing an issue and keep using the 1.13 track.

Containerd on a more secure MicroK8s was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

MicroK8s in the Wild

As the popularity of MicroK8s grows I would like to take the time to mention some projects that use this micro Kubernetes distribution. But before that, let me do some introductions. For those unfamiliar with it, Kubernetes is an open source container orchestrator: it takes care of deploying, upgrading and provisioning your applications. This is one of the rare occasions where all the major players (Google, Microsoft, IBM, Amazon etc) have flocked around a single framework, making it an unofficial standard.

MicroK8s is a distribution of Kubernetes. It is a snap package that sets up a Kubernetes cluster on your machine. You can have a Kubernetes cluster for local development, CI/CD or just for getting to know Kubernetes with just a:

sudo snap install microk8s --classic

If you are on a Mac or Windows you will need a Linux VM.

In what follows you will find some examples on how people are using MicroK8s. Note that this is not a complete list of MicroK8s usages, it is just some efforts I happen to be aware of.

Spring Cloud Kubernetes

This project is using CircleCI for CI/CD. MicroK8s provides a local Kubernetes cluster where integration tests are run. The addons enabled are dns, the docker registry and Istio. The integration tests need to plug into the Kubernetes cluster using the kubeconfig file and the socket to dockerd. This work was introduced in this Pull Request (thanks George) and it gave us the incentive to add a microk8s.status command that waits for the cluster to come online. For example, we can wait up to 5 minutes for MicroK8s to come up with:

microk8s.status --wait-ready --timeout=300
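
Putting it together, a CI step that reproduces such an environment might look like this (a sketch, using the addons named above):

sudo snap install microk8s --classic
microk8s.enable dns registry istio
microk8s.status --wait-ready --timeout=300
microk8s.config > kubeconfig    # hand this file to the integration tests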

OpenFaaS on MicroK8s

It was at this year’s Config Management Camp that I met Joe McCobe, the author of “Deploy OpenFaaS with MicroK8s”. I will just repeat his words: “was blown away by the speed and ease with which I could get a basic lab environment up and running”.

What about Kubeless?

It seems the ease of deploying MicroK8s goes well with the ease of development that serverless frameworks offer. Users of Kubeless are also kicking the tires on MicroK8s. Have a look at “Files upload from Kubeless on MicroK8s to Minio” and “Serverless MicroK8s Kubernetes”.

SUSE Cloud Application Platform (CAP) on Microk8s

In his blog post Dimitris describes in detail all the configuration he had to do to get the software from SUSE to run on MicroK8s. The most interesting part is the motivation behind this effort. As he says “… MicroK8s… use your machine’s resources without you having to decide on a VM size beforehand.” As he explained to me his application puts significant memory pressure only during bootstrap. MicroK8s enabled him to reclaim the unused memory after the initialization phase.

Kubeflow

Kubeflow is the missing link between Kubernetes and AI/ML. Canonical is actively involved in this, so… you should definitely check it out. Sure, I am biased, but let me tell you a true story. I have a friend who was given three machines to deploy Tensorflow and run some experiments. She did not have any prior experience at the time, so… none of the three nodes was set up in exactly the same way. There was always something off. This head-scratching situation is just one reason to use Kubeflow.

Transcrobes

Transcrobes comes from an active member of the MicroK8s community. It serves as a language-learning aid. “The system knows what you know, so can give you just the right amount of help to be able to understand the words you don’t know but gets out of the way for the stuff you do know.” Here MicroK8s is used for quick prototyping. We wish you all the best, Anton, good luck!

Summing Up

We have seen a number of interesting use cases that include CI/CD, serverless programming, lab setup, rapid prototyping and application development. If you have a MicroK8s use case, do let us know. Come and say hi at #microk8s on the Kubernetes slack and/or issue a Pull Request against our MicroK8s In The Wild page.

MicroK8s in the Wild was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
jdstrand

Some time ago we started alerting publishers when their stage-packages received a security update since the last time they built a snap. We wanted to strike the right balance for the alerts, so the service currently only alerts you when there are new security updates against your stage-packages. In this manner, you can choose not to rebuild your snap (eg, since it doesn’t use the affected functionality of the vulnerable package) and not be nagged every day that you are out of date.

As nice as that is, sometimes you want to check these things yourself or perhaps hook the alerts into some form of automation or tool. While the review-tools had all of the pieces so you could do this, it wasn’t as straightforward as it could be. Now with the latest stable revision of the review-tools, this is easy:

$ sudo snap install review-tools
$ review-tools.check-notices \
  ~/snap/review-tools/common/review-tools_656.snap
{'review-tools': {'656': {'libapt-inst2.0': ['3863-1'],
                          'libapt-pkg5.0': ['3863-1'],
                          'libssl1.0.0': ['3840-1'],
                          'openssl': ['3840-1'],
                          'python3-lxml': ['3841-1']}}}

The review-tools snap is a strict mode snap, and while it plugs the home interface, that is only for convenience, so I typically disconnect the interface and put things in its SNAP_USER_COMMON directory, as I did above.

Since it is now super easy to check a snap on disk, with a little scripting and a cron job you can generate a machine-readable report whenever you want. Eg, you can do something like the following:

$ cat ~/bin/check-snaps
#!/bin/sh
set -e

snaps="review-tools/stable rsync-jdstrand/edge"

tmpdir=$(mktemp -d -p "$HOME/snap/review-tools/common")
cleanup() {
    rm -fr "$tmpdir"
}
trap cleanup EXIT HUP INT QUIT TERM

cd "$tmpdir" || exit 1
for i in $snaps ; do
    snap=$(echo "$i" | cut -d '/' -f 1)
    channel=$(echo "$i" | cut -d '/' -f 2)
    snap download "$snap" "--$channel" >/dev/null
done
cd - >/dev/null || exit 1

/snap/bin/review-tools.check-notices "$tmpdir"/*.snap

or, if you already have the snaps on disk somewhere, just do:

$ /snap/bin/review-tools.check-notices /path/to/snaps/*.snap

Now you can add the above to cron or some automation tool as a reminder of what needs updates.
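
For example, a crontab entry to run the check every Monday morning might look like this (a hypothetical schedule; cron will mail you the JSON report if mail is set up):

0 6 * * 1 $HOME/bin/check-snaps

Enjoy!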

Read more
K. Tsakalozos

MicroK8s is a local deployment of Kubernetes. Let’s skip all the technical details and just accept that Kubernetes does not run natively on MacOS or Windows. You may be thinking “I have seen Kubernetes running on a MacOS laptop, what kind of sorcery was that?” It’s simple: Kubernetes was running inside a VM. You might not see the VM, or it might not even be a full-blown virtual system, but some level of virtualisation is there. This is exactly what we will show here. We will set up a VM and install MicroK8s inside it. After the installation we will discuss how to use the in-VM Kubernetes.

A multipass VM on MacOS

Arguably the easiest way to get an Ubuntu VM on MacOS is with multipass. Head to the releases page and grab the latest package. Installing it is as simple as double-clicking on the .pkg file.

To start a VM with MicroK8s we:

multipass launch --name microk8s-vm --mem 4G --disk 40G
multipass exec microk8s-vm -- sudo snap install microk8s --classic
multipass exec microk8s-vm -- sudo iptables -P FORWARD ACCEPT

Make sure you reserve enough resources to host your deployments; above, we got 4GB of RAM and 40GB of hard disk. We also make sure packets to/from the pod network interface can be forwarded to/from the default interface.

Our VM has an IP that you can check with:

> multipass list
Name State IPv4 Release
microk8s-vm RUNNING 10.72.145.216 Ubuntu 18.04 LTS

Take a note of this IP since our services will become available there.

Other multipass commands you may find handy:

  • Get a shell inside the VM:
multipass shell microk8s-vm
  • Shutdown the VM:
multipass stop microk8s-vm
  • Delete and cleanup the VM:
multipass delete microk8s-vm 
multipass purge

Using MicroK8s

To run a command in the VM we can get a multipass shell with:

multipass shell microk8s-vm

To execute a command without getting a shell we can use multipass exec like so:

multipass exec microk8s-vm -- /snap/bin/microk8s.status

A third way to interact with MicroK8s is via the Kubernetes API server listening on port 8080 of the VM. We can use MicroK8s’ kubeconfig file with a local installation of kubectl to access the in-VM Kubernetes. Here is how:

multipass exec microk8s-vm -- /snap/bin/microk8s.config > kubeconfig

Install kubectl on the host machine and then use the kubeconfig:

kubectl --kubeconfig=kubeconfig get all --all-namespaces

Accessing in-VM services — Enabling addons

Let’s first enable dns and the dashboard. In the rest of this blog we will be showing different methods of accessing Grafana:

multipass exec microk8s-vm -- /snap/bin/microk8s.enable dns dashboard

We check the deployment progress with:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get all --all-namespaces

After all services are running we can proceed into looking how to access the dashboard.

Grafana, part of our dashboard

Accessing in-VM services — Use the Kubernetes API proxy

The API server is on port 8080 of our VM. Let’s see what the proxy path looks like:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl cluster-info
...
Grafana is running at http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
...

By replacing 127.0.0.1 with the VM’s IP, 10.72.145.216 in this case, we can reach our service at:

http://10.72.145.216:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy

Accessing in-VM services — Setup a proxy

In a very similar fashion to what we just did above, we can ask Kubernetes to create a proxy for us. We need to request that the proxy be available on all interfaces and accept connections from everywhere, so that the host can reach it.

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl proxy --address='0.0.0.0' --accept-hosts='.*'
Starting to serve on [::]:8001

Leave the terminal with the proxy open. This proxy serves on port 8001, so, again replacing 127.0.0.1 with the VM’s IP, we reach the dashboard through:

http://10.72.145.216:8001/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy

Make sure you go through the official docs on constructing the proxy paths.

Accessing in-VM services — Use a NodePort service

We can expose our service in a port on the VM and access it from there. This approach is using the NodePort service type. We start by spotting the deployment we want to expose:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get deployment -n kube-system  | grep grafana
monitoring-influxdb-grafana-v4 1 1 1 1 22h

Then we create the NodePort service:

multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl expose deployment.apps/monitoring-influxdb-grafana-v4 -n kube-system --type=NodePort

We have now a port for the Grafana service:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get services -n kube-system  | grep NodePort
monitoring-influxdb-grafana-v4 NodePort 10.152.183.188 <none> 8083:32580/TCP,8086:32152/TCP,3000:32720/TCP 13m

Grafana is on port 3000, mapped here to 32720. This port is randomly selected, so it may vary for you. In our case, the service is available on 10.72.145.216:32720.
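
From the host, the service is then one HTTP request away (IP and port taken from the listings above; yours will differ):

curl -s http://10.72.145.216:32720/    # returns the Grafana login page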

Conclusions

MicroK8s on MacOS (or Windows) needs a VM to run. This is no different than any other local Kubernetes solution, and it comes with some nice benefits. The VM gives you an extra layer of isolation: instead of using your host and potentially exposing the Kubernetes services to the outside world, you have full control over what others can see. Of course, this isolation comes with some extra administrative overhead that may not be worthwhile for a dev environment. Give it a try and tell us what you think!

Links

CanonicalLtd/multipass


MicroK8s on MacOS was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
K. Tsakalozos

Looking at the configuration of a Kubernetes node sounds like a simple thing, yet it is not so obvious.

The arguments kubelet takes come either as command line parameters or from a configuration file you pass with --config. It seems straightforward to do a ps -ef | grep kubelet and look in the file you see after the --config parameter. Simple, right? But… are you sure you got all the arguments right? What if Kubernetes defaulted to a value you did not want? What if you do not have shell access to a node?

There is a way to query the Kubernetes API for the configuration a node is running with: api/v1/nodes/<node_name>/proxy/configz. Let’s see this in a real deployment.

Deploy a Kubernetes Cluster

I am using the Canonical Distribution of Kubernetes (CDK) on AWS here but you can use whichever cloud and Kubernetes installation method you like.

juju bootstrap aws
juju deploy canonical-kubernetes

..and wait for the deployment to finish

watch juju status 

Change a Configuration

CDK allows for configuring both the command line arguments and the extra arguments of the config file. Here we add arguments to the config file:

juju config kubernetes-worker kubelet-extra-config='{imageGCHighThresholdPercent: 60, imageGCLowThresholdPercent: 39}'

A great question is how we got the imageGCHighThresholdPercent literal. At the time of this writing the official upstream docs point you to the type definitions; a rather ugly approach. There is an EvictionHard property in the type definitions; however, if you look at the example in the upstream docs you will see the same property in lowercase.

Check the Configuration

We will need two shells. On the first one we will start the API proxy and on the second we will query the API. On the first shell:

juju ssh kubernetes-master/0
kubectl proxy

Now that we have the proxy at 127.0.0.1:8001 on the kubernetes-master, we use a second shell to get a node name and query the API:

juju ssh kubernetes-master/0
kubectl get no
curl -sSL "http://localhost:8001/api/v1/nodes/<node_name>/proxy/configz" | python3 -m json.tool

Here is a full run:

juju ssh kubernetes-master/0
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1023-aws x86_64)
* Documentation:  https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Mon Oct 22 10:40:40 UTC 2018
System load:  0.11               Processes:              115
Usage of /:   13.7% of 15.45GB   Users logged in:        1
Memory usage: 20%                IP address for ens5:    172.31.0.48
Swap usage:   0%                 IP address for fan-252: 252.0.48.1
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.
Last login: Mon Oct 22 10:38:14 2018 from 2.86.54.15
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@ip-172-31-0-48:~$ kubectl get no
NAME STATUS ROLES AGE VERSION
ip-172-31-14-174 Ready <none> 41m v1.12.1
ip-172-31-24-80 Ready <none> 41m v1.12.1
ip-172-31-63-34 Ready <none> 41m v1.12.1
ubuntu@ip-172-31-0-48:~$ curl -sSL "http://localhost:8001/api/v1/nodes/ip-172-31-14-174/proxy/configz" | python3 -m json.tool
{
    "kubeletconfig": {
        "syncFrequency": "1m0s",
        "fileCheckFrequency": "20s",
        "httpCheckFrequency": "20s",
        "address": "0.0.0.0",
        "port": 10250,
        "tlsCertFile": "/root/cdk/server.crt",
        "tlsPrivateKeyFile": "/root/cdk/server.key",
        "authentication": {
            "x509": {
                "clientCAFile": "/root/cdk/ca.crt"
            },
            "webhook": {
                "enabled": true,
                "cacheTTL": "2m0s"
            },
            "anonymous": {
                "enabled": false
            }
        },
        "authorization": {
            "mode": "Webhook",
            "webhook": {
                "cacheAuthorizedTTL": "5m0s",
                "cacheUnauthorizedTTL": "30s"
            }
        },
        "registryPullQPS": 5,
        "registryBurst": 10,
        "eventRecordQPS": 5,
        "eventBurst": 10,
        "enableDebuggingHandlers": true,
        "healthzPort": 10248,
        "healthzBindAddress": "127.0.0.1",
        "oomScoreAdj": -999,
        "clusterDomain": "cluster.local",
        "clusterDNS": [
            "10.152.183.93"
        ],
        "streamingConnectionIdleTimeout": "4h0m0s",
        "nodeStatusUpdateFrequency": "10s",
        "nodeLeaseDurationSeconds": 40,
        "imageMinimumGCAge": "2m0s",
        "imageGCHighThresholdPercent": 60,
        "imageGCLowThresholdPercent": 39,
        "volumeStatsAggPeriod": "1m0s",
        "cgroupsPerQOS": true,
        "cgroupDriver": "cgroupfs",
        "cpuManagerPolicy": "none",
        "cpuManagerReconcilePeriod": "10s",
        "runtimeRequestTimeout": "2m0s",
        "hairpinMode": "promiscuous-bridge",
        "maxPods": 110,
        "podPidsLimit": -1,
        "resolvConf": "/run/systemd/resolve/resolv.conf",
        "cpuCFSQuota": true,
        "cpuCFSQuotaPeriod": "100ms",
        "maxOpenFiles": 1000000,
        "contentType": "application/vnd.kubernetes.protobuf",
        "kubeAPIQPS": 5,
        "kubeAPIBurst": 10,
        "serializeImagePulls": true,
        "evictionHard": {
            "imagefs.available": "15%",
            "memory.available": "100Mi",
            "nodefs.available": "10%",
            "nodefs.inodesFree": "5%"
        },
        "evictionPressureTransitionPeriod": "5m0s",
        "enableControllerAttachDetach": true,
        "makeIPTablesUtilChains": true,
        "iptablesMasqueradeBit": 14,
        "iptablesDropBit": 15,
        "failSwapOn": false,
        "containerLogMaxSize": "10Mi",
        "containerLogMaxFiles": 5,
        "configMapAndSecretChangeDetectionStrategy": "Watch",
        "enforceNodeAllocatable": [
            "pods"
        ]
    }
}
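
Rather than eyeballing the full dump, you can filter out just the two values we changed (a sketch, reusing python3 in the same spirit as json.tool above):

curl -sSL "http://localhost:8001/api/v1/nodes/ip-172-31-14-174/proxy/configz" | \
python3 -c 'import json,sys; c=json.load(sys.stdin)["kubeletconfig"]; print(c["imageGCHighThresholdPercent"], c["imageGCLowThresholdPercent"])'

This should print “60 39”, confirming the juju config change took effect.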

Summing Up

There is a way to get the configuration of a live Kubernetes node through the Kubernetes API (api/v1/nodes/<node_name>/proxy/configz). This might be handy if you want to code against Kubernetes or if you do not want to get into the intricacies of your particular cluster setup.

How to Inspect the Configuration of a Kubernetes Node was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
albertomilone@gmail.com

Ubuntu 18.04 marked the transition to a new, more granular, packaging of the NVIDIA drivers, which, unfortunately, combined with a change in logind, and with the previous migration from Lightdm to Gdm3, caused (Intel+NVIDIA) hybrid laptops to stop working the way they used to in Ubuntu 16.xx and older.

The following are the main issues experienced by our users:

  • An increase in power consumption when using the power saving profile (i.e. when the discrete GPU is off).
  • The inability to switch between power profiles on log out (thus requiring a reboot); the switch itself is sketched below.
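
For context, the profile switch in question is the one driven by the nvidia-prime tooling (a sketch; package and commands as in standard Ubuntu installs):

sudo prime-select intel    # power saving profile: discrete GPU off
sudo prime-select nvidia   # performance profile: discrete GPU on
prime-select query         # report the profile currently in use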

We have backported a commit to solve the problem with logind, and I have worked on a few changes in gpu-manager, and in the other key components, to improve the experience when using Gdm3.

NOTE: fixes for Lightdm, and for SDDM still need some work, and will be made available in the next update.

Both issues should be fixed in Ubuntu 18.10, and I have backported my work to Ubuntu 18.04, which is now available for testing.

If you run Ubuntu 18.04 and own a hybrid laptop with an Intel and an NVIDIA GPU (supported by the 390 NVIDIA driver), we would love to get your feedback on the updates in Ubuntu 18.04.

If you are interested, head over to the bug report, follow the instructions at the end of the bug description, and let us know about your experience.

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.5.0 beta 1 has been released. Beta 1 features:

  • Complete proxying of machine communication through the rack controller. This includes DNS, HTTP to the metadata server, proxying with Squid and, new in 2.5.0 beta 1, syslog.
  • CentOS 7 & RHEL 7 storage support (Requires a new Curtin version available in PPA).
  • Full networking for KVM pods.
  • ESXi network configuration.

For more information, please refer to MAAS Discourse [1].

[1]: https://discourse.maas.io/t/maas-2-5-0-beta-1-released/174

Read more
K. Tsakalozos

A friend once asked, why would one prefer microk8s over minikube?… We never spoke since. True story!

That was a hard question, especially for an engineer. The answer is not so obvious largely because it has to do with personal preferences. Let me show you why.

Microk8s-wise this is what you have to do to have a local Kubernetes cluster with a registry:

sudo snap install microk8s --edge --classic
microk8s.enable registry

How is this great?

  • It is super fast! A couple of hundred MB over the internet tubes and you are all set.
  • You skip the pain of going through the docs for setting up and configuring Kubernetes with persistent storage and the registry.

So why is this bad?

  • As a Kubernetes engineer you may want to know what happens under the hood. What got deployed? What images? Where?
  • As a Kubernetes user you may want to configure the registry. Where are the images stored? Can you change any access credentials?

Do you see why this is a matter of preference? Minikube is a mature solution for setting up Kubernetes in a VM. It runs everywhere (even on Windows) and it does only one thing: it sets up a Kubernetes cluster.

On the other hand, microk8s offers Kubernetes as an application. It is opinionated and it takes a step towards automating common development workflows. Speaking of development workflows...

The full story with the registry

The registry shipped with microk8s is available on port 32000 of the localhost. It is an insecure registry because, let’s be honest, who cares about security when doing local development :) .

And it’s getting better, check this out! The docker daemon used by MicroK8s is configured to trust this insecure registry. It is this daemon we talk to when we want to upload images. The easiest way to do so is by using the microk8s.docker command that comes with MicroK8s:

# Lets get a Docker file first
wget https://raw.githubusercontent.com/nginxinc/docker-nginx/ddbbbdf9c410d105f82aa1b4dbf05c0021c84fd6/mainline/stretch/Dockerfile
# And build it
microk8s.docker build -t localhost:32000/nginx:testlocal .
microk8s.docker push localhost:32000/nginx:testlocal

If you prefer to use an external docker client, you should point it to the socket dockerd is listening on:

docker -H unix:///var/snap/microk8s/docker.sock ps

To use an image from the local registry just reference it in your manifests:

apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  namespace: default
spec:
  containers:
  - name: nginx
    image: localhost:32000/nginx:testlocal
  restartPolicy: Always

And deploy with:

microk8s.kubectl create -f the-above-awesome-manifest.yaml
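
You can then confirm that the pod is up and running the locally built image (using the pod name from the manifest above):

microk8s.kubectl get pod my-nginx
microk8s.kubectl describe pod my-nginx | grep Image: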

Microk8s and registry

What to keep from this post?

You want Kubernetes? We deliver it as a (sn)app!

You want to see your tool-chain in microk8s? Drop us a line. Send us a PR!

We are pleased to see happy Kubernauts!

Those of you who are here for the gossip: he was not that good of a friend (obviously!). We only met at a meetup :) !

Microk8s Docker Registry was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
admin

Hello MAASters!

MAAS 2.4.1, a bug-fix release, is now out. Please see more details on discourse.maas.io [1].

[1]: https://discourse.maas.io/t/maas-2-4-1-released/148

Read more