Canonical Voices

Posts tagged with 'canonical'

admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 (final) is now available!
This new MAAS release introduces a set of exciting features and improvements to the performance, stability and usability of MAAS.
MAAS 2.4.0 will be immediately available in the PPA, but it is in the process of being SRU’d into Ubuntu Bionic.
PPA Availability
MAAS 2.4.0 is currently available for Ubuntu Bionic in ppa:maas/stable, ahead of its arrival in the Ubuntu archive.
sudo add-apt-repository ppa:maas/stable
sudo apt-get update
sudo apt-get install maas
What’s new?
Most notable MAAS 2.4.0 changes include:
  • Performance improvements across the backend & UI.
  • KVM pod support for storage pools (over API).
  • DNS UI to manage resource records.
  • Audit Logging
  • Machine locking
  • Expanded commissioning script support for firmware upgrades & HBA changes.
  • NTP services now provided with Chrony.
For the full list of features & changes, please refer to the release notes:

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 beta 2 is now released and is available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 beta 2 is currently available in Bionic’s Archive or in the following PPA:
ppa:maas/next

MAAS 2.4.0 (beta2)

New Features & Improvements

MAAS Internals optimisation

Continuing with MAAS’ internal surgery, a few more improvements have been made:

  • Backend improvements

  • Improve the image download process, to ensure rack controllers immediately start image download after the region has finished downloading images.

  • Reduce the service monitor interval to 30 seconds. The monitor tracks the status of the various services provided alongside MAAS (DNS, NTP, Proxy).

  • UI Performance optimizations for machines, pods, and zones, including better filtering of node types.

KVM pod improvements

Continuing with the improvements for KVM pods, beta 2 adds the ability to:

  • Define a default storage pool

This feature allows users to select the default storage pool to use when composing machines, in case multiple pools have been defined. Otherwise, MAAS will pick a storage pool automatically, depending on which pool has the most available space.

  • API – Allow allocating machines with different storage pools

Allows users to request a machine with multiple storage devices from different storage pools. This feature uses storage tags to automatically map a storage pool in libvirt to a storage tag in MAAS.
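
As a rough illustration of how such a request could look from the MAAS CLI. This is a hedged sketch: the profile name ‘admin’, the labels, sizes and tag names ‘pool1’/‘pool2’ are all assumptions, and the constraint format should be checked against the 2.4 API documentation.

# Sketch: request two disks, each matched to a different storage
# pool via its MAAS storage tag (tag names assumed).
maas admin machines allocate \
    storage="root:32(pool1),data:500(pool2)"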

UI Improvements

  • Remove remaining YUI in favor of AngularJS.

As of beta 2, MAAS has now fully dropped the use of YUI for the web interface. The last sections using YUI were the Settings page and the login page; both have now been transitioned to AngularJS.

  • Re-organize Settings page

The MAAS settings have now been reorganized into multiple tabs.

Minor improvements

  • API for default DNS domain selection

Adds the ability to define the default DNS domain. This is currently only available via the API.

  • Vanilla framework upgrade

We would like to thank the Ubuntu web team for their hard work upgrading MAAS to the latest version of the Vanilla framework. MAAS is looking better and more consistent every day!

Bug fixes

Please refer to the following for all 37 bug fixes in this release, which address issues with MAAS across the board:

https://launchpad.net/maas/+milestone/2.4.0beta2

 

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 beta 1 and python-libmaas 0.6.0 have now been released and are available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 beta 1 is currently available in Bionic’s -proposed pocket, waiting to be published into Ubuntu, or in the following PPA:
ppa:maas/next

MAAS 2.4.0 (beta1)

Important announcements

Debian package maas-dns no longer needed

The Debian package ‘maas-dns’ has now been made a transitional package. This package provided some post-installation configuration to prepare bind to be managed by MAAS, but it required maas-region-api to be installed first.

In order to streamline the installation and make it easier for users to install HA environments, the configuration of bind has now been integrated into the ‘maas-region-api’ package itself, and ‘maas-dns’ has been made a dummy transitional package that can now be safely removed.

New Features & Improvements

MAAS Internals optimization

Major internal surgery to MAAS 2.4 continues, improving various areas not visible to the user. These updates will advance the overall performance of MAAS in larger environments. The improvements include:

  • Database query optimizations

Further reductions in the number of database queries, significantly cutting the queries made by the boot source cache image import process from over 100 down to fewer than 5.

  • UI optimizations

MAAS is being optimized to reduce the amount of data sent over the websocket API to render the UI. This is targeted at processing data only for viewable information, improving various legacy areas. The work done for this release includes:

  • Only load historic script results (e.g. old commissioning/testing results) when requested / accessed by the user, instead of always making them available over the websocket.

  • Only load node objects in listing pages when the specific object type is requested. For instance, only load machines when accessing the machines tab instead of also loading devices and controllers.

  • Change the UI mechanism to request OS information only on initial page load, rather than every 10 seconds.

KVM pod improvements

Continuing with the improvements from alpha 2, this new release provides more updates to KVM pods:

  • Added overcommit ratios for CPU and memory.

When composing or allocating machines, previous versions of MAAS would let the user request as many resources as they wanted, regardless of the available resources. This created issues when dynamically allocating machines, as users could create an effectively unbounded number of machines even when the physical host was already overcommitted. This feature allows administrators to control the amount of resources they are willing to overcommit.

  • Added ability to filter which pods or pod types to avoid when allocating machines

Provides users with the ability to select which pods or pod types not to allocate resources from. This is particularly useful when dynamically allocating machines in a MAAS with a large number of pods. A sketch of such an allocation follows below.
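
As a rough sketch of how this could look over the API; the parameter names below are assumptions based on the description above, so check the 2.4 API reference before relying on them:

# Sketch: avoid a specific pod, or a whole pod type, when allocating.
maas admin machines allocate not_pod=my-kvm-pod
maas admin machines allocate not_pod_type=virsh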

DNS UI Improvements

MAAS 2.0 introduced the ability to manage DNS, allowing not only the creation of new domains, but also the creation of resource records such as A, AAAA, CNAME, etc. However, most of this functionality has only been available over the API, as the UI only allowed adding and removing domains.

As of 2.4, MAAS adds the ability to manage not only DNS domains but also resource records in the UI:

  • Added ability to edit domains (e.g. TTL, name, authoritative).

  • Added ability to create and delete resource records (A, AAAA, CNAME, TXT, etc).

  • Added ability to edit resource records.

Navigation UI improvements

MAAS 2.4 beta 1 is changing the top-level navigation:

  • Rename ‘Zones’ to ‘AZs’

  • Add ‘Machines, Devices, Controllers’ to the top-level navigation instead of ‘Hardware’.

Minor improvements

A few notable improvements being made available in MAAS 2.4 include:

  • Add ability to force the boot type for IPMI machines.

Hardware manufacturers have been upgrading their BMC firmware versions to be more compliant with the Intel IPMI 2.0 spec. Unfortunately, the IPMI 2.0 spec has made changes that are not backward compatible for users. For example, if the administrator configured their machine to always PXE boot over EFI, and a user executed an IPMI command without specifying the boot type, the machine would use the value configured in the BIOS. With these new changes, the user is required to always specify a boot type, avoiding a fallback to the BIOS.

As such, MAAS now allows the selection of a boot type (auto, legacy, efi) to force the machine to always PXE with the desired type (on the next boot only). A sketch of the equivalent ipmitool invocation appears after this list.

  • Add ability, via the API, to skip the BMC configuration on commissioning.

Provides an API option to skip the automatic BMC configuration during commissioning for IPMI systems. This option helps admins keep credentials provided over the API when adding new nodes; see the sketch below.
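
To make the boot-type behaviour concrete, this is roughly what forcing an EFI PXE boot for the next boot looks like at the ipmitool level (host and credentials are placeholders; MAAS drives the BMC itself, so this is only an illustration of the underlying mechanism):

# Force the next boot (only) to PXE in EFI mode via IPMI 2.0.
ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> \
    chassis bootdev pxe options=efiboot

And a sketch of skipping the BMC configuration when commissioning over the CLI; the skip_bmc_config parameter name is inferred from the description above, so treat it as an assumption:

# Commission a machine without auto-configuring its BMC credentials.
maas admin machine commission <system-id> skip_bmc_config=1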

Bug fixes

Please refer to the following for all 32 bug fixes in this release.

https://launchpad.net/maas/+milestone/2.4.0beta1

 

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 alpha 2 has now been released and is available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 alpha 2 is available in the Bionic -proposed archive or in the following PPA:
ppa:maas/next

MAAS 2.4.0 (alpha2)

Important announcements

NTP services now provided by Chrony

Starting with 2.4 Alpha 2, and in common with changes being made to Ubuntu Server, MAAS replaces ‘ntpd’ with Chrony for the NTP protocol. MAAS will handle the upgrade process and automatically resume NTP service operation.

Vanilla CSS Framework Transition

MAAS 2.4 is undergoing a transition to a new version of the Vanilla CSS framework, which will bring a fresher look to the MAAS UI. This transition is currently a work in progress and not all of the UI has been fully updated, so please expect to see some inconsistencies in this new release.

New Features & Improvements

NTP services now provided by Chrony.

Starting from MAAS 2.4 alpha 2, chrony is now the default NTP service, replacing ntpd. This work has been done to align with the Ubuntu Server and Security teams, which support chrony instead of ntpd. MAAS will continue to provide NTP services exactly the same way, handling the upgrade process transparently, and users will not be affected by the change. This means that:

  • MAAS will configure chrony as peers on all Region Controllers
  • MAAS will configure chrony as a client of peers for all Rack Controllers
  • Machines will use the Rack Controllers as they do today
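
For the curious, a minimal sketch of what a chrony configuration following this model could resemble on a rack controller. The addresses are placeholders and MAAS generates its own configuration, so this is illustrative only:

# /etc/chrony/chrony.conf (sketch; MAAS writes the real file)
# Rack controllers sync from the region controllers...
server 10.0.0.2 iburst
server 10.0.0.3 iburst
# ...and serve time to the machines they manage.
allow 10.0.0.0/16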

MAAS Internals optimization

MAAS 2.4 is currently undergoing major surgery to improve various areas of operation that are not visible to the user. These updates will improve the overall performance of MAAS in larger environments. These improvements include:

  • AsyncIO based event loop
    • MAAS has an event loop which performs various internal actions. In older versions of MAAS, the event loop was managed by the default twisted event loop. MAAS now uses an asyncio based event loop, driven by uvloop, which is targeted at improving internal performance.

  • Improved daemon management
    • MAAS has changed the way daemons are run to allow users to see both ‘regiond’ and ‘rackd’ as processes in the process list.
    • As part of these changes, regiond workers are now managed by a master regiond process. In older versions of MAAS each worker was directly run by systemd. The master process is now in charge of ensuring workers are running at all times, and re-spawning new workers in case of failures. This also allows users to see the worker hierarchy in the process list.
  • Ability to increase the number of regiond workers
    • Following the improved way MAAS daemons are run, further internal changes have been made to allow the number of regiond workers to be increased automatically. This allows MAAS to scale to handle more internal operations in larger environments.
    • While this capability is already available, it is not yet available by default. It will become available in the following milestone release.
  • Database query optimizations
    • In the process of inspecting the internal operations of MAAS, it was discovered that multiple unnecessary database queries are performed for various operations. Optimising these requires internal improvements to reduce the footprint of these operations. Some areas that have been addressed in this release include:
      • When saving node objects (e.g. making any update of a machine, device, rack controller, etc), MAAS validated changes across various fields. This required an increased number of queries for fields, even when they were not being updated. MAAS now tracks specific fields that change and only performs queries for those fields.
        • Example: To update a power state, MAAS would perform 11 queries. After these improvements, only 1 query is now performed.
      • On every transaction, MAAS performed 2 queries to update the timestamp. This has now been consolidated into a single query per transaction.
    • These changes greatly improve MAAS performance and database utilisation in larger environments. More improvements will continue to be made as we examine further areas of MAAS.
  • UI optimisations
    • MAAS is now being optimised to reduce the amount of data sent over the websocket API to render the UI. This is targeted at processing data only for viewable information, improving various legacy areas. The work done in this area so far includes:
      • Script results are only loaded for viewable nodes in the machine listing page, reducing the overall amount of data loaded.
      • The node object is updated in the websocket only when something has changed in the database, reducing the data transferred to the clients as well as the amount of internal queries.

Audit logging

Continuing with the audit logging improvements, alpha2 now adds audit logging for all user actions that affect Hardware Testing & Commissioning.

KVM pod improvements

MAAS’ KVM pod support was initially developed as a feature to help developers quickly iterate and test new functionality while developing MAAS. It has, however, become a feature that allows not only developers but also administrators to make better use of resources across their datacenter. Since the feature was initially created for developers, some capabilities were lacking. As such, in 2.4 we are improving the usability of KVM pods:

  • Pod AZs.
    MAAS now allows setting the physical zone for a pod. This helps administrators by conceptually placing their KVM pods in an AZ, which enables them to request/allocate machines on demand based on AZ. All VMs created from a pod will inherit its AZ.

  • Pod tagging
    MAAS now adds the ability to set tags for a pod. This allows administrators to use tags to allow or prevent the creation of VMs inside a given pod. For example, if the administrator requests a machine with a tag named ‘virtual’, MAAS will filter out all physical machines and only consider VMs or KVM pods for machine allocation.

Bug fixes

Please refer to the following for all bug fixes in this release.

https://launchpad.net/maas/+milestone/2.4.0alpha2

Read more
abeato

In chapter 5 of his mind-blowing “The Road to Reality”, Penrose devotes a section to complex powers, that is, to the solutions to

$$w^z~~~\text{with}~~~w,z \in \mathbb{C}$$

In this post I develop a bit more what he exposes and explore what the solutions look like with the help of some simple Python scripts. The scripts can be found in this github repo, and all the figures in this post can be replicated by running

git clone https://github.com/alfonsosanchezbeato/exponential-spiral.git
cd exponential-spiral; ./spiral_examples.py

The scripts make use of numpy and matplotlib, so make sure those are installed before running them.

Now, let’s develop the math behind this. The values for \(w^z\) can be found by using the exponential function as

$$w^z=e^{z\log{w}}=e^{z~\text{Log}~w}e^{2\pi nzi}$$

In this equation, “log” is the complex natural logarithm multi-valued function, while “Log” is one of its branches, concretely the principal value, whose imaginary part lies in the interval \((−\pi, \pi]\). In the equation we reflect the fact that \(\log{w}=\text{Log}~w + 2\pi ni\) with \(n \in \mathbb{Z}\). This shows the remarkable fact that, in the general case, we have infinite solutions for the equation. For the rest of the discussion we will separate \(w^z\) as follows:

$$w^z=e^{z~\text{Log}~w}e^{2\pi nzi}=C \cdot F_n$$

with constant \(C=e^{z~\text{Log}~w}\) and the rest being the sequence \(F_n=e^{2\pi nzi}\). Since \(C\) is a complex constant that multiplies \(F_n\), its only influence is to rotate and scale all solutions equally. Noticeably, \(w\) appears only in this constant, which shows us that the values of \(z\) are what really determines the number and general shape of the solutions. Therefore, we will concentrate on analyzing the behavior of \(F_n\), by seeing what solutions we can find when we restrict \(z\) to different domains.

Starting by restricting \(z\) to integers (\(z \in \mathbb{Z}\)), it is easy to see that there is only one resulting solution, as the factor \(F_n=e^{2\pi nzi}=1\) in this case (it just rotates the solution \(2\pi\) radians an integer number of times, leaving it unmodified). As expected, a complex number raised to an integer power has only one solution.

If we let \(z\) be a rational number (\(z=p/q\), with \(p\) and \(q\) integers chosen so the fraction is in canonical form), we obtain

$$F_n=e^{2\pi\frac{pn}{q} i}$$

which makes the sequence \(F_n\) periodic with period \(q\); that is, there are \(q\) solutions for the equation. So we have two solutions for \(w^{1/2}\), three for \(w^{1/3}\), etc., as expected, since that is the number of solutions for square roots, cube roots and so on. The values will be the vertices of a regular polygon in the complex plane. For instance, in figure 1 the solutions for \(2^{1/5}\) are displayed.
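Written out explicitly for that example, \(C=e^{\frac{1}{5}\text{Log}\,2}=2^{1/5}\) is real and positive, so the five values of figure 1 are

$$2^{1/5}e^{2\pi ni/5}~~~\text{with}~~~n=0,1,2,3,4.$$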

Figure 1

Fig. 1: The five solutions to \(2^{1/5}\)

If \(z\) is real but irrational, \(e^{2\pi nzi}\) is no longer periodic and takes infinitely many values on the unit circle, and therefore \(w^z\) has infinite values that lie on a circle of radius \(|C|\).

In the more general case, \(z \in \mathbb{C}\), we write \(z=a+bi\) with \(a\) and \(b\) real numbers, and we have

$$F_n=e^{-2\pi bn}e^{2\pi ani}.$$

There is now a scaling factor, \(e^{-2\pi bn}\), that makes the modulus of the solutions vary with \(n\), scattering them across the complex plane, while \(e^{2\pi ani}\) rotates them as \(n\) changes. The result is an infinite number of solutions for \(w^z\) that lie on an equiangular spiral in the complex plane. The spiral can be seen if we extend the domain of \(F\) to \(\mathbb{R}\), that is

$$F(t)=e^{-2\pi bt}e^{2\pi ati}~~~\text{with}~~~t \in \mathbb{R}.$$

In figure 2 we can see one example which shows some solutions to \(2^{0.4-0.1i}\), plus the spiral that passes over them.

Fig. 2: Roots for \(2^{0.4-0.1i}\)
Fig. 2: Roots and spiral for \(2^{0.4-0.1i}\)

In fact, in Penrose’s book it is stated that these values are found in the intersection of two equiangular spirals, although he leaves finding them as an exercise for the reader (problem 5.9).

Let’s see then if we can find more spirals that cross these points. We are searching for functions that have the same value as \(F(t)\) when \(t\) is an integer. We can easily verify that the family of functions

$$F_k'(t)=F(t)e^{2\pi kti}~~~\text{with}~~~k \in \mathbb{Z}$$

are compatible with this restriction, as \(e^{2\pi kti}=1\) in that case (integer \(t\)). Figures 3 and 4 represent again some solutions to \(2^{0.4-0.1i}\), \(F(t)\) (which is the same as the spiral for \(k=0\)), plus the spirals for \(k=-1\) and \(k=1\) respectively. We can see there that the solutions lie in the intersection of two spirals indeed.

Fig. 3
Fig. 3: Roots for \(2^{0.4-0.1i}\) plus spirals for k=0 and k=-1


Fig. 4
Fig. 4: Roots for \(2^{0.4-0.1i}\) plus spirals for k=0 and k=1

If we superpose these 3 spirals, the ones for \(k=1\) and \(k=-1\) also cross at places other than the complex powers, as can be seen in figure 5. But, if we choose two consecutive numbers for \(k\), the two spirals will cross only at the solutions to \(w^z\). See, for instance, figure 6, where the spirals for \(k=\{-2,-1\}\) are plotted. Any pair of such spirals fulfills Penrose’s description.

Fig. 5
Fig. 5: Roots for \(2^{0.4-0.1i}\) plus spirals for k=-1,0,1


Fig. 6
Fig. 6: Roots for \(2^{0.4-0.1i}\) plus spirals for k=-1,-2

In general, the number of places at which two spirals cross depends on the difference between their \(k\)-number. If we have, say, \(F_k’\) and \(F_l’\) with \(k>l\), they will cross when

$$t=\dots,0,\frac{1}{k-l},\frac{2}{k-l},\dots,\frac{k-l-1}{k-l},1,1+\frac{1}{k-l},\dots$$

That is, they will cross when \(t\) is an integer (at the solutions to \(w^z\)) and also at \(k-l-1\) points between consecutive solutions.
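
This crossing condition can be derived in one line from the definition of \(F_k'\), since \(F(t)\) is never zero and cancels from both sides:

$$F_k'(t)=F_l'(t)\iff e^{2\pi (k-l)ti}=1\iff (k-l)t\in\mathbb{Z}\iff t=\frac{m}{k-l}~~~\text{with}~~~m\in\mathbb{Z}.$$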

Let’s now look at another interesting special case: when \(z=bi\), that is, when it is pure imaginary. In this case, \(e^{2\pi ati}\) is \(1\), and there is no turn in the complex plane as \(t\) grows. The spiral \(F(t)\) degenerates into a half-line that starts at the origin (which is approached as \(t\to\infty\) if \(b>0\)). This can be appreciated in figure 7, where the line and the spirals for \(k=-1\) and \(k=1\) are plotted for \(20^{0.1i}\). The two spirals are mirrored around the half-line.

Fig. 7
Fig. 7: Roots for \(10^{0.1i}\), \(F(t)\), and spirals for k=-1,1

Digging more into this case, it turns out that a pure imaginary number raised to a pure imaginary power can produce a real result. For instance, for \(i^{0.1i}\), we see in figure 8 that the roots lie on the positive real half-line.

Fig. 8
Fig. 8: Roots for \(i^{0.1i}\), \(F(t)\), and spirals for k=-1,1

That something like this can produce real numbers is a curiosity that has historically intrigued mathematicians (\(i^i\) has real values too!). And with this I finish the post. It is really amusing to start playing with the values of \(w\) and \(z\); if you want to do so, you can use the Python scripts I pointed to at the beginning of the post. I hope you enjoyed the post as much as I did writing it.

Read more
Dustin Kirkland

One of the many excellent suggestions from last year's HackerNews thread, Ask HN: What do you want to see in Ubuntu 17.10?, was to refresh the Ubuntu server's command line installer:

We're pleased to introduce this new installer, which will be the default Server installer for 18.04 LTS, and solicit your feedback.

Follow the instructions below to download the current daily image and install it into a KVM.  Alternatively, you could write it to a flash drive and install a physical machine, or try it in the virtual machine of your choice (VMware, VirtualBox, etc.).

$ wget http://cdimage.ubuntu.com/ubuntu-server/daily-live/current/bionic-live-server-amd64.iso
$ qemu-img create -f raw target.img 10G
$ kvm -m 1024 -boot d -cdrom bionic-live-server-amd64.iso -hda target.img
...
$ kvm -m 1024 target.img

For those too busy to try it themselves at the moment, I've taken a series of screenshots below, for your review.

Finally, you can provide feedback, bugs, patches, and feature requests against the Subiquity project in Launchpad:

Cheers,
Dustin

Read more
Dustin Kirkland

February 2008, Canonical's office in Lexington, MA
10 years ago today, I joined Canonical, on the very earliest version of the Ubuntu Server Team!

And in the decade since, I've had the tremendous privilege to work with so many amazing people, and the opportunity to contribute so much open source software to the Ubuntu ecosystem.

Marking the occasion, I've reflected about much of my work over that time period and thought I'd put down in writing a few of the things I'm most proud of (in chronological order)...  Maybe one day, my daughters will read this and think their daddy was a real geek :-)

1. update-motd / motd.ubuntu.com (September 2008)

Throughout the history of UNIX, the "message of the day" was always manually edited and updated by the local system administrator.  Until Ubuntu's message-of-the-day.  In fact, I received an email from Dennis Ritchie and Jon "maddog" Hall, confirming this, in April 2010.  This started as a feature request for the Landscape team, but has turned out to be tremendously useful and informative to all Ubuntu users.  Just last year, we launched motd.ubuntu.com, which provides even more dynamic information about important security vulnerabilities and general news from the Ubuntu ecosystem.  Mathias Gug helped me with the design and publication.

2. manpages.ubuntu.com (September 2008)

This was the first public open source project I worked on, in my spare time at Canonical.  I had a local copy of the Ubuntu archive and I was thinking about what sorts of automated jobs I could run on it.  So I wrote some scripts that extracted the manpages out of each one, formatted them as HTML, and published them into a structured set of web directories.  10 years later, it's still up and running, serving thousands of hits per day.  In fact, this was one of the ways we were able to shrink the Ubuntu minimal image, by removing the manpages, since they're readable online.  Colin Watson and Kees Cook helped me with the initial implementation, and Matthew Nuzum helped with the CSS and Ubuntu theme in the HTML.

3. Byobu (December 2008)

If you know me at all, you know my passion for the command line UI/UX that is "Byobu".  Byobu was born as the "screen-profiles" project, over lunch at Google in Mountain View, in December of 2008, at the Ubuntu Developer Summit.  Around the lunch table, several of us (including Nick Barcet, Dave Walker, Michael Halcrow, and others), shared our tips and tricks from our own ~/.screenrc configuration files.  In Cape Town, February 2010, at the suggestion of Gustavo Niemeyer, I ported Byobu from Screen to Tmux.  Since Ubuntu Servers don't generally have GUIs, Byobu is designed to be a really nice interface to the Ubuntu command line environment.

4. eCryptfs / Ubuntu Encrypted Home Directories (October 2009)

I was familiar with eCryptfs from its inception in 2005, in the IBM Linux Technology Center's Security Team, sitting next to Michael Halcrow who was the original author.  When I moved to Canonical, I helped Michael maintain the userspace portion of eCryptfs (ecryptfs-utils) and I shepherded it into Ubuntu.  eCryptfs was super powerful, with hundreds of options and supported configurations, but all of that proved far too difficult for users at large.  So I set out to simplify it drastically, with an opinionated set of basic defaults.  I started with a simple command to mount a "Private" directory inside of your home directory, where you could stash your secrets.  A few months later, on a long flight to Paris, I managed to hack a new PAM module, pam_ecryptfs.c, that actually encrypted your entire home directory!  This was pretty revolutionary at the time -- predating Apple's FileVault or Microsoft's Bitlocker, even.  Today, tens of millions of Ubuntu users have used eCryptfs to secure their personal data.  I worked closely with Tyler Hicks, Kees Cook, Jamie Strandboge, Michael Halcrow, Colin Watson, and Martin Pitt on this project over the years.

5. ssh-import-id (March 2010)

With the explosion of virtual machines and cloud instances in 2009 / 2010, I found myself constantly copying public SSH keys around.  Moreover, given Canonical's globally distributed nature, I also regularly found myself asking someone for their public SSH keys, so that I could give them access to an instance, perhaps for some pair programming or assistance debugging.  As it turns out, everyone I worked with had a Launchpad.net account, and had their public SSH keys available there.  So I created (at first) a simple shell script to securely fetch and install those keys.  Scott Moser helped clean up that earliest implementation.  Eventually, I met Casey Marshall, who helped rewrite it entirely in Python.  Moreover, we contacted the maintainers of GitHub, and asked them to expose user public SSH keys via the API -- which they did!  Now, ssh-import-id is integrated directly into Ubuntu's new subiquity installer and used by many other tools, such as cloud-init and MAAS.

6. Orchestra / MAAS (August 2011)

In 2009, Canonical purchased 5 Dell laptops, which was the Ubuntu Server team's first "cloud".  These laptops were our very first lab for deploying and testing Eucalyptus clouds.  I was responsible for those machines at my house for a while, and I automated their installation with PXE, TFTP, DHCP, DNS, and a ton of nasty debian-installer preseed data.  That said -- it worked!  As it turned out, Scott Moser and Mathias Gug had both created similar setups at their houses for the same reason.  I was mentoring a new hire at Canonical, named Andres Rodriguez at the time, and he took over our part-time hacks and we worked together to create the Orchestra project.  Orchestra itself was short-lived.  It was severely limited by Cobbler as a foundation technology.  So the Orchestra project was killed by Canonical.  But, six months later, a new project was created, based on the same general concept -- physical machine provisioning at scale -- with an entire squad of engineers led by...Andres Rodriguez :-)  MAAS today is easily one of the most important projects in the Ubuntu ecosystem and one of the most successful products in Canonical's portfolio.

7. pollinate / pollen / entropy.ubuntu.com (February 2014)

In 2013, I set out to secure Ubuntu at large from a set of attacks stemming from insufficient entropy at first boot.  This was especially problematic in virtual machine instances, in public clouds, where every instance is, by design, exactly identical to many others.  Moreover, the first thing that instance does, is usually ... generate SSH keys.  This isn't hypothetical -- it's quite real.  Raspberry Pi's running Debian were deemed susceptible to this exact problem in November 2015.  So I designed and implemented a client (a shell script that runs at boot, and fetches some entropy from one to many sources), as well as a high-performance server (golang).  The client is the 'pollinate' script, which runs on the first boot of every Ubuntu server, and the server is the cluster of physical machines processing hundreds of requests per minute at entropy.ubuntu.com.  Many people helped review the design and implementation, including Kees Cook, Jamie Strandboge, Seth Arnold, Tyler Hicks, James Troup, Scott Moser, Steve Langasek, Gustavo Niemeyer, and others.

8. The Orange Box (May 2014)

In December of 2011, in my regular 1:1 with my manager, Mark Shuttleworth, I told him about these new "Intel NUCs", which I had bought and placed around my house.  I had 3, each of which ran Ubuntu and was attached to a TV around the house, as a media player (music, videos, pictures, etc).  In their spare time, though, they were OpenStack Nova nodes, capable of running a couple of virtual machines.  Mark immediately asked, "How many of those could you fit into a suitcase?"  Within 24 hours, Mark had reached out to the good folks at TranquilPC and introduced me to my new mission -- designing the Orange Box.  I worked with the Tranquil folks through Christmas, and we took our first delivery of 5 of these boxes in January of 2014.  Each chassis held 10 little Intel NUC servers, and a switch, as well as a few peripherals.  Effectively, it's a small data center that travels.  We spent the next 4 months working on the hardware under wraps and then unveiled them at the OpenStack Summit in Atlanta in May 2014.  We've gone through a couple of iterations on the hardware and software over the last 4 years, and these machines continue to deliver tremendous value, from live demos on the booth, to customer workshops on premises, or simply accelerating our own developer productivity by "shipping them a lab in a suitcase".  I worked extensively with Dan Poler on this project, over the course of a couple of years.

9. Hollywood (December 2014)

Perhaps the highlight of my professional career came in October of 2016.  Watching Saturday Night Live with my wife Kim, we were laughing at a skit that poked fun at another of my favorite shows, Mr. Robot.  On the computer screen behind the main character, I clearly spotted Hollywood!  Hollywood is just a silly, fun little project I created on a plane one day, mostly to amuse Kim.  But now, it's been used in Saturday Night Live, NBC Dateline News, and an Experian TV commercial!  Even Jess Frazelle created a Docker container.

10. petname / golang-petname / python-petname (January 2015)

From "warty warthog" to "bionic beaver", we've always had a focus on fun, and user experience here in Ubuntu.  How hard is it to talk to your colleague about your Amazon EC2 instance, "i-83ab39f93e"?  Or your container "adfxkenw"?  We set out to make something a little more user-friendly with our "petnames".  Petnames are randomly generated "adjective-animal" names, which are easy to pronounce, spell, and remember.  I curated and created libraries that are easily usable in Shell, Golang, and Python.  With the help of colleagues like Stephane Graber and Andres Rodriguez, we now use these in many places in the Ubuntu ecosystem, such as LXD and MAAS.

If you've read this post, thank you for indulging me in a nostalgic little trip down memory lane!  I've had an amazing time designing, implementing, creating, and innovating with some of the most amazing people in the entire technology industry.  And here's to a productive, fun future!

Cheers,
:-Dustin

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 alpha 1 and python-libmaas 0.6.0 have now been released and are available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 alpha 1 is available in the Bionic -proposed archive or in the following PPA:
ppa:maas/next
 
Python-libmaas Availability
Libmaas is available in the Ubuntu Bionic archive or you can download the source from:

MAAS 2.4.0 (alpha1)

Important announcements

Dependency on tgt (iSCSI) has now been dropped

Starting from MAAS 2.3, the way MAAS runs ephemeral environments and performs deployments was changed away from using iSCSI. Instead, we introduced the ability to do the same using a squashfs image. With that, we completely removed the requirement for tgt, but we didn’t drop the dependency in 2.3. As of 2.4, however, tgt has now been completely removed.

Dependency on apache2 has now been dropped in the debian packages

Starting with MAAS 2.0, the UI has been made available on port 5240 and the use of port 80 was deprecated. However, as a mechanism to avoid breaking users upgrading from the previous LTS, MAAS continued to depend on apache2 to provide a reverse proxy allowing users to connect via port 80.

The MAAS snap, however, changed that behavior, no longer providing access to MAAS via port 80. In order to keep MAAS consistent with the snap, starting from MAAS 2.4 the debian package no longer depends on apache2 to provide a reverse proxy capability from port 80.

Python libmaas (0.6.0) now available in the Ubuntu Archive

I’m happy to announce that the new MAAS Client Library is now available in the Ubuntu Archives for Bionic. Libmaas is an asyncio based client library that provides a nice interface to interact with MAAS. More details below.

New Features & Improvements

Machine Locking

MAAS now adds the ability to lock machines, which prevents the user from performing actions on machines that could change their state. This gives MAAS a mechanism to prevent potentially catastrophic actions, such as mistakenly powering off machines or mistakenly releasing machines and bringing workloads down.
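
As a sketch of how this looks from the CLI (the profile name and system ID are placeholders, and the exact operation names should be checked against the 2.4 API documentation):

# Lock a machine against state-changing actions, then unlock it.
maas admin machine lock <system-id>
maas admin machine unlock <system-id>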

Audit logging

MAAS 2.4 now allows administrators to audit users’ actions, with the introduction of audit logging. The audit logs are available to administrators via the MAAS CLI/API, giving them a centralized location to access these logs.

Documentation is in the process of being published. For raw access please refer to the following link:

https://github.com/CanonicalLtd/maas-docs/pull/766/commits/eb05fb5efa42ba850446a21ca0d55cf34ced2f5d
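
Until the documentation lands, a hedged sketch of retrieving audit logs over the CLI; this assumes audit entries are exposed through the events endpoint with an AUDIT level, which should be verified against the final docs:

# Query audit-level events (endpoint and level name assumed).
maas admin events query level=AUDIT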

Commissioning Harness – Supporting firmware upgrade and hardware specific scripts

The commissioning harness has been expanded with various improvements to help administrators write their own firmware upgrade and hardware-specific scripts. These improvements address many of the challenges administrators face when performing such tasks at scale. They include:

  • Ability to auto-select all the firmware upgrade/storage hardware changes (API only, UI will be available soon)

  • Ability to run scripts only for the hardware they are intended to run on.

  • Ability to reboot the machine while in the commissioning environment without disrupting the commissioning process.

This allows administrators to:

  • Create a hardware-specific script by declaring which machines it needs to run on, specifying the PCI ID, modalias, vendor, or model of the machine or device.

  • Create firmware upgrade scripts that require a reboot before the machine finishes the commissioning process, by describing this in the script’s metadata (see the sketch after this list).

  • Define where the script can obtain proprietary firmware and/or proprietary tools needed to perform the required operations.
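
To make this concrete, here is a hedged sketch of a commissioning script carrying such metadata in its header. The metadata block format and the field names (for_hardware, may_reboot) are assumptions based on the description above; verify them against the MAAS 2.4 documentation before use:

#!/bin/bash
# --- Start MAAS 1.0 script metadata ---
# name: 50-upgrade-hba-firmware
# title: Upgrade HBA firmware
# description: Flash vendor firmware on a specific HBA model.
# script_type: commissioning
# for_hardware: pci:8086:1521
# may_reboot: True
# --- End MAAS 1.0 script metadata ---

echo "Upgrading HBA firmware..."
# vendor-flash-tool --firmware fw.bin   # placeholder for the proprietary tool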

Minor improvements – Gather information about BIOS & firmware

MAAS now gathers more information about the underlying system, such as the Model, Serial, BIOS and firmware information of a machine (where available). It also gathers the information for storage devices as well as network interfaces.

MAAS Client Library (python-libmaas)

New upstream release – 0.6.0

A new upstream release is now available in the Ubuntu Archive for Bionic. The new release includes the following changes:

  • Add/read/update/delete storage devices attached to machines.

  • Configure partitions and mount points

  • Configure Bcache

  • Configure RAID

  • Configure LVM

Known issues & workarounds

LP: #1748712  – 2.4.0a1 upgrade failed with old node event data

It has been reported that an upgrade to MAAS 2.4.0a1 failed due to old data from a non-existent node being stored in the database. This could have been caused by an older development version of MAAS leaving an entry in the node event table. A workaround is provided in the bug report.

If you hit this issue, please update the bug report immediately so MAAS developers can investigate.

Bug fixes

Please refer to the following for all bug fixes in this release.

https://launchpad.net/maas/+milestone/2.4.0alpha1

Read more
K. Tsakalozos

In this post we will see how to automate the deployment of an ASP.NET Core application on an on-prem Kubernetes cluster. We will base our work on the excellent blog “Deploying ASP.NET Core apps on App Engine” by Mete Atamel. Our contribution is a) showing how to target public as well as private clouds for Kubernetes deployments, and b) automating the delivery of your software through a basic CI/CD pipeline based on Jenkins.

What will we be using

  • An Ubuntu 16.04 machine
  • git command-line client (installed by sudo apt install git)
  • Internet access
  • $0, yes this is going to be a “look how much you can do for free” kind of blog post

Quick outline

  • Create an ASP.NET Core application on the Ubuntu machine and push the code to GitHub.
  • Package the application in a Docker container and upload it to Docker Hub
  • Spin up a Kubernetes cluster using the Canonical distribution. Here we will deploy the cluster locally, but you can use any private or public cloud you might have access to.
  • Deploy your application on the Kubernetes cluster and expose it.
  • Deploy Jenkins next to Kubernetes and automate the delivery of your app.

Let’s not waste any time; we have a long way ahead of us.

ASP.NET Core applications on Linux

Installing .NET on Ubuntu 16.04 is as simple as adding a repository and getting dotnet-sdk-2.1.4:

$ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg 
$ sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
$ sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
$ sudo apt-get install apt-transport-https
$ sudo apt-get update
$ sudo apt-get install dotnet-sdk-2.1.4

We can now create our application. It is going to be the template razor application as described in Mete’s codelab and we will not even bother to change the application’s name!

$ mkdir -p ~/workspace/dotnet
$ cd ~/workspace/dotnet
$ dotnet new razor -o HelloWorldAspNetCore
$ cd HelloWorldAspNetCore
$ dotnet run

You should see the application on your browser at http://localhost:5000

Our code will be on a public github repository so go ahead and create an account if you do not already have one (https://github.com). Click the “New repository” button to create a public repository named HelloWorldAspNetCore.

Creating a repository for our code.

Let’s add a .gitignore file at the root of our project and push our code:

$ cd ~/workspace/dotnet/HelloWorldAspNetCore
$ wget https://raw.githubusercontent.com/OmniSharp/generator-aspnet/master/templates/gitignore.txt
$ mv gitignore.txt .gitignore
$ git init
$ git add .
$ git commit -m "Initial commit"
$ git remote add origin https://github.com/ktsakalozos/HelloWorldAspNetCore.git
$ git push -u origin master

You can now relax! Your code is safe with github.

Package your application in a Docker container

Installing Docker requires adding the respective repository and installing docker-ce:

$ sudo apt-get install apt-transport-https ca-certificates curl \
software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce
$ sudo usermod -a -G docker $USER
$ newgrp docker

While we are in the process of setting up docker we might as well register with Docker Hub and create a repository to store our images. Go to https://hub.docker.com and create an account. I will be here waiting :).

After logging in to Docker Hub click on the “Create Repository” button and create a new public repository. That is where we will be pushing our docker images. For the rest of this blog the docker user is “kjackal” and the repository is named “hello-dotnet”. This will make more sense to you shortly.

New docker repository form.

To package our application we first need to ask dotnet to compile our code and gather any dependencies into a folder for later deployment. This is done with:

$ dotnet publish -c Release

Our docker container should package everything under bin/Release/netcoreapp2.0/publish/. We create a `Dockerfile` at the root of our project with the following contents:

$ cd ~/workspace/dotnet/HelloWorldAspNetCore/
$ cat ./Dockerfile
FROM gcr.io/google-appengine/aspnetcore:2.0
ADD ./bin/Release/netcoreapp2.0/publish/ /app
ENV ASPNETCORE_URLS=http://*:${PORT}
WORKDIR /app
ENTRYPOINT [ "dotnet", "HelloWorldAspNetCore.dll"]

Building the container and testing that it works:

$ docker build -t kjackal/hello-dotnet:v1 .
$ docker run -p 8080:8080 kjackal/hello-dotnet:v1

You should see the output at http://localhost:8080.

It is time to push the first version to Docker Hub:

$ docker login
$ docker push kjackal/hello-dotnet:v1

The Dockerfile should be part of the code so we commit it and push it to the git repository:

$ git add Dockerfile
$ git commit -m "Adding dockerfile"
$ git push origin master

Deploy a Kubernetes Cluster

For the on-prem Kubernetes deployment we will go with Canonical’s solution. The reason is (apart from me being biased) that Canonical offers a seamless and effortless transition from a toy deployment running on your laptop to a full blown production grade Kubernetes deployed on private or public clouds, or even on bare metal.

In this blog we show how to deploy Kubernetes and Jenkins on your localhost (laptop/desktop) — just make sure you have at least 8GB of RAM. We first need to have LXD running. LXD is a really powerful container technology based on the same underpinnings as Docker. Contrary to Docker, however, LXD containers more closely resemble virtual machines (VMs). Let’s simplify things and assume from now on that LXD containers are VMs that boot instantly and have no performance overhead!

In the following snippet we install LXD. While initialising (/snap/bin/lxd init), go with the defaults, except do not enable IPv6. When asked “What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]?”, reply with “none”.

$ sudo snap install lxd
$ sudo usermod -a -G lxd $USER
$ newgrp lxd
$ /snap/bin/lxd init

At this point we can use either Juju or Conjure-up to deploy Kubernetes. Conjure-up is essentially a wizard sitting on-top of Juju.

As already mentioned by Tim installing Kubernetes with conjure-up is as simple as:

$ sudo snap install conjure-up --classic
$ conjure-up

Canonical Kubernetes comes in two flavours:

  1. kubernetes-core is a cut down version installed in two machines, in our case two LXD containers running on our localhost.
  2. canonical-kubernetes is the full production grade deployment with features such as HA and monitoring.

We will go for kubernetes-core; on the next screen select localhost as the cloud provider. Follow the wizard’s steps till the end and wait for the deployment to finish. For a headless deployment, you can run conjure-up kubernetes-core localhost.

To review the status of our deployment have a look at:

$ juju status

To get into one of the LXD containers/machines, we use juju ssh; for example:

$ juju ssh kubernetes-master/0

Inside kubernetes-master under /home/ubuntu you will find a config file you can use for accessing your cluster. We can fetch that file with:

$ juju scp kubernetes-master/0:config .

Conjure-up has already copied the Kubernetes config locally and installed kubectl for us. How nice!
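
Before moving on, it is worth a quick sanity check that kubectl can reach the cluster (assuming conjure-up placed the config under ~/.kube/config):

$ kubectl get nodes     # the worker(s) should report Ready
$ kubectl cluster-info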

Automate the deployment process, CI/CD

We will be showing a few Jenkins jobs to automate the process of building, packaging and deploying our application. The intention here is to show everything that happens under the hood and not hide behind a flashy UI.

First things first, we need a Jenkins machine.

$ juju deploy jenkins

We deploy Jenkins next to our Kubernetes cluster. This will take some time; you can check the progress of the deployment with juju status.

Next we need to set a password for Jenkins and expose it so we can access its UI on port 8080. Exposing Jenkins is not needed in the localhost deployment, which uses LXD containers, but we show it here for completeness.

$ juju config jenkins password='your_secure_password'
$ juju expose jenkins

Before we start crafting our jobs we need to configure Jenkins a bit more. I can tell you beforehand that our jobs need to run sudo without being asked for a password. The easiest way to achieve that is to edit /etc/sudoers on the Jenkins machine. Here is how we append a line to the sudoers file with Juju:

$ juju run --unit jenkins/0 -- 'sudo echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'

We also know that our Jenkins jobs need to talk to Kubernetes. To this end, Jenkins will need the kubeconfig file. We take the file from the kubernetes-master and place it on our Jenkins machine under /var/tmp:

$ juju scp kubernetes-master/0:config .
$ juju scp config jenkins/0:/var/tmp/

Last part of the configuration, I promise! We know our application will be exposed using Kubernetes NodePort on port 31576. We need to make sure there is no firewall blocking that port and requests can reach it:

$ juju run --application kubernetes-worker -- open-port 31576

We are now ready to create our three main jobs:

Three jobs we will be creating
  • The first Jenkins job is “Install dependencies”. This job is just a shell script installing all the software packages needed to a) talk to Kubernetes, b) build our ASP.NET application and c) package everything into a docker container. Place the following in a shell script job and run it once:
echo "Installing kubectl"
sudo snap install kubectl --classic
echo "Installing dotnet"
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-get install apt-transport-https -y
sudo apt-get update
sudo apt-get install dotnet-sdk-2.1.4 -y
echo "Installing docker"
sudo apt-get install apt-transport-https ca-certificates curl \
software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install docker-ce -y
  • The second Jenkins job bootstraps the service in Kubernetes. It creates a deployment using the v1 image we created above and makes sure all operations are recorded (--record param). This deployment is exposed using NodePort 31576. Place the following in a Jenkins job and run it once; remember to update the docker user name (kjackal):
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config run hello-dotnet \
--image=kjackal/hello-dotnet:v1 --port=8080 --record
echo "apiVersion: v1
kind: Service
metadata:
name: hello-dotnet
spec:
type: NodePort
ports:
- port: 8080
nodePort: 31576
name: http
selector:
run: hello-dotnet" > /tmp/expose.yaml
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config apply -f /tmp/expose.yaml

Use juju status to find the IP of a kubernetes worker and open a browser at http://<kubernetes-worker-ip>:31576. Your application is served from Kubernetes! But we are not done yet.

  • Let’s create a third job (“Build and release”) that pulls your code from GitHub, compiles it, puts it in a container and deploys that container. Replace the repository and the docker username in the following snippet, then create a Jenkins job:
rm -rf HelloWorldAspNetCore
git clone https://github.com/ktsakalozos/HelloWorldAspNetCore.git
cd HelloWorldAspNetCore
dotnet publish -c Release
sudo docker login -u kjackal -p ${DOCKER_PASS}
sudo docker build -t kjackal/hello-dotnet:${DOCKER_TAG} .
sudo docker push kjackal/hello-dotnet:${DOCKER_TAG}
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config set image deployment/hello-dotnet hello-dotnet=kjackal/hello-dotnet:${DOCKER_TAG}

Two parameters are needed: ${DOCKER_TAG} is a string, and ${DOCKER_PASS} holds the docker password of the user (kjackal in this case). You have to tick the checkbox indicating this is a parametrized job and add the two parameters. We are ready! Trigger the job and wait for it to finish. Your code should find its way to our Kubernetes cluster.

You do not believe me? Have a look at the rollout history of our deployment.

$ kubectl rollout history deployment/hello-dotnet

Release from GitHub

Everything looks great so far. Every time we want to release, we log in to Jenkins and trigger the “Build and release” job…. Let’s try something different: let’s trigger a release from our code by creating a tag.

Create a “Release from Github” job on Jenkins and have it running periodically every 5 minutes:

Trigger job every 5 minutes

We want this job to look for new tags and, if it detects a new one, perform the usual compile, package, deploy cycle. Here is what one such job looks like:

#!/bin/bash
REPO="https://github.com/ktsakalozos/HelloWorldAspNetCore.git"
rm -rf ./HelloWorldAspNetCore
git clone $REPO
cd HelloWorldAspNetCore
# Initialise the tags list on the first run
if [ ! -f /var/tmp/known-tags ]; then
  git tag > /var/tmp/known-tags
  echo "Initialising. List of preexisting git tags:"
  cat /var/tmp/known-tags
  exit 1
fi
mv /var/tmp/known-tags /var/tmp/know-tags.old
git tag > /var/tmp/known-tags
diff /var/tmp/known-tags /var/tmp/know-tags.old
if [ $? == '0' ]; then
  echo "No new git tags detected."
  exit 2
fi
# We have new tags
last_tag=$(grep -v -f /var/tmp/know-tags.old /var/tmp/known-tags | tail -n 1)
git checkout tags/${last_tag}
echo "Building ${last_tag}"
dotnet publish -c Release
sudo docker login -u kjackal -p <replace_with_docker_password>
sudo docker build -t kjackal/hello-dotnet:${last_tag} .
sudo docker push kjackal/hello-dotnet:${last_tag}
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config set image deployment/hello-dotnet hello-dotnet=kjackal/hello-dotnet:${last_tag}

Make sure you run this job once so it gets initialised. Afterwards, every 5 minutes, this job will look at the available tags and fail if no new tags are present.

Let’s create a new tag/release now. Go to your GitHub repository, click releases (as shown below), and fill in the release form.

Here is where you find the releases you have.
Release form on Github

Within 5 minutes your release will reach Kubernetes! Without you needing to login to Jenkins. Automagically!

A few points to note:

  1. There might be Jenkins plugins for this. But we said we will be looking at what is under the hood without hiding behind a GUI.
  2. You should place your Jenkins jobs on GitHub along with your code.

Where to go from here

So far you have seen the parts of a basic, yet fully functional, CI/CD pipeline that delivers a .NET application onto any Kubernetes cluster. Each of the steps shown above is subject to improvements and tailoring based on your needs.

  • The ASP.NET application would normally have automated tests. You would want to run these tests in your CI. Travis is a great tool for this purpose, and depending on the size and nature of your project it could be free. Alternatively, you could set up Jenkins to run these tests as often as you please and report back the results.
  • Look into Helm if you intend to distribute your application instead of offering it as a service hosted by you.
  • Experiment with release strategies and find the one that best suits your needs. Make sure you read through Kubernetes deployment strategies.
  • You could consider using the auto scaling features of Kubernetes.
  • The Kubernetes cluster here is on LXD containers. You should use a cloud, either private (e.g. OpenStack) or public. With Conjure-up and Juju the cluster deployment process remains the same regardless of the targeted cloud. You have no excuse.
  • Finally, as you move to a cloud make sure you deploy the Canonical Distribution of Kubernetes instead of kubernetes-core. You will get a more robust deployment with HA features, logging and monitoring.

Resources


Automated Delivery of ASP.NET Core Apps on On-Prem Kubernetes was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more
Dustin Kirkland

  • To date, we've shaved the Bionic (18.04 LTS) minimal images down by over 53%, since Ubuntu 14.04 LTS, and trimmed nearly 100 packages and thousands of files.
  • Feedback welcome here: https://ubu.one/imgSurvey
In last year's Ask HN HackerNews post, "Ask HN: What do you want to see in Ubuntu 17.10?", and the subsequent treatment of the data, we noticed a recurring request for "lighter, smaller, more minimal" Ubuntu images.

This is particularly useful for container images (Docker, LXD, Kubernetes, etc.), embedded device environments, and anywhere a developer wants to bootstrap an Ubuntu system from the smallest possible starting point.  Smaller images generally:
  • are subject to fewer security vulnerabilities and subsequent updates
  • reduce overall network bandwidth consumption
  • and require less on disk storage
First, a definition...
"The Ubuntu Minimal Image is the smallest base upon which a user can apt install any package in the Ubuntu archive."
By design, Ubuntu Minimal Images specifically lack the creature comforts, user interfaces and user design experience that have come to define the Ubuntu Desktop and Ubuntu Cloud images.

Read more
Dustin Kirkland


I'm the proud owner of a new Dell XPS 13 Developer Edition (9360) laptop, pre-loaded from the Dell factory with Ubuntu 16.04 LTS Desktop.

Kudos to the Dell and the Canonical teams that have engineered a truly remarkable developer desktop experience.  You should also check out the post from Dell's senior architect behind the XPS 13, Barton George.

As it happens, I'm also the proud owner of a long loved, heavily used, 1st Generation Dell XPS 13 Developer Edition laptop :-)  See this post from May 7, 2012.  You'll be happy to know that machine is still going strong.  It's now my wife's daily driver.  And I use it almost every day, for any and all hacking that I do from the couch, after hours, after I leave the office ;-)

Now, this latest XPS edition is a real dream of a machine!

From a hardware perspective, this newer XPS 13 sports an Intel i7-7660U 2.5GHz processor and 16GB of memory.  While that's mildly exciting to me (as I've long used i7's and 16GB), here's what I am excited about...

The 500GB NVMe storage and a whopping 1239 MB/sec I/O throughput!

kirkland@xps13:~$ sudo hdparm -tT /dev/nvme0n1
/dev/nvme0n1:
Timing cached reads: 25230 MB in 2.00 seconds = 12627.16 MB/sec
Timing buffered disk reads: 3718 MB in 3.00 seconds = 1239.08 MB/sec

And on top of that, this is my first QHD+ touch screen laptop display, sporting a magnificent 3200x1800 resolution.  The graphics are nothing short of spectacular.  Here's nearly 4K of Hollywood hard "at work" :-)


The keyboard is super comfortable.  I like it a bit better than the 1st generation.  Unlike your Apple friends, we still have our F-keys, which is important to me as a Byobu user :-)  The placement of the PgUp, PgDn, Home, and End keys (as Fn + Up/Down/Left/Right) takes a while to get used to.


The speakers are decent for a laptop, and the microphone is excellent.  The webcam is placed in an odd location (lower left of the screen), but it has quite nice resolution and focus quality.


And Bluetooth and WiFi, well, they "just work".  I got 98.2 Mbits/sec of throughput over WiFi.

kirkland@xps:~$ iperf -c 10.0.0.45
------------------------------------------------------------
Client connecting to 10.0.0.45, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.149 port 40568 connected with 10.0.0.45 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.1 sec 118 MBytes 98.2 Mbits/sec

There's no external display port, so you'll need something like this USB-C-to-HDMI adapter to project to a TV or monitor.


There's 1x USB-C port, 2x USB-3 ports, and an SD-Card reader.


One of the USB-3 ports can be used to charge your phone or other devices, even while your laptop is suspended.  I use this all the time, to keep my phone topped up while I'm aboard planes, trains, and cars.  To do so, you'll need to enable "USB PowerShare" in the BIOS.  Here's an article from Dell's KnowledgeBase explaining how.


Honestly, I have only one complaint...  And that's that there is no Trackstick mouse (which is available on some Dell models).  I'm not a huge fan of the Touchpad.  It's too sensitive, and my palms are always touching it inadvertently.  So I need to use an external mouse to be effective.  I'll continue to provide this feedback to the Dell team, in the hopes that one day I'll have my perfect developer laptop!  Otherwise, this machine is a beauty.  I'm sure you'll love it too.

Cheers,
Dustin

Read more
Dustin Kirkland


For up-to-date patch, package, and USN links, please refer to: https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown

This is cross-posted on Canonical's official Ubuntu Insights blog:
https://insights.ubuntu.com/2018/01/04/ubuntu-updates-for-the-meltdown-spectre-vulnerabilities/


Unfortunately, you’ve probably already read about one of the most widespread security issues in modern computing history -- colloquially known as “Meltdown” (CVE-2017-5754) and “Spectre” (CVE-2017-5753 and CVE-2017-5715) -- affecting practically every computer built in the last 10 years, running any operating system. That includes Ubuntu.

I say “unfortunately”, in part because there was a coordinated release date of January 9, 2018, agreed upon by essentially every operating system, hardware, and cloud vendor in the world. By design, operating system updates would be available at the same time as the public disclosure of the security vulnerability. While it happens rarely, this is an industry-standard best practice, and it has broken down in this case.

At its heart, this vulnerability is a CPU hardware architecture design issue. But there are billions of affected hardware devices, and replacing CPUs is simply unreasonable. As a result, operating system kernels -- Windows, MacOS, Linux, and many others -- are being patched to mitigate the critical security vulnerability.

Canonical engineers have been working on this since we were made aware under the embargoed disclosure (November 2017) and have worked through the Christmas and New Year’s holidays, testing and integrating an incredibly complex patch set into a broad set of Ubuntu kernels and CPU architectures.

Ubuntu users of the 64-bit x86 architecture (aka, amd64) can expect updated kernels by the original January 9, 2018 coordinated release date, and sooner if possible. Updates will be available for:

  • Ubuntu 17.10 (Artful) -- Linux 4.13 HWE
  • Ubuntu 16.04 LTS (Xenial) -- Linux 4.4 (and 4.4 HWE)
  • Ubuntu 14.04 LTS (Trusty) -- Linux 3.13
  • Ubuntu 12.04 ESM (Precise) -- Linux 3.2
    • Note that an Ubuntu Advantage license is required for the 12.04 ESM kernel update, as Ubuntu 12.04 LTS is past its end-of-life
Ubuntu 18.04 LTS (Bionic) will release in April of 2018, and will ship a 4.15 kernel, which includes the KPTI patchset as integrated upstream.

Ubuntu optimized kernels for the Amazon, Google, and Microsoft public clouds are also covered by these updates, as well as the rest of Canonical's Certified Public Clouds including Oracle, OVH, Rackspace, IBM Cloud, Joyent, and Dimension Data.

These kernel fixes will not be Livepatch-able. The source code changes required to address this problem comprise hundreds of independent patches, touching hundreds of files and thousands of lines of code. The sheer complexity of this patchset is not compatible with the Linux kernel Livepatch mechanism. An update and a reboot will be required to activate this update.
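
In practice, applying the fix is an ordinary package upgrade followed by a reboot, along these lines:

sudo apt update
sudo apt upgrade    # pulls in the patched kernel packages
sudo reboot         # required for the new kernel to take effect
uname -r            # confirm the running kernel version afterwards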

Furthermore, you can expect Ubuntu security updates for a number of other related packages, including CPU microcode, GCC and QEMU in the coming days.

We don't have a performance analysis to share at this time, but please do stay tuned here, as we'll follow up with that as soon as possible.

Thanks,
@DustinKirkland
VP of Product
Canonical / Ubuntu

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 (final) is now available!
This new MAAS release introduces a set of exciting features and improvements to the overall user experience. It now becomes the focus of maintenance, as it fully replaces MAAS 2.2.
In order to provide sufficient notice, please be aware that 2.3.0 will replace MAAS 2.2 in the Ubuntu Archive in the coming weeks. In the meantime, MAAS 2.3 is available in a PPA and as a snap.
PPA’s Availability
MAAS 2.3.0 is currently available in ppa:maas/next for the coming week.
sudo add-apt-repository ppa:maas/next
sudo apt-get update
sudo apt-get install maas
Please be aware that MAAS 2.3 will replace MAAS 2.2 in ppa:maas/stable within a week.
Snap Availability
For those wanting to use the snap, you can obtain it from the stable channel:
sudo snap install maas --devmode --stable

MAAS 2.3.0 (final)

Important announcements

Machine network configuration now deferred to cloud-init.

Starting from MAAS 2.3, machine network configuration is now handled by cloud-init. In previous MAAS (and curtin) releases, the network configuration was performed by curtin during the installation process. In an effort to improve robustness, network configuration has now been consolidated in cloud-init. MAAS will continue to pass network configuration to curtin, which in turn, will delegate the configuration to cloud-init.

Ephemeral images over HTTP

As part of the effort to reduce dependencies and improve reliability, MAAS ephemeral (network boot) images are no longer loaded using iSCSI (tgt). By default, the ephemeral images are now obtained using HTTP requests to the rack controller.

After upgrading to MAAS 2.3, please ensure you have the latest available images. For more information please refer to the section below (New features & improvements).

Advanced network configuration for CentOS & Windows

MAAS 2.3 now supports the ability to perform network configuration for CentOS and Windows. The network configuration is performed via cloud-init. MAAS CentOS images now use the latest available version of cloud-init that includes these features.

New features & improvements

CentOS network configuration

MAAS can now perform machine network configuration for CentOS 6 and 7, providing networking feature parity with Ubuntu for those operating systems. The following can now be configured for MAAS deployed CentOS images:

  • Bonds, VLAN and bridge interfaces.
  • Static network configuration.

Our thanks to the cloud-init team for improving the network configuration support for CentOS.

Windows network configuration

MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows deployments. This uses the native NetLBFO in Windows 2008+. Contact us for more information (https://maas.io/contact-us).

Improved Hardware Testing

MAAS 2.3 introduces a new and improved hardware testing framework that significantly improves the granularity and provision of hardware testing feedback. These improvements include:

  • An improved testing framework that allows MAAS to run each component individually. This allows MAAS to run tests against storage devices for example, and capture results individually.
  • The ability to describe custom hardware tests with a YAML definition (a sketch of such a definition follows this list):
    • This provides MAAS with information about the tests themselves, such as script name, description, required packages, and other metadata about what information the script will gather, all of which MAAS uses to render the results in the UI.
    • Determines whether the test supports a parameter, such as storage, allowing the test to be run against individual storage devices.
    • Provides the ability to run tests in parallel by setting this in the YAML definition.
  • Capture performance metrics for tests that can provide it.
    • CPU performance tests now offer a new ‘7z’ test, providing metrics.
    • Storage performance tests now include a new ‘fio’ test providing metrics.
    • Storage test ‘badblocks’ has been improved to provide the number of badblocks found as a metric.
  • The ability to override a machine that has been marked ‘Failed testing’. This allows administrators to acknowledge that a machine is usable despite it having failed testing.
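
As a sketch of what such a YAML definition might look like, embedded at the top of a test script (the field names and the way the device argument is passed are illustrative assumptions; consult the MAAS documentation for the exact schema):

#!/bin/bash
# --- Start MAAS 1.0 script metadata ---
# name: storage_smoke_test
# title: Storage smoke test
# description: Quick sequential read test against one storage device.
# script_type: testing
# hardware_type: storage
# parallel: instance                     # run one instance per device
# packages: {apt: [hdparm]}
# parameters: {storage: {type: storage}}
# --- End MAAS 1.0 script metadata ---
# Illustrative: assume the device path arrives as the first argument.
hdparm -t "$1"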

Hardware testing improvements include the following UI changes:

  • Machine Listing page
    • Displays whether a test is pending, running or failed for the machine components (CPU, Memory or Storage).
    • Displays whether a test not related to CPU, Memory or Storage has failed.
    • Displays a warning when the machine has been overridden and has failed tests, but is in a ‘Ready’ or ‘Deployed’ state.
  • Machine Details page
    • Summary tab – Provides hardware testing information about the different components (CPU, Memory, Storage).
    • Hardware Tests/Commissioning tab – Provides an improved view of the latest test run, its runtime, as well as an improved view of previous results. It also adds more detailed information about specific tests, such as status, exit code, tags, runtime and logs/output (such as stdout and stderr).
    • Storage tab – Displays the status of specific disks, including whether a test is OK or failed after running hardware tests.

For more information please refer to https://docs.ubuntu.com/maas/2.3/en/nodes-hw-testing.

Network discovery & beaconing

In order to confirm network connectivity and aid in the discovery of VLANs, fabrics and subnets, MAAS 2.3 introduces network beaconing.

MAAS now sends out encrypted beacons, facilitating network discovery and monitoring. Beacons are sent using IPv4 and IPv6 multicast (and unicast) to UDP port 5240. When registering a new controller, MAAS uses the information gathered from the beaconing protocol to ensure that newly registered interfaces on each controller are associated with existing known networks in MAAS. This aids MAAS by providing better information on determining the network topology.

Using network beaconing, MAAS can better correlate which networks are connected to its controllers, even if interfaces on those controllers are not configured with IP addresses. Future uses for beaconing could include validation of networks from commissioning nodes, MTU verification, and a better user experience for registering new controllers.
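
If you are curious to see the beacons on the wire, a quick sketch (the interface name is a placeholder):

  # Watch for MAAS beacons on UDP port 5240 (replace eth0 with your interface)
  sudo tcpdump -n -i eth0 udp port 5240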

Upstream Proxy

MAAS 2.3 now allows an upstream HTTP proxy to be used while MAAS-deployed machines continue to use the caching proxy for the repositories. Doing so provides greater flexibility for closed environments, including:

  • Enabling MAAS itself to use a corporate proxy while allowing machines to continue to use the MAAS proxy.
  • Allowing machines that don’t have access to a corporate proxy to gain network access using the MAAS proxy.

Adding upstream proxy support also includes an improved configuration on the settings page. Please refer to Settings > Proxy for more details.
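
As a sketch, the same behaviour can also be driven from the CLI; the http_proxy and use_peer_proxy configuration keys below are assumptions matching the settings described above:

  maas <user> maas set-config name=http_proxy value=http://proxy.example.com:3128
  maas <user> maas set-config name=use_peer_proxy value=True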

Ephemeral Images over HTTP

Historically, MAAS has used ‘tgt’ to provide images over iSCSI for the ephemeral environments (e.g. commissioning, deployment, rescue mode, etc.). MAAS 2.3 changes the default behaviour by now providing images over HTTP.

These images are now downloaded directly by the initrd. The change means that the initrd loaded on PXE will contact the rack controller to download the image to load in the ephemeral environment. Support for using ‘tgt’ is being phased out in MAAS 2.3, and will no longer be supported from MAAS 2.4 onwards.

Users who would like to continue loading their ephemeral images via ‘tgt’ can disable HTTP boot with the following command:

  maas <user> maas set-config name=http_boot value=False

UI Improvements

Machines, Devices, Controllers

MAAS 2.3 introduces an improved design for the machines, devices and controllers detail pages, which includes the following changes:

  • “Summary” tab now only provides information about the specific node (machine, device or controller), organised across cards.
  • “Configuration” has been introduced, which includes all editable settings for the specific node (machine, device or controllers).
  • “Logs” consolidates the commissioning output and the installation log output.

Other UI improvements

Other UI improvements that have been made for MAAS 2.3 include:

  • Added a DHCP status column on the ‘Subnets’ tab.
  • Added architecture filters.
  • Updated VLAN and Space details page to no longer allow inline editing.
  • Updated VLAN page to include the IP ranges tables.
  • Zones page converted to AngularJS (away from YUI).
  • Added warnings when changing a Subnet’s mode (Unmanaged or Managed).
  • Renamed “Device Discovery” to “Network Discovery”.
  • Discovered devices where MAAS cannot determine the hostname now show the hostname as “unknown” and greyed out instead of using the MAC address manufacturer as the hostname.

Rack Controller Deployment

MAAS 2.3 can now automatically deploy rack controllers when deploying a machine. This is done by providing cloud-init user data; once a machine is deployed, cloud-init will install and configure the rack controller. Upon rack controller registration, MAAS detects that the machine is now a rack controller and transitions it automatically. To deploy a rack controller, use the API (or CLI), e.g.:

maas <user> machine deploy <system_id> install_rackd=True

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, the machine will require internet access to be able to install the MAAS snap.

Controller Versions & Notifications

MAAS now surfaces the version of each running controller and notifies users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading a multi-node MAAS cluster, such as an HA setup.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

API Improvements

The machines API endpoint now provides more information on the configured storage and provides additional output that includes volume_groups, raids, cache_sets, and bcaches fields.
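
For example, a quick sketch of inspecting these fields from the CLI (jq is used here only to filter the JSON output):

  maas <user> machine read <system_id> | jq '{volume_groups, raids, cache_sets, bcaches}'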

Django 1.11 support

MAAS 2.3 now supports the latest Django LTS version, Django 1.11. This allows MAAS to work with the newer Django version in Ubuntu Artful, which serves as preparation for the next Ubuntu LTS release.

  • Users running MAAS in Ubuntu Artful will use Django 1.11.
  • Users running MAAS in Ubuntu Xenial will continue to use Django 1.9.

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 RC1 has now been released and it is currently available in PPA and as a snap.
PPA Availability
For those running Ubuntu Xenial who would like to use RC1, please use the following PPA:
ppa:maas/next
Snap Availability
For those running from the snap, or who would like to test the snap, please use the Beta channel on the default track:
sudo snap install maas --devmode --beta

MAAS 2.3.0 (RC1)

Issues fixed in this release

For more information, visit: https://launchpad.net/maas/+milestone/2.3.0rc1

  • LP: #1727576    [2.3, HWTv2] When specific tests timesout there’s no log/output
  • LP: #1728300    [2.3, HWTv2] smartctl interval time checking is too short
  • LP: #1721887    [2.3, HWTv2] No way to override a machine that Failed Testing
  • LP: #1728302    [2.3, HWTv2, UI] Overall health status is redundant
  • LP: #1721827    [2.3, HWTv2] Logging when and why a machine failed testing (due to missing heartbeats/locked/hanged) not available in maas.log
  • LP: #1722665    [2.3, HWTv2] MAAS stores a limited amount of test results
  • LP: #1718779    [2.3] 00-maas-06-get-fruid-api-data fails to run on controller
  • LP: #1729857    [2.3, UI] Whitespace after checkbox on node listing page
  • LP: #1696122    [2.2] Failed to get virsh pod storage: cryptic message if no pools are defined
  • LP: #1716328    [2.2] VM creation with pod accepts the same hostname and push out the original VM
  • LP: #1718044    [2.2] Failed to process node status messages – twisted.internet.defer.QueueOverflow
  • LP: #1723944    [2.x, UI] Node auto-assigned address is not always shown while in rescue mode
  • LP: #1718776    [UI] Tooltips missing from the machines listing page
  • LP: #1724402    no output for failing test
  • LP: #1724627    00-maas-06-get-fruid-api-data fails relentlessly, causes commissioning to fail
  • LP: #1727962    Intermittent failure: TestDeviceHandler.test_list_num_queries_is_the_expected_number
  • LP: #1727360    Make partition size field optional in the API (CLI)
  • LP: #1418044    Avoid picking the wrong IP for MAAS_URL and DEFAULT_MAAS_URL
  • LP: #1729902    When commissioning don’t show message that user has overridden testing

Read more
K. Tsakalozos

I read the Hacker News post Heptio Contour and I thought “Cool! A project from our friends at Heptio, let’s see what they’ve got for us”. I won’t lie to you, at first I was a bit disappointed because there was no special mention of the Canonical Distribution of Kubernetes (CDK), but I understand, I am asking too much :). Let me cover this gap here.

Deploy CDK

To deploy CDK on Ubuntu, you just need to run:

> sudo snap install conjure-up --classic
> conjure-up kubernetes-core

Using ‘kubernetes-core’ will give you a minimal k8s cluster — perfect for our use case. For a larger, more robust cluster, try ‘canonical-kubernetes.’

Deploy Contour and Demo App

CDK already comes with an ingress solution so you need to disable it and deploy Contour. Here we also deploy the demo kuard application:

> juju config kubernetes-worker ingress=false
> kubectl --kubeconfig=/home/jackal/.kube/config apply -f http://j.hept.io/contour-deployment-norbac
> kubectl --kubeconfig=/home/jackal/.kube/config apply -f http://j.hept.io/contour-kuard-example

Get Your App

The Contour service will be on a port that (depending on the cloud you are targeting) might be closed, so you need to open it before accessing kuard:

> kubectl --kubeconfig=/home/jackal/.kube/config get service -n heptio-contour  contour -o wide                                                                                                        
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
contour   LoadBalancer   10.152.183.201   <pending>     80:31226/TCP   2m    app=contour
> juju run --application kubernetes-worker open-port 31226

And here it is running on AWS:

Instead of opening the ports to the outside world you could set the right DNS entries. However, this is specific to the cloud you are deploying to.

As the Contour README says “On AWS, create a CNAME record that maps the host in your Ingress object to the ELB address.”
“If you have an IP address instead (on GCE, for example), create an A record.”
For a localhost deployment, your ports should not be blocked, and you can fake a DNS entry by editing /etc/hosts, for example:
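
A hedged sketch (the hostname and IP address are placeholders):

> echo "10.0.0.4  kuard.example.com" | sudo tee -a /etc/hosts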

Thank you Heptio. Keep it up!

Resources

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 Beta 3 has now been released and it is currently available in PPA and as a snap.
PPA Availability
For those running Ubuntu Xenial who would like to use beta 3, please use the following PPA:
ppa:maas/next
Snap Availability
For those running from the snap, or who would like to test the snap, please use the Beta channel on the default track:
sudo snap install maas --devmode --beta

MAAS 2.3.0 (beta3)

Issues fixed in this release

For more information, visit: https://launchpad.net/maas/+milestone/2.3.0beta3
  • LP: #1727551    [2.3] Commissioning shows results from script that no longer exists
  • LP: #1696485    [2.2, HA] MAAS dhcp does not offer up multiple domains to search
  • LP: #1696661    [2.2, HA] MAAS should offer multiple DNS servers in HA case
  • LP: #1724235    [2.3, HWTv2] Aborted test should not show as failure
  • LP: #1721824    [2.3, HWTv2] Overall health status is missing
  • LP: #1727547    [2.3, HWTv2] Aborting testing goes back into the incorrect state
  • LP: #1722848    [2.3, HWTv2] Memtester test is not robust
  • LP: #1727568    [2.3, HWTv2, regression] Hardware Tests tab does not show what tests are running
  • LP: #1721268    [2.3, UI, HWTv2] Metrics table (e.g. from fio test) is not padded to MAAS’ standard
  • LP: #1721823    [2.3, UI, HWTv2] No way to surface a failed test that’s non CPU, Mem, Storage in machine listing page
  • LP: #1721886    [2.3, UI, HWTv2] Hardware Test tab doesn’t auto-update
  • LP: #1559353    [2.0a3] “Add Hardware > Chassis” cannot find off-subnet chassis BMCs
  • LP: #1705594    [2.2] rackd errors after fresh install
  • LP: #1718517    [2.3] Exceptions while processing commissioning output cause timeouts rather than being appropriately surfaced
  • LP: #1722406    [2.3] API allows “deploying” a machine that’s already deployed
  • LP: #1724677    [2.x] [critical] TFTP back-end failed right after node repeatedly requests same file via tftp
  • LP: #1726474    [2.x] psycopg2.IntegrityError: update or delete on table “maasserver_node” violates foreign key constraint
  • LP: #1727073    [2.3] rackd — 12% connected to region controllers.
  • LP: #1722671    [2.3, pod] Unable to delete a machine or a pod if the pod no longer exists
  • LP: #1680819    [2.x, UI] Tooltips go off screen
  • LP: #1725908    [2.x] deleting user with static ip mappings throws 500
  • LP: #1726865    [snap,2.3beta3] maas init uses the default gateway in the default region URL
  • LP: #1724181    maas-cli missing dependencies: netifaces, tempita
  • LP: #1724904    Changing PXE lease in DHCP snippets global sections does not work

Read more
admin

Hello MAASters!

It’s been two weeks since my last update, when the MAAS 2.3 beta 2 release was announced! Since then, the MAAS team has split its time between participating in internal events (and recovering from travel) and continuing to focus on the stabilization of MAAS 2.3. As such, I’m happy to provide the updates of the past couple of weeks.

MAAS 2.3
In the past couple of weeks the team has been focused on stabilizing MAAS 2.3 and has fixed the following issues:

  • LP: #1705594    [2.2, HA] rackd errors after fresh install
  • LP: #1722848    [2.3, HWTv2] Memtester test is not robust
  • LP: #1724677    [2.x] [critical] TFTP back-end failed right after node repeatedly requests same file via tftp
  • LP: #1727073    [2.3, HA] rackd — 12% connected to region controllers.
  • LP: #1696485    [2.2, HA] MAAS dhcp does not offer up multiple domains to search
  • LP: #1696661    [2.2, HA] MAAS should offer multiple DNS servers in HA case
  • LP: #1721268    [2.3, UI, HWTv2] Metrics table (e.g. from fio test) is not padded to MAAS’ standard
  • LP: #1721886    [2.3, UI, HWTv2] Hardware Test tab doesn’t auto-update
  • LP: #1722671    [2.3, pod] Unable to delete a machine or a pod if the pod no longer exists
  • LP: #1724181    maas-cli missing dependencies: netifaces, tempita
  • LP: #1724235    [2.3, HWTv2] Aborted test should not show as failure
  • LP: #1726865    [snap,2.3beta3] maas init uses the default gateway in the default region URL
  • LP: #1724904    Changing PXE lease in DHCP snippets global sections does not work
  • LP: #1680819    [2.x, UI] Tooltips go off screen

 

MAAS 2.4
I’m happy to announce that the roadmap for MAAS 2.4 has now been defined, and it is targeted for April 2018. However, I’ll create a bit of suspense: we will announce the upcoming features once MAAS 2.3 final has been released! Stay tuned!

Read more
K. Tsakalozos

When was the last time you had one of those “I wish I knew about that <time period> ago” moments? One of mine was when I saw MAAS (Metal as a Service). Back at the University, as a member of the MADgIK lab, I had to administer a couple of racks. These were the days of Xen paravirtualization, Eucalyptus and the GRID. Interesting times, but I now realise that if I had MAAS and Juju back then, I would have finished with that adventure a couple of years sooner!

Was told blogs with figures are more attractive. There you go!

From a high level, MAAS renders your physical cluster into an IaaS cloud that provisions physical instead of virtual machines. As you might expect, a PXE server serves the images you request, all managed through a good-looking web UI. MAAS will discover what is on your network and make it dead simple to set up VLANs and network spaces. Juju can use your MAAS deployment as if it were a cloud, which enables you to deploy OpenStack, Kubernetes and Big Data solutions directly onto your hardware. It is a really powerful combination that can save you time (in the range of years, in my case).
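
As a sketch of that last point, registering a MAAS deployment as a Juju cloud looks roughly like this (the cloud name is a placeholder, and add-cloud runs interactively):

juju add-cloud mymaas        # choose the "maas" cloud type and enter your MAAS URL
juju add-credential mymaas   # supply your MAAS API key
juju bootstrap mymaas        # from here on, Juju provisions machines through MAAS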

Let’s do it!

Do not waste any more time on this blog; go and try MAAS on QEMU instances on your own machine. This playlist will get you started. At the end you will have a (virtual) cluster on QEMU. One of the machines will act as the MAAS controller, and the rest will be the cluster nodes.

Your virtual cluster.
MAAS nodes

Next step is to use MAAS from Juju.

Just to show off :)

Conclusion

I am off to the shop to get more RAM. No wonder this is such a short blog.

Resources

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 Beta 2 has now been released and it is currently available in PPA and as a snap.
PPA Availability
For those running Ubuntu Xenial who would like to use beta 2, please use the following PPA:
ppa:maas/next
Snap Availability
For those running from the snap, or who would like to test the snap, please use the Beta channel on the default track:
sudo snap install maas --devmode --beta

MAAS 2.3.0 (beta 2)

Issues fixed in this release

https://launchpad.net/maas/+milestone/2.3.0beta2

  • LP: #1711760    [2.3] resolv.conf is not set (during commissioning or testing)
  • LP: #1721108    [2.3, UI, HWTv2] Machine details cards – Don’t show “see results” when no tests have been run on a machine
  • LP: #1721111    [2.3, UI, HWTv2] Machine details cards – Storage card doesn’t match CPU/Memory one
  • LP: #1721548    [2.3] Failure on controller refresh seem to be causing version to not get updated
  • LP: #1710092    [2.3, HWTv2] Hardware Tests have a short timeout
  • LP: #1721113    [2.3, UI, HWTv2] Machine details cards – Storage – If multiple disks, condense the card instead of showing all disks
  • LP: #1721524    [2.3, UI, HWTv2] When upgrading from older MAAS, Storage HW tests are not mapped to the disks
  • LP: #1721587    [2.3, UI, HWTv2] Commissioning logs (and those of v2 HW Tests) are not being shown
  • LP: #1719015    $TTL in zone definition is not updated
  • LP: #1721276    [2.3, UI, HWTv2] Hardware Test tab – Table alignment for the results doesn’t align with titles
  • LP: #1721525    [2.3, UI, HWTv2] Storage card on machine details page missing red bar on top if there are failed tests
  • LP: #1722589    syslog full of “topology hint” logs
  • LP: #1719353    [2.3a3, Machine listing] Improve the information presentation of the exact tasks MAAS is running when running hardware testing
  • LP: #1719361    [2.3 alpha 3, HWTv2] On machine listing page, remove success icons for components that passed the tests
  • LP: #1721105    [2.3, UI, HWTv2] Remove green success icon from Machine listing page
  • LP: #1721273    [2.3, UI, HWTv2] Storage section on Hardware Test tab does not describe each disk to match the design

Read more
admin

MAAS 2.3.0 (beta1)

New Features & Improvements

Hardware Testing

MAAS 2.3 beta overhauls and improves the visibility of hardware test results and information. This includes various changes across MAAS:

  • Machine Listing page
    • Surface progress and failures of hardware tests, actively showing when a test is pending, running, successful or failed.
  • Machine Details page
    • Summary tab – Provides hardware testing information about the different components (CPU, Memory, Storage).
    • Hardware Tests tab – A complete redesign of the Hardware Tests tab. It now shows a list of test results per component, and adds the ability to view more details about each test.
    • Storage tab – Ability to view test results per storage component.

UI Improvements

Machines, Devices, Controllers

MAAS 2.3 beta 1 introduces a new design for the node summary pages:

  • The “Summary” tab now only shows information about the machine, in a completely new design.
  • A “Settings” tab has been introduced, which includes the ability to edit the node.
  • The “Logs” tab now consolidates the commissioning output and the installation log output.

Other UI improvements

Other UI improvements that have been made for MAAS 2.3 beta 1 include:

  • Add a DHCP status column on the ‘Subnets’ tab.
  • Add architecture filters.
  • Update VLAN and Space details page to no longer allow inline editing.
  • Update VLAN page to include the IP ranges tables.
  • Convert the Zones page into AngularJS (away from YUI).
  • Add warnings when changing a Subnet’s mode (Unmanaged or Managed).

Rack Controller Deployment

MAAS 2.3 beta 1 adds the ability to deploy any machine as a rack controller. This is currently only available via the API.

API Improvements

MAAS 2.3 beta 1 adds the volume_groups, raids, cache_sets, and bcaches fields to the output of the machines API endpoint.

Known issues:

The following is a list of known UI issues affecting hardware testing:

Issues fixed in this release

https://launchpad.net/maas/+milestone/2.3.0beta1

  • #1711320    [2.3, UI] Can’t ‘Save changes’ and ‘Cancel’ on machine/device details page
  • #1696270    [2.3] Toggling Subnet from Managed to Unmanaged doesn’t warn the user that behavior changes
  • #1717287    maas-enlist doesn’t work when provided with serverurl with IPv6 address
  • #1718209    PXE configuration for dhcpv6 is wrong
  • #1718270    [2.3] MAAS improperly determines the version of some installs
  • #1718686    [2.3, master] Machine lists shows green checks on components even when no tests have been run
  • #1507712    cli: maas logout causes KeyError for other profiles
  • #1684085    [2.x, Accessibility] Inconsistent save states for fabric/subnet/vlan/space editing
  • #1718294    [packaging] dpkg-reconfigure for region controller refers to an incorrect network topology assumption

Read more