Canonical Voices

Posts tagged with 'ubuntu'

jdstrand

Finding your VMs and containers via DNS resolution so you can ssh into them can be tricky. I was talking with Stéphane Graber today about this and he reminded me of his excellent article: Easily ssh to your containers and VMs on Ubuntu 12.04.

These days, libvirt has the `virsh domifaddr` command and LXD has a slightly different way of finding the IP address.

Here is an updated `~/.ssh/config` that I’m now using (thank you Stéphane for the update for LXD):

Host *.lxd
    #User ubuntu
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(lxc list -c s4 $(echo %h | sed "s/\.lxd//g") | grep RUNNING | cut -d' ' -f4) %p
 
Host *.vm
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(virsh domifaddr $(echo %h | sed "s/\.vm//g") | awk -F'[ /]+' '{if (NR>2 && $5) print $5}') %p

You may want to uncomment `StrictHostKeyChecking` and `UserKnownHostsFile` depending on your environment (see `man ssh_config` for details).
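If you want to double-check what those ProxyCommand lookups resolve to, you can run them by hand (using the same example names as below):

$ lxc list -c s4 bar       # state and IPv4 address of the 'bar' container
$ virsh domifaddr foo      # interface addresses of the 'foo' VM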

With the above, I can ssh in with:

$ ssh foo.vm uptime
16:37:26 up 50 min, 0 users, load average: 0.00, 0.00, 0.00
$ ssh bar.lxd uptime
21:37:35 up 12:39, 2 users, load average: 0.55, 0.73, 0.66

Enjoy!


Filed under: canonical, ubuntu, ubuntu-server

Read more
admin

MAAS 2.3.0 (alpha3)

New Features & Improvements

Hardware Testing (backend only)

MAAS has now introduced an improved hardware testing framework. This new framework allows MAAS to test individual components of a single machine, as well as providing better feedback to the user for each of those tests. This feature has introduced:

  • Ability to define a custom testing script with a YAML definition – Each custom test can be defined with YAML that will provide information about the test. This information includes the script name, description, required packages, and other metadata about what information the script will gather. This information can then be displayed in the UI.

  • Ability to pass parameters – Adds the ability to pass specific parameters to the scripts. For example, in upcoming beta releases, users would be able to select which disks they want to test if they don’t want to test all disks.

  • Running tests individually – Improves how hardware tests are run per component. This allows MAAS to run tests against any individual component (such as a single disk).

  • Adding additional performance tests

    • Added a CPU performance test with 7z.

    • Added a storage performance test with fio.

Please note that individual results for each component are currently only available over the API. Upcoming beta releases will include various UI improvements that will better surface these new features and allow the user to interact with them.

Rack Controller Deployment in Whitebox Switches

MAAS now has the ability to install and configure a MAAS rack controller once a machine has been deployed. As of today, this feature is only available when MAAS detects the machine is a whitebox switch. As such, all MAAS certified whitebox switches will be deployed with a MAAS rack controller. Currently certified switches include the Wedge 100 and the Wedge 40.

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, this will require the machine to have access to the internet to be able to install the MAAS snap.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

UI – Controller Versions & Notifications

MAAS now surfaces the version of each running controller and notifies users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading a multi-node MAAS cluster, such as an HA setup.

Issues fixed in this release

  • #1702703    Cannot run maas-regiond without /bin/maas-rack
  • #1711414    [2.3, snap] Cannot delete a rack controller running from the snap
  • #1712450    [2.3] 500 error when uploading a new commissioning script
  • #1714273    [2.3, snap] Rack Controller from the snap fails to power manage on IPMI
  • #1715634    ‘tags machines’ takes 30+ seconds to respond with list of 9 nodes
  • #1676992    [2.2] Zesty ISO install fails on region controller due to postgresql not running
  • #1703035    MAAS should warn on version skew between controllers
  • #1708512    [2.3, UI] DNS and Description Labels misaligned on subnet details page
  • #1711700    [2.x] MAAS should avoid updating DNS if nothing changed
  • #1712422    [2.3] MAAS does not report form errors on script upload
  • #1712423    [2.3] 500 error when clicking the ‘Upload’ button with no script selected.
  • #1684094    [2.2.0rc2, UI, Subnets] Make the contextual menu language consistent across MAAS
  • #1688066    [2.2] VNC/SPICE graphical console for debugging purpose on libvirt pod created VMs
  • #1707850    [2.2] MAAS doesn’t report cloud-init failures post-deployment
  • #1711714    [2.3] cloud-init reporting not configured for deployed ubuntu core systems
  • #1681801    [2.2, UI] Device discovery – Tooltip misspelled
  • #1686246    [CLI help] set-storage-layout says Allocated when it should say Ready
  • #1621175    BMC acc setup during auto-enlistment fails on Huawei model RH1288 V3

For full details please visit:

https://launchpad.net/maas/+milestone/2.3.0alpha3

Read more
admin

Hello MAASters! This is the development summary for the past couple of weeks:

MAAS 2.3 (current development release)

  • Hardware Testing Phase 2
    • Added parameters form for script parameters validation.
    • Accept and validate results from nodes.
    • Added hardware testing 7zip CPU benchmarking builtin script.
    • WIP – ability to send parameters to test scripts and process results of individual components. (e.g. will provide the ability for users to select which disk they want to test, and capture results accordingly)
    • WIP – disk benchmark test via Fio.
  • Network beaconing & better network discovery
    • MAAS controllers now send out beacon advertisements every 30 seconds, regardless of whether or not any solicitations were received.
  • Switch Support
    • Backend changes to automatically detect switches (during commissioning) and make use of the new switch model.
    • Introduce base infrastructure for NOS drivers, similar to the power management one.
    • Install the Rack Controller when deploying a supported Switch (Wedge 40, Wedge 100)
    • UI – Add a switch listing tab behind a feature flag.
  • Minor UI improvements
    • The version of MAAS installed on each controller is now reported on the controller details page.
  • python-libmaas
    • Added ability to power on, power off, and query the power state of a machine.
    • Added PowerState enum to make it easy to check the current power state of a machine.
    • Added ability to reference the children and parent interfaces of an interface.
    • Added ability to reference the owner of a node.
    • Added base level `Node` object that `Machine`, `Device`, `RackController`, and `RegionController` extend from.
    • Added `as_machine`, `as_device`, `as_rack_controller`, and `as_region_controller` to the Node object. Allowing the ability to convert a `Node` into the type you need to perform an action on.
  • Bug fixes:
    • LP: #1676992 – force Postgresql restart on maas-region-controller installation.
    • LP: #1708512 – Fix DNS & Description misalignment
    • LP: #1711714 – Add cloud-init reporting for deployed Ubuntu Core systems
    • LP: #1684094 – Make context menu language consistent for IP ranges.
    • LP: #1686246 – Fix docstring for set-storage-layout operation
    • LP: #1681801 – Device discovery – Tooltip misspelled
    • LP: #1688066 – Add Spice graphical console to pod created VM’s
    • LP: #1711700 – Improve DNS reloading so it happens only when required.
    • LP: #1712423, #1712450, #1712422 – Properly handle a ScriptForm being sent an empty file.
    • LP: #1621175 – Generate password for BMC’s with non-spec compliant password policy
    • LP: #1711414 – Fix deleting a rack when it is installed via the snap
    • LP: #1702703 – Can’t run region controller without a rack controller installed.

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 Alpha 2 has now been released. It is currently available in Ubuntu Artful, in a PPA for Xenial, and as a snap.

PPA Availability

For those running Ubuntu Xenial who would like to use Alpha 2, please use the following PPA:

Snap Availability

For those running from the snap, or who would like to test the snap, please use the beta channel on the default track:

sudo snap install maas --devmode --beta

MAAS 2.3.0 (alpha2)

Important announcements

Advanced Network for CentOS & Windows

The MAAS team is happy to announce that MAAS 2.3 now supports the ability to perform network configuration for CentOS and Windows. The network configuration is performed via cloud-init. MAAS CentOS images now use the latest available version of cloud-init that includes these features.

New Features & Improvements

CentOS Networking support

MAAS can now perform machine network configuration for CentOS, giving CentOS networking feature parity with Ubuntu. The following can now be configured for MAAS deployed CentOS images:

  • Static network configuration.

  • Bonds, VLAN and bridge interfaces.

Thanks to the cloud-init team for improving network configuration support for CentOS.

Windows Networking support

MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows deployments. This uses the native NetLBFO in Windows 2008+. Contact us for more information (https://maas.io/contact-us).

Network Discovery & Beaconing

MAAS now sends out encrypted beacons to facilitate network discovery and monitoring. Beacons are sent using IPv4 and IPv6 multicast (and unicast) to UDP port 5240. When registering a new controller, MAAS uses the information gathered from the beaconing protocol to ensure that newly registered interfaces on each controller are associated with existing known networks in MAAS.

UI improvements

Minor UI improvements have been made:

  • Renamed “Device Discovery” to “Network Discovery”.

  • Discovered devices where MAAS cannot determine the hostname now show the hostname as “unknown” and greyed out instead of using the MAC address manufacturer as the hostname.

Issues fixed in this release

Issues fixed in this release are detailed at:

https://launchpad.net/maas/+milestone/2.3.0alpha1

Read more
Dustin Kirkland

Earlier this month, I spoke at ContainerDays, part of the excellent DevOpsDays series of conferences -- this one in lovely Portland, Oregon.

I gave a live demo of Kubernetes running directly on bare metal.  I was running it on an 11-node Ubuntu Orange Box -- but I used the exact same tools Canonical's world class consulting team uses to deploy Kubernetes onto racks of physical machines.
You see, the ability to run Kubernetes on bare metal, behind your firewall, is essential to the yin-yang duality of Cloud Native computing.  Sometimes, what you need is actually a Native Cloud.
Deploying Kubernetes into virtual machines in the cloud is rather easy and straightforward, with dozens of tools that can now handle it.

But there's only one tool today that can deploy the exact same Kubernetes to AWS, Azure, and GCE, as well as VMware, OpenStack, and bare metal machines.  That tool is conjure-up, which acts as a command line front end to several essential Ubuntu tools: MAAS, LXD, and Juju.
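If you want to try the same thing yourself, the basic flow looks roughly like this (a sketch, assuming conjure-up from the snap store; the spell name may differ depending on your conjure-up release):

$ sudo snap install conjure-up --classic
$ conjure-up canonical-kubernetes    # then pick MAAS, a public cloud, or localhost (LXD) as the target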

I don't know if the presentation was recorded, but I'm happy to share with you my slides for download, and embedded here below.  There are a few screenshots within that help convey the demo.




Cheers,
Dustin

Read more
admin

Hello MAASters! This is the development summary for the past couple of weeks:

MAAS 2.3 (current development release)

The team is preparing and testing the next official release, MAAS 2.3 alpha2. It is currently undergoing a heavy round of testing and will be announced separately at the beginning of the upcoming week. In the past three weeks, the team has:

  • Support for CentOS Network configuration
    We have completed the work to support CentOS Advanced Networking, which provides the ability for users to configure VLAN, bond and bridge interfaces, bringing it to feature parity with Ubuntu. This will be available in MAAS 2.3 alpha 2.
  • Support for Windows Network configuration
    MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows deployments. This uses the native NetLBFO in Windows 2008+. Contact us for more information [1].
  • Hardware Testing Phase 2

    • Testing scripts now define a type field that tells MAAS which component will be tested and where the resulting metrics will apply. This may be node, cpu, memory, or storage, and defaults to node.
    • Completed work to support the definition and parsing of a YAML-based description for custom test scripts. This allows the user to define the test’s title, description, and the metrics the test will output, which MAAS can then parse and eventually display in the UI/API.
  • Network beaconing & better network discovery

    • Beaconing is now fully functional for controller registration and interface updates!
    • When registering or updating a new controller (either the first standalone controller, or a secondary/HA controller), new interfaces that have been determined to be on an existing VLAN will not cause a new fabric to be created in MAAS.
  • Switch modeling
    • The basic database model for the new switching model has been implemented.
    • On-going progress of presenting switches in the node listing is under way.
    • Work is in-progress to allow MAAS to deploy a rack controller which will be utilized when deploying a new switch with MAAS.
  • Minor UI improvements
    • Renamed “Device Discovery” to “Network Discovery”.
    • Discovered devices where MAAS cannot determine the hostname now just show the hostname as “unknown” and grayed out instead of using the MAC address manufacturer as the hostname.
  • Bug fixes:
    • LP: #1704444 – MAAS API returns 500 internal server error instead of raising actual error.
    • LP: #1705501 – django warning on install
    • LP: #1707971 – MAAS becomes unstable after rack controller restarts
    • LP: #1708052 – Quick erase doesn’t remove md superblock
    • LP: #1710681 – Cannot delete an Ubuntu image, “Update Selection” is disabled

MAAS 2.2.2 Released in the Ubuntu Archive!

MAAS 2.2.2 has now also been released in the Ubuntu Archive. For more details on MAAS 2.2.2, please see [2].

 

[1]: https://maas.io/contact-us

[2]: https://lists.ubuntu.com/archives/maas-devel/2017-August/002663.html

Read more
admin

Hello MAASters! The MAAS development summaries are back!

Over the past three weeks the team has made good progress in three main areas: the development of 2.3, maintenance for 2.2, and our new and improved Python library (libmaas).

MAAS 2.3 (current development release)

The first official MAAS 2.3 release has been prepared. It is currently undergoing a heavy round of testing and will be announced separately once completed. In the past three weeks, the team has:

  • Completed Upstream Proxy UI
    • Improved the UI to better configure the different proxy modes.
    • Added the ability to configure an upstream proxy.
  • Network beaconing & better network discovery
  • Started Hardware Testing Phase 2
      • UX team has completed the initial wireframes and gathered feedback.
      • Started changes to collect and gather better test results.
  • Started Switch modeling
      • Started changes to support switch and switch port modeling.
  • Bug fixes
    • LP: #1703403 – regiond workers can use too many postgres connections
    • LP: #1651165 – Unable to change disk name using maas gui
    • LP: #1702690 – [2.2] Commissioning a machine prefers minimum kernel over commissioning global
    • LP: #1700802 – [2.x] maas cli allocate interfaces=<label>:ip=<ADDRESS> errors with Unknown interfaces constraint
    • LP: #1703713 – [2.3] Devices don’t have a link from the DNS page
    • LP: #1702976 – Cavium ThunderX lacks power settings after enlistment apparently due to missing kernel
    • LP: #1664822 – Enable IPMI over LAN if disabled
    • LP: #1703713 – Fix missing link on domain details page
    • LP: #1702669 – Add index on family(ip) for each StaticIPAddress to improve execution time of the maasserver_routable_pairs view.
    • LP: #1703845 – Set the re-check interval for rack to region RPC connections to the lowest value when a RPC connection is closed or lost.

MAAS 2.2 (current stable release)

  • Last week, MAAS 2.2 was SRU’d into the Ubuntu Archives and to our latest LTS release, Ubuntu Xenial, replacing the MAAS 2.1 series.
  • This week, a new MAAS 2.2 point release has also been prepared. It is currently undergoing heavy testing. Once testing is completed, it will be released in a separate announcement.

Libmaas

Last week, the team worked on increasing the library’s coverage of the MAAS API:

  • Added ability to create machines.
  • Added ability to commission machines.
  • Added ability to manage MAAS networking definitions, including subnets, fabrics, spaces, VLANs, IP ranges, static routes and DHCP.

Read more
Dustin Kirkland

Back in March, we asked the HackerNews community, “What do you want to see in Ubuntu 17.10?”: https://ubu.one/AskHN

A passionate discussion ensued, the results of which are distilled into this post: http://ubu.one/thankHN

In fact, you can check that link, http://ubu.one/thankHN, and see our progress so far this cycle. We already have beta code in 17.10 available for your testing for several of those:

And several others have excellent work in progress, and will be complete by 17.10:

In summary -- your feedback matters!  There are hundreds of engineers and designers working for *you* to continue making Ubuntu amazing!

Along with the switch from Unity to GNOME, we’re also reviewing some of the desktop applications we package and ship in Ubuntu.  We’re looking to crowdsource input on your favorite Linux applications across a broad set of classic desktop functionality.

We invite you to contribute by listing the applications you find most useful in Linux in order of preference. To help us parse your input, please copy and paste the following bullets with your preferred apps in Linux desktop environments.  You’re welcome to suggest multiple apps, please just order them prioritized (e.g. Web Browser: Firefox, Chrome, Chromium).  If some of your functionality has moved entirely to the web, please note that too (e.g. Email Client: Gmail web, Office Suite: Office360 web).  If the software isn’t free/open source, please note that (e.g. Music Player: Spotify client non-free).  If I’ve missed a category, please add it in the same format.  If your favorites aren’t packaged for Ubuntu yet, please let us know, as we’re creating hundreds of new snap packages for Ubuntu desktop applications, and we’re keen to learn what key snaps we’re missing.

  • Web Browser: ???
  • Email Client: ???
  • Terminal: ???
  • IDE: ???
  • File manager: ???
  • Basic Text Editor: ???
  • IRC/Messaging Client: ???
  • PDF Reader: ???
  • Office Suite: ???
  • Calendar: ???
  • Video Player: ???
  • Music Player: ???
  • Photo Viewer: ???
  • Screen recording: ???

In the interest of opening this survey as widely as possible, we’ve cross-posted this thread to HackerNews, Reddit, and Slashdot.  We very much look forward to another friendly, energetic, collaborative discussion.

Or, you can fill out the survey here: https://ubu.one/apps1804

Thank you!
On behalf of @Canonical and @Ubuntu

Read more
Dustin Kirkland


I met up with the excellent hosts of The Changelog podcast at OSCON in Austin a few weeks back, and joined them for a short segment.

That podcast recording is now live!  Enjoy!


The Changelog 256: Ubuntu Snaps and Bash on Windows Server with Dustin Kirkland
Listen on Changelog.com



Cheers,
Dustin

Read more
Leo Arias

I'm a Quality Assurance Engineer. A big part of my job is to find problems, then make sure that they are fixed and automated so they don't regress. If I do my job well, then our process will identify new and potential problems early without manual intervention from anybody on the team. It's like trying to automate myself, every day, until I'm no longer needed and have to jump to another project.

However, as we work on the project, it's unavoidable that many small manual tasks accumulate on my hands. This happens because I set up the continuous integration infrastructure, so I'm the one who knows the most about it and has the easiest access, or because I'm the one who requested access to the build farm, so I'm the one with the password, or because I configured the staging environment and I'm the only one who knows the details. This is a great way to achieve job security, but it doesn't lead us to higher quality. It's a job half done, and it's terribly boring to be a bottleneck and a silo of information about testing and the release process. All of these tasks should be shared by the whole team, as with all the other tasks in the project.

There are two problems. First, most of these tasks involve delicate credentials that shouldn't be freely shared with everybody. Second, even if the task itself is simple and quick to execute, it's not very simple to document how to set up the environment to be able to execute them, nor how to make sure that the right task is executed in the right moment.

Chatops is how I like to solve all of this. The idea is that every task that requires manual intervention is implemented in a script that can be executed by a bot. This bot joins the communication channel where the entire team is present, and it will execute the tasks and report on their results as a response to external events that happen somewhere in the project infrastructure, or as a response to the direct request of a team member in the channel. The credentials are kept safe; they only have to be shared with the bot, and the permissions can be handled with access control lists or membership to the channel. And the operational knowledge is shared with the whole team, because they are all listening in the same channel with the bot. This means that anybody can execute the tasks, and the bot assists them to make it simple.

In snapcraft we started writing our bot not so long ago. It's called snappy-m-o (Microbe Obliterator), and it's written in Python with errbot. We, of course, packaged it as a snap so we have automated delivery every time we change its source code, and the bot is also auto-updated on the server, so in the chat we are always interacting with the latest and greatest.

Let me show you how we started it, in case you want to get your own. But let's call this one Baymax, and let's make a virtual environment with errbot, to experiment.

drawing of the Baymax bot

$ mkdir -p ~/workspace/baymax
$ cd ~/workspace/baymax
$ sudo apt install python3-venv
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install errbot
$ errbot --init

The last command will initialize this bot with a super simple plugin, and will configure it to work in text mode. This means that the bot won't be listening on any channel; you can just interact with it through the command line (the ops, without the chat). Let's try it:

$ errbot
[...]
>>> !help
All commands
[...]
!tryme - Execute to check if Errbot responds to command.
[...]
>>> !tryme
It works !
>>> !shutdown --confirm

tryme is the command provided by the example plugin that errbot --init created. Take a look at the file plugins/err-example/example.py; errbot is just lovely. In order to define your own plugin you will just need a class that inherits from errbot.BotPlugin, and the commands are methods decorated with @errbot.botcmd. I won't dig into how to write plugins, because they have amazing documentation about Plugin development. You can also read the plugins we have in our snappy-m-o, one for triggering autopkgtests on GitHub pull requests, and the other for subscribing to the results of the pull request tests.
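Just to give you a flavour, a minimal plugin would look something like this (it lives under plugins/ next to a small .plug metadata file, like the example that errbot --init generated; the class and command names here are made up for illustration):

from errbot import BotPlugin, botcmd

class Baymax(BotPlugin):
    """Tiny example plugin for our Baymax bot."""

    @botcmd
    def hello(self, msg, args):
        """Reply with a greeting, so we can check the plugin is loaded."""
        return 'Hello! I am Baymax, your personal chat ops companion.'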

Let's change the config of Baymax to put it in an IRC chat:

$ pip install irc

And in the config.py file, set the following values:

BACKEND = 'IRC'
BOT_IDENTITY = {
    'nickname' : 'baymax-elopio',  # Nicknames need to be unique, so append your own.
                                   # Remember to replace 'elopio' with your nick everywhere
                                   # from now on.
    'server' : 'irc.freenode.net',
}
CHATROOM_PRESENCE = ('#snappy',)

Run it again with the errbot command, but this time join the #snappy channel on irc.freenode.net and write !tryme in there. It works! :)

screenshot of errbot on IRC

So, this is very simple, but let's package it now to start with the good practice of continuous delivery before it gets more complicated. As usual, it just requires a snapcraft.yaml file with all the packaging info and metadata:

name: baymax-elopio
version: '0.1-dev'
summary: A test bot with errbot.
description: Chat ops bot for my team.
grade: stable
confinement: strict

apps:
  baymax-elopio:
    command: env LC_ALL=C.UTF-8 errbot -c $SNAP/config.py
    plugs: [home, network, network-bind]

parts:
  errbot:
    plugin: python
    python-packages: [errbot, irc]
  baymax:
    source: .
    plugin: dump
    stage:
      - config.py
      - plugins
    after: [errbot]

And we need to change a few more values in config.py to make sure that the bot is relocatable, that we can run it in the isolated snap environment, and that we can add plugins after it has been installed:

import os

BOT_DATA_DIR = os.environ.get('SNAP_USER_DATA')
BOT_EXTRA_PLUGIN_DIR = os.path.join(os.environ.get('SNAP'), 'plugins')
BOT_LOG_FILE = BOT_DATA_DIR + '/err.log'

One final try, this time from the snap:

$ sudo apt install snapcraft
$ snapcraft
$ sudo snap install baymax*.snap --dangerous
$ baymax-elopio

And go back to IRC to check.

The last thing would be to push the source code we have just written to a GitHub repo, and enable continuous delivery in build.snapcraft.io. Go to your server and install the bot with sudo snap install baymax-elopio --edge. Now every time somebody from your team makes a change in the master repo in GitHub, the bot on your server will be automatically updated to get those changes within a few hours without any work from your side.

If you are into chatops, make sure that every time you do a manual task, you also plan for some time to turn that task into a script that can be executed by your bot. And get ready to enjoy tons and tons of free time, or just keep going through those 400 open bugs, whichever you prefer :)

Read more
Leo Arias

I love playing with my prototyping boards. Here at Ubuntu we are designing the core operating system to support every single-board computer, and keep it safe, updated and simple. I've learned a lot about physical computing, but I always have a big problem when my prototype is done and I want to deploy it. I am working with a Raspberry Pi, a DragonBoard, and a BeagleBone. They are all very different, with different architectures, different pins, onboard capabilities and peripherals, and they can have different operating systems. When I started learning about this, I had to write three very different programs if I wanted to try my prototype on all my boards.

picture of the three different SBCs

Then I found Gobot, a framework for robotics and IoT that supports my three boards, and many more. With the added benefit that you can write all the software in the lovely and clean Go language. The Ubuntu store supports all their architectures too, and packaging Go projects with snapcraft is super simple. So we can combine all of this to make a single snap package that, with the help of Gobot, will work on every board, and deploy it to all the users of these boards through the snap store.

Let's dig into the code with a very simple example to blink an LED, first for the Raspberry Pi only.

package main

import (
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  adaptor := raspi.NewAdaptor()
  led := gpio.NewLedDriver(adaptor, "7")

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

In there you will see some of the Gobot concepts. There's an adaptor for the board, a driver for the specific device (in this case the LED), and a robot to control everything. In this program, there are only two things specific to the Raspberry Pi: the adaptor and the name of the GPIO pin ("7").

picture of the Raspberry Pi prototype

It works nicely on one of the boards, but let's extend the code a little to support the other two.

package main

import (
  "log"
  "os/exec"
  "strings"
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/beaglebone"
  "gobot.io/x/gobot/platforms/dragonboard"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  out, err := exec.Command("uname", "-r").Output()
  if err != nil {
    log.Fatal(err)
  }
  var adaptor gobot.Adaptor
  var pin string
  kernelRelease := string(out)
  if strings.Contains(kernelRelease, "raspi2") {
    adaptor = raspi.NewAdaptor()
    pin = "7"
  } else if strings.Contains(kernelRelease, "snapdragon") {
    adaptor = dragonboard.NewAdaptor()
    pin = "GPIO_A"
  } else {
    adaptor = beaglebone.NewAdaptor()
    pin = "P8_7"
  }
  digitalWriter, ok := adaptor.(gpio.DigitalWriter)
  if !ok {
    log.Fatal("Invalid adaptor")
  }
  led := gpio.NewLedDriver(digitalWriter, pin)

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

We are basically adding a block to select the right adaptor and pin, depending on which board the code is running on. Now we can compile this program, copy the binary to the board, and give it a try.
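If you want to do that step by hand before packaging, Go's standard cross-compilation variables are enough; the board hostname and user below are only placeholders:

$ env GOOS=linux GOARCH=arm GOARM=7 go build -o blink .   # Raspberry Pi and BeagleBone (armhf)
$ env GOOS=linux GOARCH=arm64 go build -o blink .         # DragonBoard (arm64)
$ scp blink ubuntu@my-board.local:
$ ssh ubuntu@my-board.local sudo ./blink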

picture of the Dragonboard prototype

But we can do better. If we package this in a snap, anybody with one of the boards and an operating system that supports snaps can easily install it. We also open the door to continuous delivery and crowd testing. And as I said before, it's super simple: just put this in the snapcraft.yaml file:

name: gobot-blink-elopio
version: master
summary:  Blink snap for the Raspberry Pi with Gobot
description: |
  This is a simple example to blink an LED in the Raspberry Pi
  using the Gobot framework.

confinement: devmode

apps:
  gobot-blink-elopio:
    command: gobot-blink

parts:
  gobot-blink:
    source: .
    plugin: go
    go-importpath: github.com/elopio/gobot-blink

To build the snap, here is a cool trick thanks to the work that kalikiana recently added to snapcraft. I'm writing this code on my development machine, which is amd64. But the Raspberry Pi and BeagleBone are armhf, and the DragonBoard is arm64, so I need to cross-compile the code to get binaries for all the architectures:

snapcraft --target-arch=armhf
snapcraft clean
snapcraft --target-arch=arm64

That will leave two .snap files in my working directory that I can then upload to the store with snapcraft push. Or I can just push the code to GitHub and let build.snapcraft.io take care of building and pushing for me.

Here is the source code for this simple example: https://github.com/elopio/gobot-blink

Of course, Gobot supports many more devices that will let you build complex robots. Just take a look at the documentation in the Gobot site, and at the guide about deployable packages with Gobot and snapcraft.

picture of the BeagleBone prototype

If you have one of the boards I'm using here to play, give it a try:

sudo snap install gobot-blink-elopio --edge --devmode
sudo gobot-blink-elopio

Now my experiments will be to try to make the snap more secure, with strict confinement. If you have any questions or want to help, we have a topic in the forum.

Read more
admin

Hello MAASters!

The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS, and bug fixes in released MAAS versions.

MAAS 2.3 (current development release)

  • Completed Django 1.11 transition
      • MAAS 2.3 snap will use Django 1.11 by default.
      • Ubuntu package will use Django 1.11 in Artful+
  • Network beaconing & better network discovery
      • MAAS now listens for [unicast and multicast] beacons on UDP port 5240. Beacons are encrypted and authenticated using a key derived from the MAAS shared secret. Upon receiving certain types of beacons, MAAS will reply, confirming the sender that existing MAAS on the network has the same shared key. In addition, records are kept about which interface each beacon was received on, and what VLAN tag (if any) was in use on that interface. This allows MAAS to determine which interfaces observed the same beacon (and thus must be on the same fabric). This information can also determine if [what would previously have been assumed to be] a separate fabric is actually an alternate VLAN in an existing fabric.
      • The maas-rack send-beacons command is now available to test the beacon protocol. (This command is intended for testing and support, not general use.) The MAAS shared secret must be installed before the command can be used. By default, it will send multicast beacons out all possible interfaces, but it can also be used in unicast mode.
      • Note that while IPv6 support is planned, support for receiving IPv6 beacons in MAAS is not yet available. The maas-rack send-beacons command, however, is already capable of sending IPv6 beacons. (Full IPv6 support is expected to make beacons more flexible, since IPv6 multicast can be sent out on interfaces without a specific IP address assignment, and without resorting to raw sockets.)
      • Improvements to rack registration are now under development, so that users will see a more accurate representation of fabrics upon initial installation or registration of a MAAS rack controller.
  • Bug fixes
    • LP: #1701056: Show correct information for a device details page as a normal user
    • LP: #1701052: Do not show the controllers tab as a normal user
    • LP: #1683765: Fix format when devices/controllers are selected to match those of machines
    • LP: #1684216 – Update button label from ‘Save selection’ to ‘Update selection’
    • LP: #1682489 – Fix Cancel button on add user dialog, which caused the user to be added anyway
    • LP: #1682387 – Unassigned should be (Unassigned)

MAAS 2.2.1

The past week the team was also focused on preparing and QA’ing the new MAAS 2.2.1 point release, which was released on Friday June the 30th. For more information about the bug fixes please visit the following https://launchpad.net/maas/+milestone/2.2.1 .

MAAS 2.2.1 is available in:

  • ppa:maas/stable

Read more
Leo Arias

Travis CI offers a great continuous integration service for the projects hosted on GitHub. With it, you can run tests, deliver artifacts and deploy applications every time you push a commit, on pull requests, after they are merged, or with some other frequency.

Last week Travis CI updated the Ubuntu 14.04 (Trusty) machines that run your tests and deployment steps. This update came with a nice surprise for everybody working to deliver software to Linux users, because it is now possible to install snaps in Travis!

I've been excited all week telling people about all the doors that this opens; but if you have been following my adventures in the Ubuntu world, by now you can probably guess that I'm mostly thinking about all the potential this has for automated testing. For the automation of user acceptance tests.

User acceptance tests are executed from the point of view of the user, with your software presented as a black box to them. The tests can only interact with the software through the entry points you define for your users. If it's a CLI application, then the tests will call commands and subcommands and check the outputs. If it's a website or a desktop application, the tests will click things, enter text and check the changes on this GUI. If it's a service with an HTTP API, the tests will make requests and check the responses. On these tests, the closer you can get to simulate the environment and behaviour of your real users, the better.

Snaps are great for the automation of user acceptance tests because they are immutable and they bundle all their dependencies. With this we can make sure that your snap will work the same on any of the operating systems and architectures that support snaps. The snapd service takes care of hiding the differences and presenting a consistent execution environment for the snap. So, getting a green execution of these tests in the Trusty machine of Travis is a pretty good indication that it will work on all the active releases of Ubuntu, Debian, Fedora and even on a Raspberry Pi.

Let me show you an example of what I'm talking about, obviously using my favourite snap called IPFS. There is more information about IPFS in my previous post.

Check below the packaging metadata for the IPFS snap, a single snapcraft.yaml file:

name: ipfs
version: master
summary: global, versioned, peer-to-peer filesystem
description: |
  IPFS combines good ideas from Git, BitTorrent, Kademlia, SFS, and the Web.
  It is like a single bittorrent swarm, exchanging git objects. IPFS provides
  an interface as simple as the HTTP web, but with permanence built in. You
  can also mount the world at /ipfs.
confinement: strict

apps:
  ipfs:
    command: ipfs
    plugs: [home, network, network-bind]

parts:
  ipfs:
    source: https://github.com/ipfs/go-ipfs.git
    plugin: nil
    build-packages: [make, wget]
    prepare: |
      mkdir -p ../go/src/github.com/ipfs/go-ipfs
      cp -R . ../go/src/github.com/ipfs/go-ipfs
    build: |
      env GOPATH=$(pwd)/../go make -C ../go/src/github.com/ipfs/go-ipfs install
    install: |
      mkdir $SNAPCRAFT_PART_INSTALL/bin
      mv ../go/bin/ipfs $SNAPCRAFT_PART_INSTALL/bin/
    after: [go]
  go:
    source-tag: go1.7.5

It's not the simplest snap, because they use their own build tool to get the Go dependencies and compile; but it's also not too complex. If you are new to snaps and want to understand every detail of this file, or you want to package your own project, the tutorial to create your first snap is a good place to start.

What's important here is that if you run snapcraft using the snapcraft.yaml file above, you will get the IPFS snap. If you install that snap, then you can test it from the point of view of the user. And if the tests work well, you can push it to the edge channel of the Ubuntu store to start the crowdtesting with your community.

We can automate all of this with Travis. The snapcraft.yaml for the project must be already in the GitHub repository, and we will add there a .travis.yml file. They have good docs to prepare your Travis account. First, let's see what's required to build the snap:

sudo: required
services: [docker]

script:
  - docker run -v $(pwd):$(pwd) -w $(pwd) snapcore/snapcraft sh -c "apt update && snapcraft"

For now, we build the snap in a docker container to keep things simple. We have work in progress to be able to install snapcraft in Trusty as a snap, so soon this will be even nicer running everything directly in the Travis machine.

This previous step will leave the packaged .snap file in the current directory. So we can install it adding a few more steps to the Travis script:

[...]

script:
  - docker [...]
  - sudo apt install --yes snapd
  - sudo snap install *.snap --dangerous

And once the snap is installed, we can run it and check that it works as expected. Those checks are our automated user acceptance test. IPFS has a CLI client, so we can just run commands and verify outputs with grep. Or we can get fancier using shunit2 or bats. But the basic idea would be to add to the Travis script something like this:

[...]

script:
  [...]
  - /snap/bin/ipfs init
  - /snap/bin/ipfs cat /ipfs/QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T/readme | grep -z "^Hello and Welcome to IPFS!.*$"
  - [...]

If one of those checks fails, Travis will mark the execution as failed and stop our release process until we fix it. If instead all of the checks pass, then this version is good enough to put into the store, where people can take it and run exploratory tests to try to find problems caused by weird scenarios that we missed in the automation. To help with that we have the snapcraft enable-ci travis command, and a tutorial to guide you step by step setting up the continuous delivery from Travis CI.
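For reference, this is what the fragments above look like when put together into a single .travis.yml (just the pieces shown here, without the store-deployment part that snapcraft enable-ci travis adds):

sudo: required
services: [docker]

script:
  - docker run -v $(pwd):$(pwd) -w $(pwd) snapcore/snapcraft sh -c "apt update && snapcraft"
  - sudo apt install --yes snapd
  - sudo snap install *.snap --dangerous
  - /snap/bin/ipfs init
  - /snap/bin/ipfs cat /ipfs/QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T/readme | grep -z "^Hello and Welcome to IPFS!.*$"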

For the IPFS snap we have had a manual smoke suite for a long time, which our amazing community of testers has been executing over and over again, every time we want to publish a new release. I've turned it into a simple bash script that from now on will be executed frequently by Travis, and will tell us if there's something wrong before anybody gives it a try manually. With this our community of testers will have more time to run new and interesting scenarios, trying to break the application in clever ways, instead of running the same repetitive steps many times.

Thanks to Travis and snapcraft we no longer have to worry about a big part of our release process. Continuous integration and delivery can be fully automated, and we will have to take a look only when something breaks.

As for IPFS, it will keep being my guinea pig to guide new features for snapcraft and showcase them when ready. It has many more commands that have to be added to the automated test suite, and it also has a web UI and an HTTP API. Lots of things to play with! If you would like to help, and on the way learn about snaps, automation and the decentralized web, please let me know. You can take a look at my IPFS snap repo for more details about testing snaps in Travis, and other tricks for the build and deployment.

screenshot of the IPFS smoke test running in travis

Read more
admin

Announcements

  • Transition to Git in Launchpad
    The MAAS team is happy to announce that we have moved our code repositories away from Bazaar. We are now using Git in Launchpad.[1]

MAAS 2.3 (current development release)

This week, the team has worked on the following features and improvements:

  • Codebase transition from bzr to git – This week the team has focused efforts on updating all processes for the upcoming transition to Git. The progress involved:
    • Updated Jenkins job configuration to run CI tests from Git instead of bzr.
    • Created new Jenkins jobs to test older releases via Git instead of bzr.
    • Updated the Jenkins job triggering mechanism from using Tarmac to using the Jenkins Git plugin.
    • Replaced the maas code lander (based on tarmac) with a Jenkins job to automatically land approved branches.
      • This also includes a mechanism to automatically set milestones and close Launchpad bugs.
    • Updated Snap building recipe to build from Git. 
  • Removal of ‘tgt’ as a dependency behind a feature flag – This week we have landed the ability to load ephemeral images via HTTP from the initrd, instead of doing it via iSCSI (served by ‘tgt’). While the use of ‘tgt’ is still the default, the ability to not use it is hidden behind a feature flag (http_boot). This is only available in trunk.
  • Django 1.11 transition – We are down to the latest items of the transition, and we are targeting it to be completed by the upcoming week. 
  • Network Beaconing & better network discovery – The team is continuing to make progress on beacons. Following a thorough review, the beaconing packet format has been optimized; beacon packets are now simpler and more compact. We are targeting rack registration improvements for next week, so that newly-registered rack controllers do not create new fabrics if an interface can be determined to be on an existing fabric.

Bug Fixes

The following issues have been fixed and backported to MAAS 2.2 branch. This will be available in the next point release of MAAS 2.2 (2.2.1). The MAAS team is currently targeting a new 2.2.1 release for the upcoming week.

  • LP #1687305 – Fix virsh pods reporting wrong storage
  • LP #1699479 – A couple of unstable tests failing when using IPv6 in LXC containers

[1]: https://git.launchpad.net/maas

Read more
Dustin Kirkland

Thank you to Oracle Cloud for inviting me to speak at this month's CloudAustin Meetup hosted by Rackspace.

I very much enjoyed deploying Canonical Kubernetes on Ubuntu in the Oracle Cloud, and then exploring Kubernetes a bit, how it works, the architecture, and a simple workload within.  I'm happy to share my slides below, and you can download a PDF here:


If you're interested in learning more, check out:
It was a great audience, with plenty of good questions, pizza, and networking!

I'm pleased to share my slide deck here.

Cheers,
Dustin

Read more
Sciri

Note: Community TFTP documentation is on the Ubuntu Wiki but this short guide adds extra steps to help secure and safeguard your TFTP server.

Every Data Centre Engineer should have a TFTP server somewhere on their network, whether it be running on a production host or on their own notebook for disaster recovery. And since TFTP is lightweight, without any user authentication, care should be taken to prevent access to or overwriting of critical files.

The following example is similar to the configuration I run on my personal Ubuntu notebook and home Ubuntu servers. This allows me to do switch firmware upgrades and backup configuration files regardless of environment since my notebook is always with me.

Step 1: Install TFTP and TFTP server

$ sudo apt update; sudo apt install tftp-hpa tftpd-hpa

Step 2: Configure TFTP server

The default configuration below allows switches and other devices to download files, but if you have predictable filenames, then anyone can download those files if you run the TFTP server on your notebook. This can lead to the dissemination of copyrighted firmware images or config files that may contain passwords and other sensitive information.

# /etc/default/tftpd-hpa

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"

Instead of keeping any files directly in the /var/lib/tftpboot base directory I’ll use mktemp to create incoming and outgoing directories with hard-to-guess names. This prevents guessing common filenames.

First create an outgoing directory owned by root mode 755. Files in this directory should be owned by root to prevent unauthorized or accidental overwriting. You wouldn’t want your expensive Cisco IOS firmware image accidentally or maliciously overwritten.

$ cd /var/lib/tftpboot
$ sudo chmod 755 $(sudo mktemp -d XXXXXXXXXX --suffix=-outgoing)

Next create an incoming directory owned by tftp, mode 700. This allows tftpd-hpa to create files in this directory if configured to do so.

$ sudo chown tftp:tftp $(sudo mktemp -d XXXXXXXXXX --suffix=-incoming)
$ ls -1
ocSZiwPCkH-outgoing
UHiI443eTG-incoming

Configure tftpd-hpa to allow creation of new files. Simply add --create to TFTP_OPTIONS in /etc/default/tftpd-hpa.

# /etc/default/tftpd-hpa

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure --create"

And lastly restart tftpd-hpa.

$ sudo /etc/init.d/tftpd-hpa restart
[ ok ] Restarting tftpd-hpa (via systemctl): tftpd-hpa.service.
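If you want to confirm the daemon is listening before moving on, a quick look at UDP port 69 does the trick; you should see the in.tftpd process bound to port 69 (exact output varies):

$ sudo ss -ulpn | grep ':69'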

Step 3: Firewall rules

If you have a software firewall enabled, you’ll need to allow access to port 69/udp. Either add this rule to your firewall scripts if you manually configure iptables, or run the following UFW command:

$ sudo ufw allow tftp
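At this point you can sanity-check the server from the notebook itself with the tftp-hpa client installed in Step 1 (test.txt is just a throwaway example file; substitute your own incoming directory name):

$ echo "tftp test" > test.txt
$ tftp 192.168.0.1 -c put test.txt UHiI443eTG-incoming/test.txt
$ tftp 192.168.0.1 -c get UHiI443eTG-incoming/test.txt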

Step 4: Transfer files

Before doing a firmware upgrade or other possibly destructive maintenance, I always back up my switch config and firmware.

cisco-switch#copy running-config tftp://192.168.0.1/UHiI443eTG-incoming/config-cisco-switch
Address or name of remote host [192.168.0.1]? 
Destination filename [UHiI443eTG-incoming/config-cisco-switch]? 
 
 !!
3554 bytes copied in 0.388 secs (9160 bytes/sec)
cisco-switch#copy flash:?
flash:c1900-universalk9-mz.SPA.156-3.M2.bin flash:ccpexp flash:cpconfig-19xx.cfg flash:home.shtml
flash:vlan.dat

cisco-switch#copy flash:c1900-universalk9-mz.SPA.156-3.M2.bin tftp://192.168.0.1/UHiI443eTG-incoming/c1900-universalk9-mz.SPA.156-3.M2.bin 
Address or name of remote host [192.168.0.1]? 
Destination filename [UHiI443eTG-incoming/c1900-universalk9-mz.SPA.156-3.M2.bin]? 
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
85258084 bytes copied in 172.692 secs (493700 bytes/sec)

Files in incoming will be owned by tftp, mode 666 (world writable), by default. Remember to move those files to your outgoing directory and change ownership to root, mode 644, for safekeeping.

Once you’re sure your switch config and firmware are safely backed up, it’s safe to copy new firmware to flash or do any other required destructive maintenance.

Step 5: Prevent TFTP access

It’s good practice on a notebook to deny services when they’re not actively in use. Assuming you have a software firewall, be sure to deny access to your TFTP server when on the road or when connected to hostile networks.

$ sudo ufw deny tftp
Rule updated
Rule updated (v6)
$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
CUPS                       ALLOW       Anywhere
OpenSSH                    DENY        Anywhere
69/udp                     DENY        Anywhere
CUPS (v6)                  ALLOW       Anywhere (v6)
OpenSSH (v6)               DENY        Anywhere (v6)
69/udp (v6)                DENY        Anywhere (v6)

Read more
admin

Thursday June 8th, 2017

The MAAS team is happy to announce the introduction of development summaries. We hope this helps to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS, and bugs fixed in released MAAS versions.

Announcements

With the MAAS 2.2 release out of the door, we are happy to announce that:

  • MAAS 2.3 is now open for development.
  • MAAS is moving to Git in Launchpad – In the coming weeks, MAAS source will be hosted in a Git repository in Launchpad, once we complete the work of updating all our internal processes (e.g. CI, landers, etc.).

MAAS 2.3 (current development release)

With the team now focusing efforts on the new development release, MAAS 2.3, the team has been working on the following features and improvements:

  • Started adding support for Django 1.11 – MAAS will continue to be backward compatible with Django 1.8.
  • Adding support for an ‘upstream’ proxy – MAAS-deployed machines will continue to use MAAS’ internal proxy, while allowing MAAS’ proxy to communicate with an upstream proxy.
  • Started adding network beaconing – New feature to support better network (subnets, VLANs) discovery and allow fabric deduplication.
    • Officially registered IPv4 and IPv6 multicast groups for MAAS beaconing (224.0.0.118 and ff02::15a, respectively).
    • Implemented a mechanism to provide authenticated encryption using the MAAS shared secret.
    • Prototyped initial beaconing multicast join mechanism and receive path.

Libmaas (python-libmaas)

With the continuous improvement of the new MAAS Python Library (python-libmaas), we have focused our efforts on the following improvements the past week:

  • Add support to be able to provide nested objects and object sets.
  • Add support to be able to update any object accessible via the library.
  • Add ability to read interfaces (nested) under Machines, Devices, Rack Controllers and Region Controllers.
  • Add ability to read VLANs (nested) under Fabrics.

Bug Fixes

The following issues have been fixed and backported to MAAS 2.2 branch. This will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

  • Bug #1694767: RSD composition not setting local disk tags
  • Bug #1694759: RSD Pod refresh shows ComposedNodeState is “Failed”
  • Bug #1695083: Improve NTP IP address selection for MAAS DHCP clients.

Questions?

IRC – Find us on #maas @ freenode.

ML – https://lists.ubuntu.com/mailman/listinfo/maas-devel

Read more
Michael Hall

After a little over 6 years, I am embarking on a new adventure. Today is my last day at Canonical; it’s bittersweet saying goodbye precisely because it has been such a joy and an honor to be working here with so many amazing, talented and friendly people. But I am leaving by choice, and for an opportunity that makes me as excited as leaving makes me sad.

Goodbye Canonical

I’ve worked at Canonical longer than I’ve worked at any company, and I can honestly say I’ve grown more here both personally and professionally than I have anywhere else. It launched my career as a Community Manager, learning from the very best in the industry how to grow, nurture, and excite a world full of people who share the same ideals. I owe so many thanks (and beers) to Jono Bacon, David Planella, Daniel Holbach, Jorge Castro, Nicholas Skaggs, Alan Pope, Kyle Nitzsche and now also Martin Wimpress. I also couldn’t have done any of this without the passion and contributions of everybody in the Ubuntu community who came together around what we were doing.

As everybody knows by now, Canonical has been undergoing significant changes in order to set it down the road to where it needs to be as a company. And while these changes aren’t the reason for my leaving, it did force me to think about where I wanted to go with my future, and what changes were needed to get me there. Canonical is still doing important work, I’m confident it’s going to continue making a huge impact on the technology and open source worlds and I wish it nothing but success. But ultimately I decided that where I wanted to be was along a different path.

Of course I have to talk about the Ubuntu community here. As big of an impact as Canonical had on my life, it’s only a portion of the impact that the community has had. From the first time I attended a Florida LoCo Team event, I was hooked. I had participated in open source projects before, but that was when I truly understood what the open source community was about. Everybody I met, online or in person, went out of their way to make me feel welcome, valuable, and appreciated. In fact, it was the community that lead me to work for Canonical in the first place, and it was the community work I did that played a big role in me being qualified for the job. I want to give a special shout out to Daniel Holbach and Jorge Castro, who built me up from a random contributor to a project owner, and to Elizabeth Joseph and Laura Faulty who encouraged me to take on leadership roles in the community. I’ve made so many close and lasting friendships by being a part of this amazing group of people, and that’s something I will value forever. I was a community member for years before I joined Canonical, and I’m not going anywhere now. Expect to see me around on IRC, mailing lists and other community projects for a long time to come.

Hello Endless

Next week I will be joining the team at Endless as their Community Manager. Endless is an order of magnitude smaller than Canonical, and they have a young community that is still getting off the ground. So even though I’ll have the same role I had before, there will be new and exciting challenges involved. But the passion is there, both in the company and the community, to really explode into something big and impactful. In the coming months I will be working to set up the tools, processes and communication that will be needed to help that community grow and flourish. After meeting with many of the current Endless employees, I know that my job will be made easier by their existing commitment to both their own community and their upstream communities.

What really drew me to Endless was the company’s mission. It’s not just about making a great open source project that is shared with the world, they have a specific focus on social good and improving the lives of people who the current technology isn’t supporting. As one employee succinctly put it to me: the whole world, empowered. Those who know me well will understand why this resonates with me. For years I’ve been involved in open source projects aimed at early childhood education and supporting those in poverty or places without the infrastructure that most modern technology requires. And while Ubuntu covers much of this, it wasn’t the primary focus. Being able to work full time on a project that so closely aligned with my personal mission was an opportunity I couldn’t pass up.

Broader horizons

Over the past several months I’ve been expanding the number of communities I’m involved in. This is going to increase significantly in my new role at Endless, where I will be working more frequently with upstream and side-stream projects on areas of mutual benefit and interest. I’ve already started to work more with KDE, and I look forward to becoming active in GNOME and other open source desktops soon.

I will also continue to grow my independent project, Phoenicia, which has a similar mission to Endless but a different technology and audience. Now that this is no longer competing in the XPRIZE competition, it releases some restrictions that we had to operate under and frees us to investigate new areas of innovation and collaboration. If you’re interested in game development, or making an impact on the lives of children around the world, come and see what we’re doing.

If anybody wants to reach out to me to chat, you can still reach me at mhall119@ubuntu.com and soon at mhall119@endlessm.com, tweet me at @mhall119, connect on LinkedIn, chat on Telegram or circle me on Google+. And if we’re ever at a conference together give me a shout, I’d love to grab a drink and catch up.

Read more
admin

I’m happy to announce that MAAS 2.2.0 (final) has now been released, and it introduces quite a few exciting features:

  • MAAS Pods – Ability to dynamically create a machine on demand. This is reflected in MAAS’ support for Intel Rack Scale Design.
  • Hardware Testing
  • DHCP Relay Support
  • Unmanaged Subnets
  • Switch discovery and deployment on Facebook’s Wedge 40 & 100.
  • Various improvements and minor features.
  • MAAS Client Library
  • Intel Rack Scale Design support.

For more information, please read the release notes, which are available here.

Availability
MAAS 2.2.0 is currently available in the following MAAS team PPA.
ppa:maas/next
Please note that MAAS 2.2 will replace the MAAS 2.1 series, which will go out of support. We are holding MAAS 2.2 in the above PPA for a week, to provide enough notice to users that it will replace the 2.1 series. In the following weeks, MAAS 2.2 will be backported into Ubuntu Xenial.
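To install from that PPA on Ubuntu Xenial, the usual steps apply (a sketch; maas is the metapackage that pulls in both the region and rack controllers):

$ sudo add-apt-repository ppa:maas/next
$ sudo apt update
$ sudo apt install maas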

Read more
Dustin Kirkland


Canonical announced the Ubuntu 12.04 LTS (Precise Pangolin) release almost 5 years ago, on April 26, 2012. As with all LTS releases, Canonical has provided ongoing security patches and bug fixes for a period of 5 years. The Ubuntu 12.04 LTS (Long Term Support) period will end on Friday, April 28, 2017.

Following the end-of-life of Ubuntu 12.04 LTS, Canonical is offering Ubuntu 12.04 ESM (Extended Security Maintenance), which provides important security fixes for the kernel and the most essential user space packages in Ubuntu 12.04.  These updates are delivered in a secure, private archive exclusively available to Ubuntu Advantage customers on a per-node basis.

All Ubuntu 12.04 LTS users are encouraged to upgrade to Ubuntu 14.04 LTS or Ubuntu 16.04 LTS. But for those who cannot upgrade immediately, Ubuntu 12.04 ESM updates will help ensure the on-going security and integrity of Ubuntu 12.04 systems.
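For systems staying on Ubuntu, the in-place route is one LTS at a time with the standard release upgrader: a fully updated 12.04 machine is offered 14.04, and repeating the same steps from 14.04 then offers 16.04 (a sketch of the usual procedure):

$ sudo apt-get update && sudo apt-get dist-upgrade   # bring 12.04 fully up to date first
$ sudo do-release-upgrade                            # offers the next LTS, 14.04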

Users interested in Ubuntu 12.04 ESM updates can purchase Ubuntu Advantage at http://buy.ubuntu.com/. Credentials for the private archive will be available by the end-of-life date for Ubuntu 12.04 LTS (April 28, 2017).

Questions?  Post in the comments below and join us for a live webinar, "HOWTO: Ensure the Ongoing Security Compliance of your Ubuntu 12.04 Systems", on Wednesday, March 22nd at 4pm GMT / 12pm EDT / 9am PDT.  Here, we'll discuss Ubuntu 12.04 ESM and perform a few live upgrades of Ubuntu 12.04 LTS systems.

Cheers,
Dustin

Read more