Canonical Voices

Alex Cattle

An image displaying a range of devices connected to a mobile network.

Mobile operators face a range of challenges today, from market saturation and competition to regulation – all of which are having a negative impact on revenues. The introduction of 5G offers new customer segments and services to offset this decline. However, unlike the introduction of 4G, which was dominated by consumer benefits, 5G is expected to be driven by enterprise use. According to IDC, enterprises will generate 60 percent of the world’s data by 2025.

Rather than rely on costly proprietary hardware and operating models, the use of open source technologies offers the ability to commoditise and democratise the wireless network infrastructure. Major operators such as Vodafone, Telefonica and China Mobile have already adopted such practices.

Shifting to open source technology and taking a software defined approach enables mobile operators to differentiate based on the services they offer, rather than network coverage or subscription costs.

This whitepaper will explain how mobile operators can break the proprietary stranglehold and adopt an open approach including:

  • The open source initiatives and technologies available today and being adopted by major operators.
  • How a combination of software-defined radio, mobile base stations and third-party app development can provide a way for mobile operators to differentiate and drive down CAPEX.
  • Use cases from Vodafone and EE showing successful implementations of an open source approach.

To view the whitepaper, sign up using the form below:

The post The future of mobile connectivity appeared first on Ubuntu Blog.

Read more
Anthony Dillon

This was a fairly busy two weeks for the Web & design team at Canonical.  Here are some of the highlights of our completed work.

Web squad

Web is the squad that develops and maintains most of the brochure websites across Canonical.

Getting started with AI webinar

We built a page to promote the AI webinar.

Watch the webinar

Legal page updates

There have been several minor changes to the legal pages on the site.

New telco whitepaper

Designed and built a takeover and landing page for a whitepaper on telecommunications, featuring Lime Microsystems.

Get the whitepaper

Blog moved

This was a collaborative effort between multiple squads in the team, which resulted in moving the blog to its new home. The move is now live, with redirects from the old links to the new home of the blog.

Visit the blog


Brand squad

The Brand squad champion a consistent look and feel across all media, from web to social to print and logos.

Marketing support

We completed a few documents for the Marketing team: datasheets and a whitepaper.

Suru exploration

We did some initial exploration work on how we can extend the use of Suru across our websites. We concentrated on its use in the header and footer sections of bubble pages. The aim is to achieve consistent use across all our websites and to provide guidelines for how it should be used, to help the development teams going forward.


We pushed on with the development of our illustrations, completing an audit of illustrations currently used across all our websites and putting a plan in place to update them in stages.

We designed an illustration for ‘The future of mobile connectivity’ to go in a takeover.


Finalised a list of UI icons with the UX teams with a view to create a new complete set to go across all our products, delivering a consistent user experience.

Slide deck tutorial

We completed the first stage of a video tutorial to go alongside the Master slide deck, guiding you on how to use the new deck, do’s and don’ts, and tips for creating great-looking slides.


MAAS squad

The MAAS squad develop the UI for the MAAS project.

Understand network testing

As the engineers are starting to flesh out their part of the spec for network testing in MAAS – a new feature aimed at the 2.7 release – the UX team spent a significant amount of time learning and understanding what network testing implies and how it will work. We began creating initial sketches to help both the design and the engineering team align on what the UI could look like.

Upgrade to Vanilla 2.0 

We upgraded the MAAS application to Vanilla 2.0, making use of all the great features introduced with it. As the new version of the framework introduces quite a few changes, there are a number of issues in the MAAS UI that need to be fixed – this work was started and will continue in the next iteration.

Implement small graphs for KVM listing

We are introducing new small graphs within a table that will, to begin with, be used on the KVM and RSD pages, allowing users to see their used and available resources at a glance. The work began with implementing graphs and infotips (rich tooltips) displaying storage, and will continue in the next iteration with graphs displaying cores.


JAAS squad

The JAAS squad develops the UI for the JAAS store and Juju GUI projects.

Juju status project

We gathered the pieces of information that would be good to display in the new Juju status project. The new GUI will help Juju scale up, targeting users with lots of Juju controllers/models, stitching all the bootstrapping together: the status of all models, with metadata about the controllers, plus analytics and stats. JAAS is the intersection of Juju, the CLI, models and solutions.

Site enhancements

We have worked on an update to the homepage hero area to include a slideshow. This will help to promote different types of content on the homepage.


Vanilla squad

The Vanilla squad design and maintain the design system and Vanilla framework library. They ensure a consistent style throughout web assets.

Released Vanilla 2.0

Over the past year, we’ve been working hard to bring you the next release of Vanilla framework: version 2.0, our most stable release to date.

Since our last significant release, v1.8.0 back in July last year, we’ve been working hard to bring you new features, improve the framework and make it the most stable version we’ve released.

You can see the full list of new and updated changes in the full release notes. Alternatively, you can read up on the high-level highlights in our recent blog post, New release: Vanilla framework 2.0, and to help with your upgrade to 2.0 we’ve written a step-by-step guide to help you along the way.

Site upgrades

With the release of Vanilla 2.0 we’ve been rolling upgrades across some of our marketing sites.


Snapcraft

The Snapcraft team work closely with the snap store team to develop and maintain the snap store website.

Distro pages

Upon the release of our new pages with instructions for enabling snap support, we got some coverage in the media. Check out these articles:

These pages have been generating aggregated traffic of around 2,000 visits per day since they launched, and they keep growing without any specific campaign having been run.

Release UI drag’n’drop

The Release UI has received some love this iteration. We’ve updated the visual style to improve usability and worked on drag’n’dropping channels and releases – which will be landing soon.

There was some refactoring needed to make the right components draggable, and we took the opportunity to experiment with React hooks. This work was the foundation for a better release experience and flexibility that we’ll be working on in the coming iteration.

Commercial applications

Responsive UI layout

The UI for the base application is now responsive and has been optimised for tablet and mobile devices. The work comprised two components:

  1. Modifying the existing CSS Grid layout at certain breakpoints, which was relatively simple, and;
  2. Migrating the react-virtualized tables to react-window (a much leaner alternative for virtualized lists) and making them responsive. This piece of work was substantially more difficult.


Candid

Candid is our service which provides macaroon-based authentication.

Multi log-in, implementation of the design

Candid has the ability to log in via a selection of registered identity providers. We required an interface for selecting the provider you want to use to authenticate with. This task included applying the Vanilla framework to the existing views, with improvements to performance and maintainability.

Blog Posts

The post Design and Web team summary – 25 June 2019 appeared first on Ubuntu Blog.

Read more

Thanks to the huge amount of feedback this weekend from gamers, Ubuntu Studio, and the WINE community, we will change our plan and build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS.

We will put in place a community process to determine which 32-bit packages are needed to support legacy software, and can add to that list post-release if we miss something that is needed.

Community discussions can sometimes take unexpected turns, and this is one of those. The question of support for 32-bit x86 has been raised and seriously discussed in Ubuntu developer and community forums since 2014. That’s how we make decisions.

After the Ubuntu 18.04 LTS release we had extensive threads on the ubuntu-devel list and also consulted Valve in detail on the topic. None of those discussions raised the passions we’ve seen here, so we felt we had sufficient consensus for the move in Ubuntu 20.04 LTS. We do think it’s reasonable to expect the community to participate and to find the right balance between enabling the next wave of capabilities and maintaining the long tail. Nevertheless, in this case it’s relatively easy for us to change plan and enable natively in Ubuntu 20.04 LTS the applications for which there is a specific need.

We will also work with the WINE, Ubuntu Studio and gaming communities to use container technology to address the ultimate end of life of 32-bit libraries; it should stay possible to run old applications on newer versions of Ubuntu. Snaps and LXD enable us both to have complete 32-bit environments, and bundled libraries, to solve these issues in the long term.

There is real risk to anybody who is running a body of software that gets little testing. The facts are that most 32-bit x86 packages are hardly used at all. That means fewer eyeballs, and more bugs. Software continues to grow in size at the high end, making it very difficult to even build new applications in 32-bit environments. You’ve heard about Spectre and Meltdown – many of the mitigations for those attacks are unavailable to 32-bit systems.

This led us to stop creating Ubuntu install media for i386 last year and to consider dropping the port altogether at a future date.  It has always been our intention to maintain users’ ability to run 32-bit applications on 64-bit Ubuntu – our kernels specifically support that.

The Ubuntu developers remain committed as always to the principle of making Ubuntu the best open source operating system across desktop, server, cloud, and IoT.  We look forward to the ongoing engagement of our users in continuing to make this principle a reality.

The post Statement on 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS appeared first on Ubuntu Blog.

Read more
Jeremie Deray

Disclosure: read the post until the end, a surprise awaits you!

Moving from ROS 1 to ROS 2 can be a little overwhelming.
There are a lot of (new) concepts and tools, and a large codebase to get familiar with. And just like many of you, I am getting started with ROS 2.

One of the central pieces of the ROS ecosystem is its Command Line Interface (CLI). It allows for performing all kinds of actions: from retrieving information about the codebase and/or the runtime system, to executing code and, of course, helping with debugging in general. It’s a very valuable set of tools that ROS developers use on a daily basis. Fortunately, pretty much all of those tools were ported from ROS 1 to ROS 2.

To those already familiar with ROS, the ROS 2 CLI wording will sound very familiar. Commands such as roslaunch are ported to ros2 launch, rostopic becomes ros2 topic, while rosparam is now ros2 param.
Noticed the pattern already? Yes, that’s right! The keyword ‘ros2‘ has become the unique entry point for the CLI.

So what? ROS CLI keywords were broken in two and that’s it?

Well, yes pretty much.

Every command starts with the ros2 keyword, followed by a verb, a sub-verb and possibly positional/optional arguments. The pattern is then,

$ ros2 verb sub-verb <positional-argument> <optional-arguments>

Notice that throughout the CLI, the auto-completion (the infamous [tab][tab]) is readily available for verbs, sub-verbs and most positional arguments. Similarly, helpers are available at each stage,

$ ros2 verb --help
$ ros2 verb sub-verb -h

Let us see a few examples,

$ ros2 run demo_nodes_cpp talker
starts the talker C++ node from the demo_nodes_cpp package.

$ ros2 run demo_nodes_py listener
starts the listener Python node from the demo_nodes_py package.

$ ros2 topic echo /chatter
outputs the messages sent from the talker node.

$ ros2 node info /listener
outputs information about the listener node.

$ ros2 param list
lists all parameters of every node.

Fairly similar to ROS 1, right?

Missing CLI tools

We mentioned earlier that most of the CLI tools were ported to ROS 2, but not all. We believe such missing tools are one of the barriers to greater adoption of ROS 2, so we’ve started adding some that we noticed were missing. Over the past week we contributed 5 sub-verbs, including one that is exclusive to ROS 2. Let us briefly review them,

$ ros2 topic find <message-type>
outputs a list of all topics publishing messages of a given type (#271).

$ ros2 topic type <topic-name>
outputs the message type of a given topic (#272).

$ ros2 service find <service-type>
outputs a list of all services of a given type (#273).

$ ros2 service type <service-name>
outputs the service type of a given service (#274).

These tools are pretty handy by themselves, especially for debugging and grasping an overview of a running system. And they become even more interesting when combined, say, in handy little scripts,

$ ros2 topic pub /chatter $(ros2 topic type /chatter) "data: Hello ROS 2 Developers"

Have you ever looked for the version of a package you are using?
Ever wondered who the package author is?
Or which other packages it depends upon?
All of this information, locked in the package’s XML manifest, is now easily available at your fingertips!

The new sub-verb we introduced allows one to retrieve any information contained in a package’s XML manifest (#280). The command,

$ ros2 pkg xml <package-name>
outputs the entirety of the xml manifest of a given package.
To retrieve solely a piece of it – a tag, in XML wording – use the --tag option,

$ ros2 pkg xml <package-name> --tag <tag-name>

A few examples are (at the time of writing),

$ ros2 pkg xml demo_nodes_cpp --tag version

$ ros2 pkg xml demo_nodes_py -t author
Mikael Arguedas
Esteve Fernandez

$ ros2 pkg xml intra_process_demo -t build_depend libopencv-dev

This concludes our brief review of the changes that ROS 2 introduced to the CLI tools.

Before leaving, let me offer you a treat.

— A ROS 2 CLI Cheat Sheet that we put together —

Feel free to share it, print it and pin it above your screen, and also contribute to it, as it is hosted on GitHub!


The post ROS 2 Command Line Interface appeared first on Ubuntu Blog.

Read more

MicroK8s can be used to run Kubernetes on Mac for testing and developing apps on macOS. Follow the steps below for easy setup.

Kubernetes on Mac install and Grafana dashboard

MicroK8s is the local distribution of Kubernetes developed by Canonical. It’s a compact Linux snap that installs a single-node cluster on a local PC. Although MicroK8s is only built for Linux, Kubernetes on Mac works by setting up a cluster in an Ubuntu VM.

It runs all Kubernetes services natively on Ubuntu and any operating system (OS) which supports snaps. This is beneficial for testing and building apps, creating simple Kubernetes clusters and developing microservices locally –  essentially all dev work that needs deployment.

MicroK8s provides another level of reliability because it delivers the most current version of Kubernetes for development. The latest upstream version of Kubernetes is always available on Ubuntu within one week of its official release.

Kubernetes on Mac set up steps

Kubernetes and MicroK8s both need a Linux kernel to work and require an Ubuntu VM as mentioned above. Mac users also need Multipass, the tool for launching Ubuntu VMs on Mac, Windows and Linux.

Here are instructions to set up Multipass and to run Kubernetes on Mac.

Step 1: Install a VM for Mac using Multipass

The latest Multipass package is available on GitHub. Double click the .pkg file to install it.

To start a VM with MicroK8s run:

multipass launch --name microk8s-vm --mem 4G --disk 40G
multipass exec microk8s-vm -- sudo snap install microk8s --classic
multipass exec microk8s-vm -- sudo iptables -P FORWARD ACCEPT

Make enough resources available for hosting. Above we’ve created a VM named microk8s-vm and given it 4GB of RAM and 40GB of disk.

The VM has an IP that can be checked with the following command (take note of this IP, since our services will become available on it):

multipass list
Name         State IPv4            Release
microk8s-vm  RUNNING   Ubuntu 18.04 LTS
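
If you want to capture that IP for later scripting, you could parse the `multipass list` output. A minimal sketch: the sample output below (including the IP address) is hypothetical – in practice, pipe the real `multipass list` into `awk` instead of the here-document:

```shell
# Extract the IPv4 column for the microk8s-vm row.
# Replace the here-document with the real `multipass list` command.
VM_IP=$(awk '$1 == "microk8s-vm" {print $3}' <<'EOF'
Name         State    IPv4          Release
microk8s-vm  RUNNING  192.168.64.2  Ubuntu 18.04 LTS
EOF
)
echo "$VM_IP"
```

The captured value can then be reused, e.g. when pointing a browser or curl at services exposed by the VM.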

Step 2: Interact with MicroK8s on the VM

This can be done in three ways:

  • Using a Multipass shell prompt (command line) by running:
multipass shell microk8s-vm                                                                                     
  • Using multipass exec to execute a command without a shell prompt by inputting:
multipass exec microk8s-vm -- /snap/bin/microk8s.status                             
  • Using the Kubernetes API server running in the VM. Here one would use MicroK8s kubeconfig file with a local installation of kubectl to access the in-VM-kubernetes. Do this by running:
multipass exec microk8s-vm -- /snap/bin/microk8s.config > kubeconfig     

Next, install kubectl on the host machine and then use the kubeconfig:

kubectl --kubeconfig=kubeconfig get all --all-namespaces            
Default service/kubernetes ClusterIP <none> 443/TCP 3m12s

Step 3: Access in-VM Multipass services – enabling MicroK8s add-ons

A basic MicroK8s add-on to set up is the Grafana dashboard. Below we show one way of accessing Grafana to monitor and analyse a MicroK8s instance. To do this execute:

multipass exec microk8s-vm -- /snap/bin/microk8s.enable dns dashboard
Enabling DNS
Applying manifest
service/kube-dns created
serviceaccount/kube-dns created
configmap/kube-dns created
deployment.extensions/kube-dns created
Restarting kubelet
DNS is enabled
Enabling dashboard
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
service/monitoring-grafana created
service/monitoring-influxdb created
service/heapster created
deployment.extensions/monitoring-influxdb-grafana-v4 created
serviceaccount/heapster created
configmap/heapster-config created
configmap/eventer-config created
deployment.extensions/heapster-v1.5.2 created
dashboard enabled

Next, check the deployment progress by running:

multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get all --all-namespaces                                                                                                                        

Which should return output similar to:


Once all the necessary services are running, the next step is to access the dashboard, for which we need a URL to visit. To do this, run:

multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl cluster-info  
Kubernetes master is running at
Heapster is running at
KubeDNS is running at
Grafana is running at
InfluxDB is running at

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

If we were inside the VM, we could access the Grafana dashboard by visiting the Grafana URL shown in the cluster-info output. But we want to access the dashboard from the host (i.e. outside the VM). We can use a proxy to do this:

multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl proxy --address='' --accept-hosts='.*' 
Starting to serve on [::]:8001

Leave the Terminal open with this command running and take note of the port (8001). We will need this next.

To visit the Grafana dashboard from the host, we need to modify the in-VM dashboard URL so that it goes through the proxy.
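
As a sketch of the general pattern: with `kubectl proxy` listening on port 8001, in-VM service dashboards are typically reachable through the API server's service proxy path. The namespace and service name below are taken from the `microk8s.enable` output earlier in this post, but treat them as assumptions and verify them against your own cluster:

```shell
# Build the proxied Grafana URL. All three values are assumptions
# based on the enable-addons output shown earlier.
PORT=8001
NAMESPACE=kube-system
SERVICE=monitoring-grafana
echo "http://localhost:${PORT}/api/v1/namespaces/${NAMESPACE}/services/${SERVICE}/proxy"
```

Opening the printed URL in a browser on the host should show the Grafana dashboard, as long as the proxy command from the previous step is still running.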

Kubernetes on Mac in summary

Building apps that are easy to scale and distribute has taken pride of place for developers and DevOps teams. Developing and testing apps locally using MicroK8s should help teams deploy their builds faster.

Useful reading

The post Kubernetes on Mac: how to set up appeared first on Ubuntu Blog.

Read more
Karl Williams

We have just released Vanilla Framework 2.0, Canonical’s SCSS styling framework, and – despite our best efforts to minimise the impact – the new features come with changes that will not be automatically backwards compatible with sites built using previous versions of the framework.

To make the transition to v2.0 easier, we have compiled a list of the major breaking changes and their solutions (when upgrading from v1.8+). This list is outlined below. We recommend that you treat this as a checklist while migrating your projects.

1. Spacing variable mapping

If you’ve used any of the spacing variables (they can be recognised as variables that start with $spv or $sph) in your Sass then you will need to update them before you can build CSS. The simplest way to update these variables is to find/replace them using the substitution values listed in the Vanilla spacing variables table.

2. Grid

2.1 Viewport-specific column names

Old class     New class
mobile-col-*  col-small-*
tablet-col-*  col-medium-*

2.2 Columns must be direct descendants of rows

Ensure .col-* are direct descendants of .row; this has always been the intended use of the pattern but there were instances where the rule could be ignored. This is no longer the case.

Additionally, any .col-*s that are not direct descendants will just span the full width of their container as a fallback.

You can see an example of correcting improperly-nested column markup in this pull request.

2.3 Remove any Shelves grid functionality

The framework no longer includes Shelves; classes starting with prefix-, suffix-, push- and pull- are no longer supported. Arbitrary positioning on our new grid is instead achieved by stating an arbitrary starting column using col-start- classes.

For example: if you want an eight-column container starting at the fourth column in the grid you would use the classes col-8 col-start-4.

You can read full documentation and an example in the Empty Columns documentation.

2.4 Fixed width containers with no columns

Previously, a .row with a single col-12, or a col-12 on its own, may have been used to display content in a fixed-width container. The nested solution adds unnecessary markup, and a lone col-12 will no longer work.

A simple way to make an element fixed width is to use the utility .u-fixed-width, which does not need columns.

2.5 Canonical global nav

If your website makes use of the Canonical global navigation module (if so, hello colleague or community member!), ensure that the global nav width matches the new fixed width (72rem by default). A typical implementation would look like the following HTML:

<script src="/js/global-nav.js"></script> <script>canonicalGlobalNav.createNav({maxWidth: '72rem'});</script>

3. Renamed patterns

Some class names have been marked for renaming or the classes required to use them have been minimised.

3.1 Stepped lists

We favour component names that sound natural in English to make the framework more intuitive. “list step” wasn’t a good name and didn’t explain its use very well, so we decided to rename it to the much more explicit “stepped list”.

In order to update the classes in your project, search and replace the following:

Old class name New class name
.p-list-step .p-stepped-list
.p-list-step__item .p-stepped-list__item
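
As a sketch, a rename like this can be applied with a one-off `sed` pass (use version control or a backup first; the file and markup below are purely illustrative). Note that the longer class name must be substituted first, so that `p-list-step` inside `p-list-step__item` is not rewritten prematurely:

```shell
# Work in a scratch directory; the file contents are illustrative only.
cd "$(mktemp -d)"
printf '<ol class="p-list-step"><li class="p-list-step__item">Step one</li></ol>\n' > snippet.html

# Substitute the longer class name first, then the shorter one.
sed -i 's/p-list-step__item/p-stepped-list__item/g; s/p-list-step/p-stepped-list/g' snippet.html
cat snippet.html
```

In a real project you would run the `sed` expression over your template and Sass files rather than a single sample file, and review the diff before committing.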

3.2 Code snippet

“Code snippet” was an ambiguous name so we have renamed it to “code copyable” to indicate the major difference between it and other code variants.

Change the classes in your code to the following:

Old class name New class name
.p-code-snippet .p-code-copyable

If you’ve extended the mixin then you’ll need to update the mixin name as follows:

Old mixin name New mixin name
vf-p-code-snippet vf-p-code-copyable

3.3 Tooltips

The p-tooltips class remains the same, but you no longer need two classes to use the pattern because the modified tooltips now include all required styling. Markup can be simplified as follows (this is the same for all tooltip variants):

Old markup New markup
<button class="p-tooltip p-tooltip--btm-center" …> <button class="p-tooltip--btm-center" …>

4. Breakpoints

Media queries have changed due to a working group proposal. Ensure any local media queries are aligned with the new ones. Also, update any hard-coded media queries (e.g. in the markup) to the new values. The new values can be found in the docs.

5. Deprecated components

5.1 Footer (p-footer)

This component has been removed entirely, with no direct replacement. Footers can be constructed with standard p-strip and row markup.

5.2 Button switch (button.p-switch)

Buttons shouldn’t be used with the p-switch component and are no longer supported. Use the much more semantic checkbox input instead.

5.3 Hidden, visible (u-hidden, u-visible)

These have been deprecated; the single visibility-controlling class is now u-hide (and its more specific variants – see the u-hide documentation).

5.4 Warning notification (p-notification--warning)

This name for the component has been deprecated; it is now only available as p-notification--caution.

5.5 p-link--no-underline

This was an obsolete modifier for a removed version of underlined links that used borders.

5.6 Strong link (p-link--strong)

This has been removed with no replacement as it was an under-utilised component with no clear usage guidelines.

5.7 Inline image variants

The variants p-inline-images img and p-inline-images__img have been removed and the generic implementation now supports all requirements.

5.8 Sidebar navigation (p-navigation--sidebar)

For navigation, the core p-navigation component is recommended. If sidebar-like functionality is still required, then it can be constructed with the default grid components.

5.9 Float utilities (u-float-*)

To bring them in line with the naming conventions used elsewhere in the framework, u-float--right and u-float--left are now u-float-right and u-float-left (one “-” is removed to make them first-level utilities rather than modifiers; this allows for screen-size modifiers later).

6. (Optional) Do not wrap buttons / anchors in paragraphs.

We have CSS rules in place to ensure that wrapped buttons behave as if they weren’t wrapped, and we would like to remove these rules as they are unnecessary bloat. In order to do that, we need to ensure buttons are no longer wrapped. This back-support is likely to be deprecated in future versions of the framework.

7. Update stacked forms to use the grid (p-form--stacked).

The hard-coded widths (25%/75%) on labels and inputs have been removed. This will break any layouts that use them, so please wrap the form elements in .row>.col-*.

8. Replace references to $px variable

$px used to stand for 1px expressed as a rem value. This was used, for example, to calculate padding so that the padding plus the border equals a round rem value. This could no longer work once we introduced the font size increase at 1680px, because the value of 1rem changes from 16px to 18px.

Replace instances of this variable with calc({rem value} +/- 1px).

This means you need to replace e.g.:

Before: padding: $spv-nudge - $px $sph-intra--condensed * 1.5 - ($px * 2);

After: padding: calc(#{$spv-nudge} - 1px) calc(#{$sph-intra--condensed * 1.5} - 2px);

9. Replace references to $color-warning with $color-caution

This was a backwards compatibility affordance that had been deprecated for the last few releases.

10. (Optional) Try to keep text elements as siblings in the same container

Unless you really need to wrap in different containers, e.g. (emmet notation) div>h4+div>p, use div>h4+p. This way the page will benefit from the careful adjustments of white space using <element>+<element> css rules, i.e. p+p, p+h4 etc. Ignoring this won’t break the page, but the spacing between text elements will not be ideal.

11. $font-base-size is now a map of sizes

To allow for multiple base font sizes for different screen sizes, we have turned the $font-base-size variable into a map.

A quick fix to continue using the deprecated variable locally would be:

$font-base-size: map-get($base-font-sizes, base);

But, for a more future-proof version, you should understand and use the new map.

By default, the font-size of the root element (<html>) increases on screens larger than the value of $breakpoint-x-large. This means it can no longer be represented as a single variable. Instead, we use a map to store the two font sizes ($base-font-sizes). If you are using $font-base-size in your code, replace as needed with a value from the $base-font-sizes map.

That’s it

Following the previous steps should leave your project using the latest features of Vanilla 2.0. There may be more work updating your local code and removing any temporary local fixes for issues we have fixed since the last release, but this will vary from project to project.

If you still need help, then please contact us on Twitter or refer to the full Vanilla Framework documentation.

If, in the process of using Vanilla, you find bugs, then please report them as issues on GitHub, where we also welcome pull request submissions from the community if you want to suggest a fix.

The post Vanilla Framework 2.0 upgrade guide appeared first on Ubuntu Blog.

Read more
Igor Ljubuncic

In Linux, testing software is both easy and difficult at the same time. While the repository channels offer great availability of software, you can typically only install a single instance of an application. If you want to test multiple instances, you will most likely need to configure the rest yourself. With snaps, this is a fairly simple task.

From version 2.36 onwards, snapd supports parallel install – a capability that lets you have multiple instances of the same snap available on your system, each isolated from the others, with its own configurations, interfaces, services, and more. Let’s see how this is done.

Experimental features & unique identifier

The first step is to turn on a special flag that lets snapd manage parallel installs:

snap set system experimental.parallel-instances=true

Once this step is done, you can proceed to installing software. Now, the actual setup may appear slightly counter-intuitive, because you need to append a unique identifier to each snap instance name to distinguish them from the other(s). The identifier is an alphanumeric string, up to 10 characters in length, and it is added as a suffix to the snap name. This is a manual step, and you can choose anything you like for the identifier. For example, if you want to install GIMP with your own identifier, you can do something like:

snap install gimp_first

Technically, gimp_first does not exist as a snap, but snapd will be able to interpret the format of “snap name” “underscore” “unique identifier” and install the right software as a separate instance.

You have quite a bit of freedom choosing how you use this feature. You can install them each individually or indeed in parallel, e.g. snap install gimp_1 gimp_2 gimp_3. You can install a production version (e.g. snap install vlc) and then use unique identifiers for your test installs only. In fact, this may be the preferred way, as you will be able to clearly tell your different instances apart.
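
Since the identifier is just an alphanumeric suffix of up to 10 characters, a small shell loop can generate and sanity-check instance names before installing. A sketch only: the identifiers are hypothetical, the regex is my reading of the format described above, and the `echo` stands in for the real `snap install`:

```shell
# Check each hypothetical identifier against the documented format
# (alphanumeric, 1-10 characters) before building the instance name.
for id in first second third; do
  if echo "$id" | grep -Eq '^[a-z0-9]{1,10}$'; then
    echo "snap install gimp_${id}"
  else
    echo "invalid identifier: ${id}" >&2
  fi
done
```

Replacing the `echo` with the actual `snap install` command would install one isolated GIMP instance per identifier.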

Testing 1 2 3

You can try parallel installs with any snap in the store. For example, let’s set up two instances of odio. Snapd will only download the snap package once, and then configure the two requested instances separately.

snap install odio_first odio_second
odio_second 1 from Canonical✓ installed
odio_first 1 from Canonical✓ installed

From here on, you can manage each instance in its own right. You can remove each one using its full name (including the identifier), connect and disconnect interfaces, start and stop services, create aliases, and more. For instance:

snap remove odio_second
odio_second removed

Different instances, different versions

It gets better. Not only can you have multiple instances, you can also manage the release channel of each instance separately. So if you want to test different versions – which can really be helpful if you want to learn (and prepare for) what new editions of an application bring – you can do this in parallel to your production setup, without requiring additional hardware, operating system instances, or users, and without having to worry about potentially harming your environment.

snap info vlc
name:      vlc
summary:   The ultimate media player

stable:    3.0.7                      2019-06-07 (1049) 212MB -
candidate: 3.0.7                      2019-06-07 (1049) 212MB -
beta:       2019-06-18 (1071) 212MB -
edge:      4.0.0-dev-8388-gb425adb06c 2019-06-18 (1070) 329MB -

VLC is a good example, with stable version 3.0.7 available, and preview version 4.0.0 in the edge channel. If you already have multiple instances installed, you can just refresh one of them, e.g. the aptly named vlc_edge instance:

snap refresh --edge vlc_edge

Or you can even directly install a different version as a separate instance:

snap install --candidate vlc_second
vlc_second (candidate) 3.0.7 from VideoLAN✓ installed

You can check your installed instances, and you will see the whole gamut of versions:

snap list | grep vlc
vlc         3.0.7          1049  stable     videolan*  -
vlc_edge    4.0.0-dev-...  1070  edge       videolan*  -
vlc_second  3.0.7          1049  candidate  videolan*  -

When parallel lines touch

For all practical purposes, these will be individual applications with their own home directory and data. In a way, this is quite convenient, but it can be problematic if your snaps require exclusive access to system resources, like sockets or ports. If you have a snap that runs a service, only one instance will be able to bind to a predefined port, while the others will fail.

On the other hand, this is quite useful for testing the server-client model, or how different applications inside the snap work with one another. The namespace collisions, as well as methods to share data using common directories, are described in detail in the documentation. Parallel installs do offer a great deal of flexibility, but it is important to remember that most applications are designed to run individually on a system.


The value proposition of self-contained applications like snaps has been debated in online circles for years now, revolving around various pros and cons compared to installations from traditional repository channels. In many cases, there’s no clear-cut answer; however, parallel installs do offer snaps a distinct, unparalleled [sic] advantage: the ability to run multiple instances, and multiple versions, of your applications in a safe, isolated manner.

At the moment, parallel installs are experimental, best suited for users comfortable with software testing. But the functionality does open a range of interesting possibilities, as it allows early access to new tools and features, while at the same time, you can continue using your production setup without any risk. If you have any comments or suggestions, please join our forum for a discussion.

Photo by Kholodnitskiy Maksim on Unsplash.

The post Parallel installs – test and run multiple instances of snaps appeared first on Ubuntu Blog.

Read more

Canonical widens Kubernetes support with kubeadm

Canonical announces full enterprise support for Kubernetes 1.15 using kubeadm deployments, its Charmed Kubernetes, and MicroK8s, the popular single-node deployment of Kubernetes.

The MicroK8s community continues to grow and contribute enhancements, with Knative and RBAC support now available through the simple microk8s.enable command. Knative is a great way to experiment with serverless computing, and now you can experiment locally through MicroK8s. With MicroK8s 1.15 you can develop and deploy Kubernetes 1.15 on any Linux desktop, server or VM across 40 Linux distros. Mac and Windows are supported too, with Multipass.

Existing Charmed Kubernetes users can upgrade smoothly to Kubernetes 1.15, regardless of the underlying hardware or machine virtualisation. Supported deployment targets include AWS, GCE, Azure, Oracle, VMware, OpenStack, LXD, and bare metal.

“Kubernetes 1.15 includes exciting new enhancements in application, custom resource, storage, and network management. These features enable better quota management, allow custom resources to behave more like core resources, and offer performance enhancements in networking. The Ubuntu ecosystem benefits from the latest features of Kubernetes, as soon as they become available upstream“ commented Carmine Rimi, Kubernetes Product Manager at Canonical.

What’s new in Kubernetes 1.15

Notable upstream Kubernetes 1.15 features:

  • Storage enhancements:
    • Quotas for ephemeral storage: (alpha) Quotas utilises filesystem project quotas to provide monitoring of resource consumption and optionally enforcement of limits. Project quotas, initially in XFS and more recently ported to ext4fs, offer a kernel-based means of monitoring and restricting filesystem consumption. This improves performance of monitoring storage utilisation of ephemeral volumes.
    • Extend data sources for persistent volume claims (PVC): (alpha) You can now specify an existing PVC as a DataSource parameter for creating a new PVC. This results in a clone – a duplicate – of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The back-end device creates an exact duplicate of the specified Volume. Clones and Snapshots are different: a clone results in a new, duplicate volume being provisioned from an existing volume – it counts against the user’s volume quota, it follows the same create flow and validation checks as any other volume provisioning request, and it has the same lifecycle and workflow. A snapshot, on the other hand, results in a point-in-time copy of a volume that is not, itself, a usable volume.
    • Dynamic persistent volume (PV) resizing: (beta) This enhancement allows PVs to be resized without having to terminate pods and unmount the volume first.
  • Networking enhancements:
    • NodeLocal DNSCache: (beta) This enhancement improves DNS performance by running a dns caching agent on cluster nodes as a Daemonset. With this new architecture, pods reach out to the dns caching agent running on the same node, thereby avoiding unnecessary networking rules and connection tracking.
    • Finaliser protection for service load balancers: (alpha) Adding finaliser protection to ensure the Service resource is not fully deleted until the correlating LB is also deleted.
    • AWS Network Load Balancer Support: (beta) AWS users now have more choices for AWS load balancer configuration. This includes the Network Load Balancer, which offers extreme performance and static IPs for applications.
  • Node and Scheduler enhancements:
    • Device monitoring plugin support: (beta) Monitoring agents provide insight to the outside world about containers running on the node – this includes, but is not limited to, container logging exporters, container monitoring agents, and device monitoring plugins. This enhancement gives device vendors the ability to add metadata to a container’s metrics or logs so that it can be filtered and aggregated by namespace, pod, container, etc.  
    • Non-preemptive priority classes: (alpha) This feature adds a new option to PriorityClasses, which can enable or disable pod preemption. PriorityClasses impact the scheduling and eviction of pods – pods are scheduled according to descending priority; if a pod cannot be scheduled due to insufficient resources, lower-priority pods will be preempted to make room. Allowing PriorityClasses to be non-preempting is important for running batch workloads – pods with partially-completed work won’t be preempted, allowing them to finish.
    • Scheduling framework extension points: (alpha) The scheduling framework extension points allow many existing and future features of the scheduler to be written as plugins. Plugins are compiled into the scheduler, and these APIs allow many scheduling features to be implemented as plugins, while keeping the scheduling ‘core’ simple and maintainable.
  • Custom Resource Definitions (CRD) enhancements:
    • OpenAPI 3.0 Validation: Major changes introduced to schema validation with the addition of OpenAPI 3.0 validation.
    • Watch bookmark support: (alpha) The Watch API is one of the fundamentals of the Kubernetes API. This feature introduces a new type of watch event called Bookmark, which serves as a checkpoint of all objects, up to a given resourceVersion, that have been processed for a given watcher. This makes restarting watches cheaper and reduces the load on the apiserver by minimising the amount of unnecessary watch events that need to be processed after restarting a watch.
    • Defaulting: (alpha) This feature adds support for defaulting in Custom Resources. Defaulting is a fundamental step in the processing of API objects in the request pipeline of the kube-apiserver. Defaulting happens during deserialisation, after decoding of a versioned object, but before conversion to a hub type. This feature adds support for specifying default values for fields via OpenAPI v3 validation schemas in the CRD manifest. OpenAPI v3 has native support for a default field with arbitrary JSON values.
    • Pruning: (alpha) Custom Resources store arbitrary JSON data without following the typical Kubernetes API behaviour to prune unknown fields. This makes CRDs different, but also leads to security and general data consistency concerns because it is unclear what is actually stored. The pruning feature will prune all fields which are not specified in the OpenAPI validation schemas given in the CRD.
    • Admission webhook: (beta) Major changes were introduced. The admission webhook feature now supports both mutating webhook and validation (non-mutating) webhook. The dynamic registration API of webhook and the admission API are promoted to v1beta1.
  • For more information, please see the upstream Kubernetes 1.15 release notes.
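As an illustration of one of the features above, here is a sketch of what a non-preempting PriorityClass manifest might look like. This is an assumption-laden example, not taken from the release notes: the field names follow the upstream scheduling API, and applying it would require a 1.15 cluster with the relevant alpha feature gate enabled.

```shell
# Sketch: a PriorityClass that orders batch pods by priority but never
# preempts running pods (alpha in Kubernetes 1.15, behind a feature gate).
manifest='apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: batch-low
value: 1000
preemptionPolicy: Never
globalDefault: false
description: Non-preempting priority for batch workloads'

printf '%s\n' "$manifest"
# To create it on a suitably configured cluster:
#   printf '%s\n' "$manifest" | kubectl apply -f -
```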

Notable MicroK8s 1.15 features:

  • Pure upstream Kubernetes 1.15 binaries.
  • Knative addon, try it with “microk8s.enable knative”. Thank you @olatheander.
  • RBAC support via a simple “microk8s.enable rbac”, courtesy of @magne.
  • Update of the dashboard to 1.10.1 and fixes for RBAC. Thank you @balchua.
  • CoreDNS is now the default. Thanks @richardcase for driving this.
  • Ingress updated to 0.24.1 by @JorritSalverda, thank you.
  • Fix on socat failing on Fedora by @JimPatterson, thanks.
  • Modifiable csr server certificate, courtesy of @balchua.
  • Use of iptables kubeproxy mode by default.
  • Instructions on how to run Cilium on MicroK8s by @joestringer.

For complete details, along with installation instructions, see the MicroK8s 1.15 release notes and documentation.

Notable Charmed Kubernetes 1.15 features:

  • Pure upstream Kubernetes 1.15 binaries.
  • Containerd support: The default container runtime in Charmed Kubernetes 1.15 is containerd. Docker is still supported, and an upgrade path is provided for existing clusters wishing to migrate to containerd. Both runtimes can be used within a single cluster if desired. Container runtimes are now subordinate charms, paving the way for additional runtimes to be added in the future.
  • Calico 3 support: The Calico and Canal charms have been updated to install Calico 3.6.1 by default. For users currently running Calico 2.x, the next time you upgrade your Calico or Canal charm, the charm will automatically upgrade to Calico 3.6.1 with no user intervention required.
  • Calico BGP support: Several new config options have been added to the Calico charm to support BGP functionality within Calico. These additions make it possible to configure external BGP peers, route reflectors, and multiple IP pools.
  • Custom load balancer addresses: Support has been added to the kubeapi-load-balancer and kubernetes-master charms for specifying the IP address of an external load balancer. This allows the use of a virtual IP address on the kubeapi-load-balancer charm, or the IP address of an external load balancer.
  • Private container registry enhancements: A generic images-registry configuration option has been added that will be honoured by all Kubernetes charms (core charms and add-ons), so that users can specify a private registry in one place and have all images in a Kubernetes deployment come from that registry.
  • Keystone with CA Certificate support: Kubernetes integration with keystone now supports the use of user supplied CA certificates and can support https connections to keystone.
  • Graylog updated to version 3, which includes major updates to alerts, content packs, and pipeline rules. Sidecar has been re-architected so you can now use it with any log collector.
  • ElasticSearch updated to version 6. This version includes new features and enhancements to aggregations, allocation, analysis, mappings, search, and the task manager.

For complete details, see the Charmed Kubernetes 1.15 release notes and documentation.

Contact us

If you’re interested in Kubernetes support, consulting, or training, please get in touch!

About Charmed Kubernetes

Canonical’s certified, multi-cloud Charmed Kubernetes installs pure upstream binaries, and offers simplified deployment, scaling, management, and upgrades of Kubernetes, regardless of the underlying hardware or machine virtualisation. Supported deployment environments include AWS, GCE, Azure, VMware, OpenStack, LXD, and bare metal.

Charmed Kubernetes integrates tightly with underlying cloud services and hardware – enabling GPGPUs automatically and leveraging cloud-specific services like AWS, Azure and GCE load balancers and storage. Charmed Kubernetes allows independent placement and scaling of components such as etcd or the Kubernetes master, providing an HA or minimal configuration, and built-in, automated, on-demand upgrades from one version to the next.

Enterprise support for Charmed Kubernetes by Canonical provides customers with a highly available, multi-cloud, flexible and secure platform for their cloud-native workloads and enjoys wide adoption across enterprise, particularly in the telco, financial and retail sectors.

The post Kubernetes 1.15 now available from Canonical appeared first on Ubuntu Blog.

Read more

MicroK8s is a solution for teams wanting to deploy Kubernetes on Windows for development and testing purposes. Below we include steps for a quick set-up on Windows.


MicroK8s, a Linux snap, is Ubuntu’s lightweight, CNCF-certified local distribution of Kubernetes that installs in 30 seconds or less. It runs all Kubernetes services natively on Ubuntu, or on any Linux operating system (OS) that supports snaps, and deploys a single cluster on a local PC. This gives teams the flexibility to test microservices on a small scale, develop and train machine learning models locally, or embed upgradeable Kubernetes in IoT devices for easy evolution.

While MicroK8s automates the typical functions of Kubernetes locally, such as scheduling, scaling and debugging, it also adds another layer of reliability because it provides the latest Kubernetes for development. The latest upstream version of Kubernetes is always available on Ubuntu within one week of official release.

Kubernetes on Windows works by setting up a Kubernetes cluster in an Ubuntu VM.

With this in mind, MicroK8s and Kubernetes both need a Linux kernel to operate and require an Ubuntu VM, which can be created using Multipass. Multipass is the tool that instantly launches and manages Ubuntu VMs on Windows, macOS and Linux. The VM provides another layer of security, isolating the Kubernetes instance from the outside world.

Kubernetes on Windows set up steps 

What follows here are the steps to set up Multipass, interact with MicroK8s on the VM and how to add-on DNS to view the MicroK8s dashboard.

Note that there are a few requirements for running Multipass on Windows 10 Enterprise or Pro with Hyper-V enabled on a trusted network, as discussed here.

Step 1: Set up a VM for Windows using Multipass

To start a VM with MicroK8s run: 

multipass launch --name microk8s-vm --mem 4G --disk 40G

multipass exec microk8s-vm -- sudo snap install microk8s --classic

multipass exec microk8s-vm -- sudo iptables -P FORWARD ACCEPT

Ensure sufficient resources are available to host these deployments. Above we’ve created a VM named microk8s-vm and given it 4GB of RAM and 40GB of disk.

Our VM has an IP that can be checked with the following (take note of this IP, since our services will become available there):

multipass list

Name           State IPv4            Release

microk8s-vm    RUNNING    Ubuntu 18.04 LTS
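If you want the IP programmatically, it can be parsed out of the multipass list output. The helper below is a sketch of our own, assuming the default four-column layout (Name, State, IPv4, Release):

```shell
# Hypothetical helper: read `multipass list` output on stdin and print
# the IPv4 column for the named VM (assumes the default column layout).
get_vm_ip() {
  awk -v vm="$1" '$1 == vm { print $3 }'
}

# Usage:
#   multipass list | get_vm_ip microk8s-vm
```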

Step 2: Interact with MicroK8s on the VM

There are three ways to interact with Multipass in a VM.

  • Using a Multipass shell prompt (command line) by running:
multipass shell microk8s-vm
  • Using multipass exec to execute a command without a shell prompt by inputting:
multipass exec microk8s-vm -- /snap/bin/microk8s.status
  • Using the Kubernetes API server running in the VM. Here one would use the MicroK8s kubeconfig file with a local installation of kubectl to access the in-VM Kubernetes. Do this by running:
multipass exec microk8s-vm -- /snap/bin/microk8s.config > kubeconfig

Next install kubectl on the host machine and then use the kubeconfig:

kubectl --kubeconfig=kubeconfig get all --all-namespaces
default  service/kubernetes   ClusterIP   <none> 443/TCP 3m12s

Step 3: Access in-VM Multipass services - enabling MicroK8s add-ons

A basic MicroK8s add-on to set up is the Grafana dashboard. Below we show one way of accessing Grafana to monitor and analyse a MicroK8s instance. To do this execute:

multipass exec microk8s-vm -- /snap/bin/microk8s.enable dns dashboard
Enabling DNS
Applying manifest
service/kube-dns created
serviceaccount/kube-dns created
configmap/kube-dns created
deployment.extensions/kube-dns created
Restarting kubelet
DNS is enabled
Enabling dashboard
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
service/monitoring-grafana created
service/monitoring-influxdb created
service/heapster created
deployment.extensions/monitoring-influxdb-grafana-v4 created
serviceaccount/heapster created
configmap/heapster-config created
configmap/eventer-config created
deployment.extensions/heapster-v1.5.2 created
dashboard enabled

Next, check the deployment progress by running:

multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get all --all-namespaces

Which should return output similar to:

Once all services are running, access the dashboard. The image below shows the Grafana of our dashboard.

Kubernetes on Windows summary

MicroK8s gives teams the opportunity to test out their work before going public, automating the standard tasks of Kubernetes, while adding an extra layer of reliability.

Useful reading:

The post Kubernetes on Windows: how to set up appeared first on Ubuntu Blog.

Read more
Alan Pope

A little later than usual, we’ve collected some of the snaps which crossed our “desk” (Twitter feed) during May 2019. Once again, we bring you a suite of diverse applications published in the Snap Store. Take a look down the list, and discover something new today.

Cloudmonkey (cmk) is a CLI and interactive shell that simplifies configuration and management of Apache CloudStack, the opensource IAAS cloud computing platform.

snap install cloudmonkey

Got a potato gaming computer? You can still ‘game’ on #linux with Vitetris right in your terminal! Featuring configurable keys, high-score table, multi (2) player mode and joystick support! Get your Pentomino on today!

snap install vitetris

If you’re seeking a comprehensive and easy to use #MQTT client for #Linux then look no further than MQTT Explorer.

snap install mqtt-explorer

Azimuth is a metroidvania game with vector graphics, inspired by games such as the Metroid series (particularly Super Metroid and Metroid Fusion), SketchFighter 4000 Alpha, and Star Control II (a.k.a. The Ur-Quan Masters).

snap install azimuth

Familiar with Excel? Then you’ll love Banana Accounting’s intuitive, all-in-one spreadsheet-inspired features. Upgrade your bookkeeping with brilliant planning & fast invoicing. Journals, balance sheets, profit & loss, liquidity, VAT, and more!

snap install banana-accounting

Remember ICQ? We do! The latest release of the popular chat application is available for #Linux as an official snap! Dust off your ID and password, and get chatting like it’s 1996!

snap install icq-im

Déjà Dup hides the complexity of backing up your system. Déjà Dup is a handy tool, with support for local, remote and cloud locations, scheduling and desktop integration.

snap install deja-dup

KTorrent is a fast, configurable BitTorrent client, with UDP tracker support, protocol encryption, IP filtering, DHT, and more. Now available as a snap.

snap install ktorrent

Fast is a minimal zero-dependency utility for testing your internet download speed from the terminal.

snap install fast

No matter which Linux distribution you use, browse the Snap Store directly from your desktop. View details about snaps, read and write reviews and manage snap permissions in one place.

snap install snap-store

Krita is the full-featured digital art studio. The latest release is now available in the Snap Store.

snap install krita

From Silicon Graphics machines to modern-day PCs, BZFlag has seen them all. This 3D online multiplayer tank game blends nostalgia with fun. Now available as a snap.

snap install bzflag

That’s all for this month. Keep up to date with Snapcraft on Twitter for more updates! Also, join the community over on the Snapcraft forum to discuss anything you’ve seen here.

Header image by Alex Basov on Unsplash

The post Fresh snaps for May 2019 appeared first on Ubuntu Blog.

Read more


Niryo has built a fantastic 6-axis robotic arm called ‘Niryo One’. It is a 3D-printed, affordable robotic arm focused mainly on educational purposes. Additionally, it is fully open source and based on ROS. On the hardware side, it is powered by a Raspberry Pi 3 and NiryoStepper motors, based on Arduino microcontrollers. When we found out all this, guess what we thought? This is a perfect target for Ubuntu Core and snaps!

When the robotic arm came into my hands, the first thing I did was play with Niryo Studio, a tool from Niryo that lets you move the robotic arm, teach it sequences and store them, and do many more things. You can program the robotic arm with Python or with a graphical editor based on Google’s Blockly. Niryo Studio is a great tool that makes starting out in robotics easy and pleasant.

Niryo Studio for Ubuntu
Niryo Studio

After this, I started the task of creating a snap with the ROS stack that controls the robotic arm. Snapcraft supports ROS, so this was not a difficult task: the catkin plugin takes care of almost everything. However, as happens with any non-trivial project, the Niryo stack had peculiarities that I had to address:

  • It uses a library called WiringPi which needs an additional part in the snap recipe.
  • GCC crashed when compiling on the RPi3, due to the device running out of memory. This is an issue known by Niryo that can be solved by using only two cores when building (this can be done by using -j2 -l2 make options). Unfortunately we do not have that much control when using Snapcraft’s catkin plugin. However, Snapcraft is incredibly extensible so we can resort to creating a local plugin by copying around the catkin plugin shipped with Snapcraft and doing the needed modifications. That is what I did, and the catkin-niryo plugin I created added the -j2 -l2 options to the build command so I could avoid the GCC crash.
  • There were a bunch of hard coded paths that I had to change in the code. Also, I had to add some missing dependencies, and there are other minor code changes. The resulting patches can be found here.
  • I also had to copy around some configuration files inside the snap.
  • Finally, there is also a Node.js package that needs to be included in the build. The nodejs plugin worked like a charm and that was easily done.

After addressing all these challenges, I was able to build a snap in an RPi3 device. The resulting recipe can be found in the niryo_snap repo in GitHub, which includes the (simple) build instructions. I went forward and published it in the Snap Store with name abeato-niryo-one. Note that the snap is not confined at the moment, so it needs to be installed with the --devmode option.

Then, I downloaded an Ubuntu Core image for the RPi3 and flashed it to an SD card. Before inserting it in the robotic arm’s RPi3, I used it with another RPi3 to which I could attach a UART serial console, so I could run console-conf. With this tool I configured the network and the Ubuntu One user that would be used in the image. Note that the Niryo stack tries to configure a WiFi AP for easier initial configuration, but that is not yet supported by the snap, so the networking configuration from console-conf determines how we will be able to connect later to the robotic arm.

At this point, snapd will possibly refresh the kernel and core snaps. That will lead to a couple of system reboots, and once complete, those snaps will have been updated. After this, we need to modify some files from the first-stage bootloader, because Niryo One needs some changes in the default GPIO configuration so the RPi3 can control all the attached sensors and motors. First, edit /boot/uboot/cmdline.txt, remove console=ttyAMA0,115200, and add plymouth.ignore-serial-consoles, so the content is:

dwc_otg.lpm_enable=0 console=tty0 elevator=deadline rng_core.default_quality=700 plymouth.ignore-serial-consoles
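The same edit can be scripted. Below is a sketch of our own, assuming the single-line cmdline.txt layout shown above – back up the file first, and run it with appropriate privileges on the device:

```shell
# Sketch: remove the serial console argument and append the plymouth
# option to a (single-line) cmdline.txt passed as the first argument.
fix_cmdline() {
  sed -i -e 's/ *console=ttyAMA0,115200//' \
         -e 's/$/ plymouth.ignore-serial-consoles/' "$1"
}

# On the device (as root):
#   fix_cmdline /boot/uboot/cmdline.txt
```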

Then, add the following lines at the end of /boot/uboot/config.txt:

# For niryo

Now, it is time to install the needed snaps and perform connections:

snap install network-manager
snap install --devmode --beta abeato-niryo-one
snap connect abeato-niryo-one:network-manager network-manager

We have just installed and configured a full ROS stack with these simple commands!

Niryo robotic arm in action
The robotic arm in action

Finally, insert the SD card in the robotic arm, and wait until you see that the LED at the base turns green. After that you can connect to it using Niryo Studio in the usual way. You can now handle the robotic arm in the same way as when using the original image, but now with all the Ubuntu Core goodies: minimal footprint, atomic updates, confined applications, app store…

As an added bonus, the snap can also be installed on your x86 PC to use it in simulation mode. You just need to stop the service and start the simulation with:

snap install --devmode --beta abeato-niryo-one
snap stop --disable abeato-niryo-one
sudo abeato-niryo-one.simulation

Then, run Niryo Studio and connect to it. As simple as that – no need at all to add the ROS archive and manually install lots of deb packages.

And this is it – as you can see, moving from a ROS Debian-based project to Ubuntu Core and snaps is not difficult, and brings great gains. Easy updates, security first, 10 years of updates, and much more, are just a few keystrokes away!

The post Your first robotic arm with Ubuntu Core, coming from Niryo appeared first on Ubuntu Blog.

Read more
Karl Waghorn-Moyce

Over the past year, we’ve been working hard to bring you the next release of Vanilla framework: version 2.0, our most stable release to date.

Since our last significant release, v1.8.0 back in July last year, we’ve added new features, improved the framework, and made it the most stable version we’ve released.

You can see the full list of new and updated changes in the framework in the release notes.

New to the Framework


The release has too many changes to list them all here, but we’ve outlined the high-level changes below.

The first major change was removing the Shelves grid, which has been in the framework since the beginning, and reimplementing the functionality with CSS grid. A native CSS solution has given us more flexibility with layouts. While working on the grid, we also upped the grid max-width base value from 990px to 1200px, following trends in screen sizes and resolutions.

We revisited vertical spacing with a complete overhaul of what we implemented in our previous release. Now, most element combinations correctly fit the baseline vertical grid without the need to write custom styles.

To further enforce code quality and control, we added a Prettier dependency with a pre-commit hook, which led to extensive code quality updates after running it for the first time. And with regard to dependencies, we’ve added Renovate to the project to help keep dependencies up to date.

If you would like to see the full list of features you can look at our release notes, but below we’ve captured quick wins and big changes to Vanilla.

  • Added a script for developers to analyse individual patterns with Parker
  • Updated the max-width of typographic elements
  • Broke up the large _typography.scss file into smaller files
  • Standardised the naming of spacing variables to use intuitive (small/medium/large) naming where possible
  • Increased the allowed number of media queries in the project to 50 in the parker configuration
  • Adjusted the base font size so that it respects browser accessibility settings
  • Refactored all *.scss files to remove sass nesting when it was just being used to build class names – files are now flatter and have full class names in more places, making the searching of code more intuitive

Components and utilities

Two new components have been added to Vanilla in this release: `p-subnav` and `p-pagination`. We’ve also added a new `u-no-print` utility to exclude web-only elements from printed pages.

Two new components in the framework, sub-navigation and page pagination.
New components to the framework: Sub navigation and Pagination.

Removed deprecated components

As we extend the framework, we find that some of our older patterns are no longer needed or are used very infrequently. In order to keep the framework simple and to reduce the file size of the generated CSS, we try to remove unneeded components when we can. As core patterns improve, it’s often the case that overly-specific components can be built using more flexible base components.

  • p-link--strong: this was a mostly-unused link variant which added significant maintenance overhead for little gain
  • p-footer: this component wasn’t flexible enough for all needs and its layout is achievable with the much more flexible Vanilla grid
  • p-navigation--sidebar: this was not widely used and can be easily replicated with other components

Documentation updates


During this cycle, we improved the content structure per component: each page now has a template with a hierarchy and grouping of component styles, do’s and don’ts of usage, and accessibility rules. In doing so, we also updated the examples to showcase real use cases from across our marketing sites and web applications.

Updated colours page on our documentation site, including accessibility rules.

As well as updating content structure across all component pages, we also made other minor changes to the site listed below:

  • Added new documentation for the updated typographic spacing
  • Documented pull-quote variants
  • Merged all “code” component documentation to allow easier comparison
  • Changed the layout of the icons page


In addition to framework and documentation content, we also made time for some updates to the documentation site itself. Below is a list of high-level items we completed to help users navigate when visiting our site:

  • Updated the navigation to match the rest of the website
  • Added Usabilla user feedback widget
  • Updated the “Report a bug” link
  • Updated mobile nav to use two dropdown menus grouped by “About” and “Patterns” rather than having two nav menus stacked
  • Restyled the sidebar and the background to light grey

Bug fixes

As well as bringing lots of new features and enhancements, we continue to fix bugs to keep the framework up-to-date. Going forward we plan to improve our release process by pushing out more frequent patch releases, to help teams with bugs that may be blocking feature deliverables.

Graph showing project work items over the past 6 month cycle.

Getting Vanilla framework

To get your hands on the latest release, follow the getting started instructions, which include all options for using Vanilla.

The post New release: Vanilla framework 2.0 appeared first on Ubuntu Blog.

Read more

Drones, and their wide-ranging uses, have been a constant topic of conversation for some years now, but we’re only just beginning to move away from the hypothetical and into reality. The FAA estimates that there will be 2 million drones in the United States alone in 2019, as adoption within the likes of distribution, construction, healthcare and other industries accelerates.

Driven by this demand, Ubuntu – the most popular Linux operating system for the Internet of Things (IoT) – is now available on the Manifold 2, a high-performance embedded computer offered by leading drone manufacturer, DJI. The Manifold 2 is designed to fit seamlessly onto DJI’s drone platforms via the onboard SDK and enables developers to transform aerial platforms into truly smarter drones, performing complex computing tasks and advanced image processing, which in turn creates rapid flexibility for enterprise usage.

As part of the offering, the Manifold 2 is planned to feature snaps. Snaps are containerised software packages, designed to work perfectly across cloud, desktop, and IoT devices – this is the first instance of the technology’s availability on drones. The ability to add multiple snaps means a drone’s functionality can be altered, updated, and expanded over time. Depending on the desired use case, enterprises can ensure the form a drone is shipped in does not represent its final iteration or future worth.

Snaps also feature enhanced security and greater flexibility for developers. Drones can receive automatic updates in the field, which will become vital as enterprises begin to deploy large-scale fleets. Snaps also support roll back functionality in the event of failure, meaning developers can innovate with more confidence across this growing field.

Designed for developers, having the Manifold 2 pre-installed with Ubuntu means support for Linux, CUDA, OpenCV, and ROS. It is ideal for the research and development of professional applications, and can access flight data and perform intelligent control and data analysis. It can be easily mounted to the expansion bay of DJI’s Matrice 100, Matrice 200 Series V2 and Matrice 600, and is also compatible with the A3 and N3 flight controller.

DJI counts at least 230 people rescued with the help of a drone since 2013. As well as being used by emergency services, drones are helping to protect lives by eradicating the dangerous elements of certain occupations. Apellix is one such example; it supplies drones which run on Ubuntu to alleviate the need for humans to be at the forefront of work in elevated, hazardous environments, such as aircraft carriers and oil rigs.

Utilising the freedom brought by snaps, it is exciting to see how developers drive the drone industry forward. Software is allowing the industrial world to move from analog to digital, and mission-critical industries will continue to evolve based on its capabilities.

The post Customisable for the enterprise: the next-generation of drones appeared first on Ubuntu Blog.

Read more
Chad Smith

Hello Ubuntu Server

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list or visit the Ubuntu Server discourse hub for more discussion.

Spotlight: Bryce Harrington

Keeping with the theme of “bringing them back into the fold”, we are proud to announce that Bryce Harrington has rejoined Canonical on the Ubuntu Server team. In his former tenure at Canonical, he maintained the stack for Ubuntu and helped bridge us from the old ‘edit your own’ days, swatted GPU hang bugs on Intel, and contributed to Launchpad development.

Based in Oregon, Bryce has around 20 years of open source development experience. He created the Inkscape project, and he is currently a board member of the Foundation. He joins us most recently from Samsung Research America, where he was a Senior Open Source Developer and the release manager for the Cairo and Wayland projects. Bryce will be helping us tackle the development and maintenance of Ubuntu Server packages. We are thrilled to have his additional expertise to help spread the wealth of software and packaging improvements that help make Ubuntu great. When he’s not building software, he is building things in his woodworking shop.

Welcome (back) Bryce (bryce on Freenode)!


  • Allow identification of OpenStack by Asset Tag [Mark T. Voelker] (LP: #1669875)
  • Fix spelling error making ‘an Ubuntu’ consistent. [Brian Murray]
  • run-container: centos: comment out the repo mirrorlist [Paride Legovini]
  • netplan: update netplan key mappings for gratuitous-arp [Ryan Harper] (LP: #1827238)


  • vmtest: dont raise SkipTest in class definition [Ryan Harper]
  • vmtests: determine block name via dname when verifying volume groups [Ryan Harper]
  • vmtest: add Centos66/Centos70 FromBionic release and re-add tests [Ryan Harper]
  • block-discover: add cli/API for exporting existing storage to config [Ryan Harper]
  • vmtest: refactor test_network code for Eoan [Ryan Harper]
  • curthooks: disable daemons while reconfiguring mdadm [Michael Hudson-Doyle] (LP: #1829325)
  • mdadm: fix install to existing raid [Michael Hudson-Doyle] (LP: #1830157)

Contact the Ubuntu Server team

Bug Work and Triage

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Proposed Uploads to the Supported Releases

Please consider testing the following by enabling proposed, checking packages for update regressions, and making sure to mark affected bugs verified as fixed.

Total: 10

Uploads Released to the Supported Releases

Total: 26

Uploads to the Development Release

Total: 9

The post Ubuntu Server development summary – 11 June 2019 appeared first on Ubuntu Blog.

Read more

Ubuntu community

As open-source software, Ubuntu is designed to serve a community of users and innovators worldwide, ranging from enterprise IT pros to small-business users to hobbyists.

Ubuntu users have the opportunity to share experiences and contribute to the improvement of this platform and community. To encourage our wonderful community to continue learning, sharing and shaping Ubuntu, here are five helpful resources:

Ubuntu Tutorials

These tutorials provide step-by-step guides on using Ubuntu for different projects and tasks across a wide range of Linux tools and technologies.
Many of these tutorials are contributed and suggested by users, so this site also provides guidance on creating and requesting a tutorial on a topic you believe needs to be covered.

Ubuntu Community Hub

This community site for user discourse is relatively new and intended for people working at all levels of the stack on Ubuntu. The site is evolving, but currently includes discussion forums, announcements, QA and testing requests, feedback to the Ubuntu Community Council and more.

Ubuntu Community Help Wiki page

From installation to documentation of Ubuntu flavours such as Lubuntu and Kubuntu, this wiki page offers instructions and self-help to users comfortable doing it themselves. Learn some tips, tricks and hacks, and find links to Ubuntu official documentation as well as additional help resources.

Ubuntu Server Edition FAQ page

Its ease of use, ability to customise, and capacity to run on a wide range of hardware make Ubuntu the most popular server choice of the cloud age. This FAQ page provides answers on technical questions, maintenance, support and more to address any Ubuntu Server queries.

Ubuntu Documentation

If you are a user who relies extensively on Ubuntu documentation, perhaps you can lend a hand to the Documentation Team to help improve it by:

  • Submitting a bug: Sending in a bug report when you find mistakes.
  • Fixing a bug: Proposing a fix for an existing bug.
  • Creating new material: Adding to an existing topic or writing on a new topic.

These are just a few of the available resources and suggestions for getting involved in the Ubuntu community. For more, visit the Ubuntu Community Hub.

The post Get to know these 5 Ubuntu community resources appeared first on Ubuntu Blog.

Read more
Anthony Dillon

This was a fairly busy two weeks for the Web & design team at Canonical.  Here are some of the highlights of our completed work.


Web is the squad that develops and maintains most of the brochure websites across Canonical.

Integrating the blog into the main website

We have been working on integrating the blog into the main website, building new templates and an improved blog module that will serve pages more quickly.

Takeovers and engage pages

We built and launched a few new homepage takeovers and landing pages, including:

– Small Robot Company case study

– 451 Research webinar takeover and engage page

– Whitepaper takeover and engage page for Getting started with AI

– Northstar robotics case study engage page  

– Whitepaper about Active directory

Verifying checksum on download thank you pages

We have added steps to verify your Ubuntu download to the website. To see them, download Ubuntu and check the thank-you page.
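The flow those pages describe can be sketched with standard coreutils; the file and checksum names here are placeholders, not the real release artefacts:

```shell
# Stand-in for a downloaded release image (placeholder file name)
echo "example release contents" > ubuntu-example.iso

# The website publishes a SHA256SUMS file alongside the image;
# here we generate one locally for the sketch
sha256sum ubuntu-example.iso > SHA256SUMS

# After downloading, verify the file against the published checksums
sha256sum -c SHA256SUMS
# prints: ubuntu-example.iso: OK
```

On a real download you would fetch SHA256SUMS (and its GPG signature) from the release server rather than generating it locally.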

Mir Server homepage refresh

A new homepage hero section was designed and built for the Mir Server website.

The goal was to update that section with an image related to digital signage/kiosk and also to give a more branded look and feel by using our Canonical brand colours and Suru folds.


Brand squad champion a consistent look and feel across all media from web to social to print and logos.

Guide to using the company slide deck

The team have been working on storyboarding a video to guide people to use the new company slide deck correctly and highlight best practices for creating great slides.

Illustration and UI work

We have been working hard on breaking down our illustrations into multiple levels. We have identified three levels of illustration we use, and are in the process of gathering them from across all our websites and reproducing them in a consistent style.

Alongside this we have started to look at the UI icons we use in all our products with the intention of creating a single master set that will be used across all products to create a consistent user experience.

Marketing support

We created multiple documents for the Marketing team, including two whitepapers and three case studies for the Small Robot Company, Northstar and Yahoo Japan.

We also created an animated screen for the stand back wall at Kubecon in Barcelona.


The MAAS squad develop the UI for the MAAS project.

Renamed Pod to KVM in the MAAS UI

MAAS has been using the label “pod” for any KVM (libvirt) or RSD host – a label that is not an industry standard and can be confused with pods in Kubernetes. To avoid this, we went through the MAAS app, renamed all instances of “pod” to KVM, and separated the interfaces for KVM and RSD hosts.

Replaced Karma tests with Jest

The development team working on MAAS have been focusing on modernising areas of the application. This led to moving from the Karma test framework to Jest.

Absolute import paths to modules

Another area the development team would like to tackle is migrating from AngularJS to React. To decouple us from Angular, we moved to loading modules via absolute import paths rather than relative ones.
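As an illustration only – we don’t know the exact mechanism the MAAS team used – Jest projects commonly enable absolute imports with the `moduleDirectories` option in package.json:

```json
{
  "jest": {
    "moduleDirectories": ["node_modules", "src"]
  }
}
```

With `src` on the resolution path, a test can import `app/utils` instead of a brittle `../../app/utils`.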

KVM/RSD: In-table graphs for CPU, RAM and storage

MAAS CPU, RAM and Storage mini charts
MAAS usage tooltip
MAAS storage tooltip


The JAAS squad develops the UI for the JAAS store and Juju GUI  projects.

Design update for the JAAS site

We have worked on a design update for the site, which includes new colours and page backgrounds. The aim is to bring the website in line with recent updates carried out by the brand team.

Top of the new JAAS homepage

Design refresh of the top of the store page

We have also updated the design of the top section of the Store page, to make it clearer and more attractive, and again including new brand assets.

Top of jaas store page

UX review of the CLI between snap and juju

Our team has also carried out some research as a first step towards more closely aligning the CLI commands used by Juju and snaps. This will help make the experience of using our products more consistent.


The Vanilla squad design and maintain the design system and Vanilla framework library. They ensure a consistent style throughout web assets.  

Vanilla 2.0.0 release

Since our last major release, v1.8.0 back in July last year, we’ve been working hard to bring you new features, improve the framework and make it the most stable version of Vanilla yet. These changes have been released in v2.0.0.

Over the past two weeks, we’ve been running QA tests across our marketing sites and web applications using our pre-release 2.0.0-alpha version. During testing, we’ve been filing and fixing bugs against this version, and have pushed up a pre-release 2.0.0-beta version.

Vanilla framework v2.0.0 banner

We plan to launch Vanilla 2.0.0 today, once we have finalised our release notes and completed our upgrade document, which will guide users through upgrading their sites.

Look out for our press release posts on Vanilla 2.0.0 and our Upgrade document to go along with it.


The Snapcraft team work closely with the snap store team to develop and maintain the snap store website.

Install any snap on any platform

We’re pleased to announce we’ll be releasing distribution install pages for all snaps. They’re one-stop shops for any combination of snap and supported distro, and the combinations are endless. Not only do the pages give you that comfy at-home feeling when it comes to branding, they’re also pretty useful: if you’ve never installed a snap before, we provide some easy step-by-step instructions to get the snap running, and suggest some other snaps you might like.

Snap how to install VSC

The post Design and Web team summary – 10 June 2019 appeared first on Ubuntu Blog.

Read more
Colin Ian King

Over the past 9+ months I've been cleaning up stress-ng in preparation for a V0.10.00 release.   Stress-ng is a portable Linux/UNIX Swiss army knife of micro-benchmarking kernel stress tests.

The Ubuntu kernel team uses stress-ng for kernel regression testing in several ways:

  • Checking that the kernel does not crash when being stress tested
  • Performance (bogo-op throughput) regression checks
  • Power consumption regression checks
  • Core CPU Thermal regression checks
The wide range of micro benchmarks in stress-ng allow us to keep track of a range of metrics so that we can catch regressions.

I've tried to focus on several aspects of stress-ng over the last development cycle:
  • Improve per-stressor modularization. A lot of code has been moved from the core of stress-ng back into each stress test.
  • Clean up a lot of corner case bugs found when we've been testing stress-ng in production.  We exercise stress-ng on a lot of hardware and in various cloud instances, so we find occasional bugs in stress-ng.
  • Improve usability, for example, adding bash command completion.
  • Improve portability (various kernels, compilers and C libraries). It really does build and run on a *lot* of Linux/UNIX/POSIX systems.
  • Improve kernel test coverage.  Try to exercise more kernel core functionality and reach parts other tests don't yet reach.
Over the past several days I've been running various versions of stress-ng on a gcov enabled 5.0 kernel to measure kernel test coverage with stress-ng.  As shown below, the tool has been slowly gaining more core kernel coverage over time:

With the use of gcov + lcov, I can observe where stress-ng is not currently exercising the kernel and this allows me to devise stress tests to touch these un-exercised parts.  The tool has a history of tripping kernel bugs, so I'm quite pleased it has helped us to find corners of the kernel that needed improving.

This week I released V0.09.59 of stress-ng.  Apart from the usual set of clean-up changes and bug fixes, this new release incorporates bash command line completion to make it easier to use.  Once the 5.2 Linux kernel has been released and I'm satisfied that stress-ng covers new 5.2 features, I will probably release V0.10.00. This will be a major release milestone, now that stress-ng has realized most of my original design goals.

Read more
Sarah Dickinson

In Europe, the cost of running a cereal farm – cultivating wheat, rice, and other grains – has risen by 85% in the last 25 years, yet crop yields and revenues have stagnated. And while farms struggle to remain profitable, it won’t be long before those static yields become insufficient to support growing populations.

Reliance on tractors is at the heart of both of these problems. Not only are tractors immensely costly to buy and maintain, they are also inefficient.

The Small Robot Company (SRC), a UK-based agri-tech start-up, is working to overturn this paradigm by replacing tractors with lightweight robots. Developed using Ubuntu, these robots are greener and cheaper to run than tractors, and generate far higher yields thanks to AI-driven precision.

In this innovative deployment of robotics and AI for commercial farming, you’ll learn:

  • How the agri-tech start-up is using robotics to grow crops and reduce waste, including its current partnership with a leading UK supermarket chain
  • The emergence of the Farming as a Service (FaaS) business model, eliminating the need to invest upfront in expensive machinery
  • How the use of Ubuntu in the cloud and on the hardware powers SRC’s three robots – Tom, Dick and Harry – plus Wilma, its AI system, to accelerate development and provide a stable platform

The post Small Robot Company sows the seeds for autonomous and more profitable farming appeared first on Ubuntu Blog.

Read more
Igor Ljubuncic

Snaps. The final frontier. These are the voyages of the OSS Snapcraft. Its continuing mission, to provide snap users with simple, clear and effective tips and tricks on how to build and publish their applications.

In this tutorial, we are going to talk about confinement and interfaces – how to restrict what your snaps can do, and then fine-tune the restrictions to make your applications both secure and useful.

What are we going to do?

Our tasks for today will be:

  • Overview of confinement levels and their use cases.
  • Overview of interfaces, automatic and manual connections.
  • Basic but practical examples.

Before diving in, you should read the first two articles – Introduction to snapcraft and Parts & Plugins – to get a better sense of today’s content. They will help you familiarise yourself with the snapcraft ecosystem, the tools and commands, and several practical examples of how to build snaps.

Confinement levels

By design, snaps are confined and limited in what they can do. This is an important feature that distinguishes snaps from software distributed using the traditional repository methods. The confinement allows for a high level of isolation and security, and prevents snaps from being affected by underlying system changes, from interfering with one another, and from affecting the host system.

Different confinement levels describe what type of access the application will have once installed on the user’s system. Confinement levels can be treated as security filters that define what type of system resources the application can access outside the snap.

The confinement level is specified in the snapcraft.yaml file and affects how your application behaves at runtime.


Devmode – This is a debug mode level used by developers as they iterate on the creation of their snap. This allows developers to troubleshoot applications, because they may behave differently when confined.

Strict – This confinement level uses Linux kernel security features to lock down the applications inside the snap. By default, a strictly confined application cannot access the network, the user’s home directory, any audio subsystems or webcams, and it cannot display any graphical output via X or Wayland.

Classic – This is a permissive level equivalent to the full system access that traditionally packaged applications have. Classic confinement is often used as a stop-gap measure to enable developers to publish applications that need more access than the current set of permissions allow. The classic level should be used only when required for functionality, as it lowers the security of the application.

Classically confined snaps are reviewed by the Snap Store reviewers team before they can be published. Snaps that use classic confinement may be rejected if they don’t meet the necessary requirements.
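A minimal sketch of where this key lives in snapcraft.yaml (the name and version are placeholders, not a real snap):

```yaml
name: hello-example        # placeholder snap name
version: '0.1'             # placeholder version
summary: Confinement example
description: Shows where the confinement level is declared.
confinement: strict        # one of: strict, devmode, classic
```

Changing this single key is how a developer moves between devmode during iteration and strict for publication.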

Confinement in action

We will shortly touch upon a practical example. Before we do that, let’s briefly focus on how confinement works and how it manifests when you run your snaps.

On the system level, the isolation of snaps is enforced via Discretionary Access Controls (DAC), Mandatory Access Control (MAC) via AppArmor, Seccomp kernel system call filtering (which limits the system calls a process may use), and cgroups device access controls for hardware assignment.

If a snap tries to access a resource that has not been explicitly granted, the access will be gated based on the confinement level specified in the snapcraft.yaml file:

  • In devmode, access will be allowed, but there will be a notification of the error.
  • In strict mode, access will be denied.
  • In classic mode, access will be identical to system-level permissions.


As mentioned earlier, a strictly confined snap is considered untrusted, and it runs in a restricted sandbox. By design, untrusted applications:   

  • can freely access their own data
  • cannot access other applications data
  • cannot access non-application-specific user data
  • cannot access privileged portions of the OS
  • cannot access privileged system APIs
  • may access sensitive APIs under some conditions

Strictly confined applications are not always functional with the default security policy. For example, a browser without network access or a media player without audio access do not serve their intended purpose.

Resource            Strict   Devmode   Classic
Access to network   N        Y         System
Access to home dir  N        Y         System
Access to audio     N        Y         System
Access to webcam    N        Y         System
Access to display   N        Y         System

To that end, snap developers can use interfaces. These allow developers to expand on existing permissions and security policies and connect their applications to system resources. Interfaces are commonly used to enable a snap to access OpenGL acceleration, sound playback or recording, the network and the user’s $HOME directory. But which interfaces a snap requires, and provides, is very much dependent on the type of snap and its own requirements.

An interface consists of a connection between a slot and a plug. The slot is the provider of the interface while the plug is the consumer, and a slot can support multiple plug connections.


The Snap Connection

Interfaces can be automatically or manually connected. Some interfaces will be auto-connected. Others may not, especially if they have access to sensitive resources (like network control, for instance). Users have the option to manually control interfaces – connect and disconnect them.

Interfaces definition

In the first article, we briefly touched on the use of interfaces in the wethr example. The snapcraft.yaml file specifies the network plug for the wethr binary. Based on this definition, when the snap is installed, a security profile will be generated that grants the application access to network during runtime. This interface will be auto-connected so the user does not need to make any manual adjustments to have the snap work.

apps:
  wethr:
    command: wethr
    plugs:
      - network

Now, let’s examine a different example:

apps:
  bettercap:
    command: bin/bettercap
    plugs:
      - home
      - network
      - network-bind
      - network-control
      - network-observe
      - netlink-connector
      - netlink-audit
      - bluetooth-control
      - firewall-control
      - x11

What do we have here?

We define an application that will have access to a range of resources, including the home directory, firewall, bluetooth, network, and even X11. Some of these interfaces will not be auto-connected. To that end, we need to examine what happens during runtime.

Automatic and manual interface connection

Once installed, you can check the list of interfaces your snap has by running:

snap interfaces <snap name>

Auto-connected interfaces will be shown with both the slot and the plug listed. Interfaces without connection will have an empty slot denoted by the dash character.

Slot             Plug
:desktop         gimp,vlc
:desktop-legacy  gimp,vlc
:gsettings       gimp
:home            gimp,review-tools,vlc
:network         gimp,lxd,review-tools,vlc
:opengl          gimp,vlc
:unity7          gimp,vlc
:wayland         gimp
:x11             gimp,vlc
-                gimp:cups-control
-                gimp:removable-media

You can manually connect (or disconnect) interfaces by running:

snap connect <snap>:<plug interface> <snap>:<slot interface>

A slot and a plug can only be connected if they have the same interface name. For example, to connect FFmpeg’s ALSA plug to the system’s ALSA slot, you’d enter the following:

sudo snap connect ffmpeg:alsa :alsa

If you do not specify the slot interface, the default (system) one will be used, e.g.:

snap connect easy-openvpn:home
snap disconnect easy-openvpn:home

Snappy Debug

We can also examine what happens in the background. You can do this using the snappy-debug snap, which is designed to help developers troubleshoot common problems with their snaps. You can use it to help identify missing interfaces by reporting on application security failures. It will also make suggestions on how to improve the snap, perhaps by adding interfaces.

snap install snappy-debug

After the snappy-debug tool is installed, run its scanlog command in a separate shell.

In a different command-line window, run your application and go through the expected behaviour steps until you encounter an error. You can then consult the log for more details, which should highlight any issues with your snap. For instance, if you use a Firefox snap with the removable-media interface not auto-connected, and you try to save a file to a USB drive, you will see something like the error below:

= AppArmor =
Time: Oct 24 13:39:04
Log: apparmor="DENIED" operation="open" profile="snap.firefox.firefox" name="/etc/fstab" pid=25299 comm="firefox" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
File: /etc/fstab (read)
* adjust program to read necessary files from $SNAP, $SNAP_DATA, $SNAP_COMMON, $SNAP_USER_DATA or $SNAP_USER_COMMON
* add 'mount-observe' to 'plugs'


And that brings us to the end of this tutorial. Today, we focused on confinement and interfaces. The former is a method of restricting your application’s access to system resources using security policies. The latter is a method of allowing fine-tuned control of said resources.

We learned about the differences between confinement levels, the use of plugs and slots, and how to connect interfaces, both automatically and manually. Lastly, we reviewed some debugging tools that can help developers troubleshoot their applications. In the future, we will talk about hooks and health checks, as well as touch on simple recipes for how to create some common application types.

Thank you for reading. If you have any comments or suggestions, please join the forum for a friendly discussion.

Photo by Fleur on Unsplash.

The post Snapcraft confinement & interfaces appeared first on Ubuntu Blog.

Read more