Canonical Voices

Sarah Dickinson

Octave is a numerical computing environment largely compatible with MATLAB. As free software, Octave runs on GNU/Linux, macOS, BSD, and Windows. At the 2019 Snapcraft Summit, Mike Miller and Jordi Gutiérrez Hermoso of the Octave team worked on creating an Octave snap in stable and beta versions for the Snap Store.

As Mike and Jordi explained, “Octave is currently packaged for most of the major distributions, but sometimes it’s older than we would like.” The goal of the Octave snap was to allow users to easily access the current stable release of the software, independently of Linux distribution release cycles. A snap would also let them release Octave on distributions not covered so far.

Before starting with snaps, Octave depended on distribution maintainers, including those of CentOS, Debian, Fedora, and Ubuntu, for its binary packaging. With snaps, the situation has improved. The Octave team can now push out a release as soon as it is ready for users eager to get it now, while other, more conservative users wait for more traditional packages from their distribution. Mike and Jordi envisioned this as the biggest benefit of coming to the Summit and creating an Octave snap.

They also foresee a reduction in the amount of maintenance needed, using one package across many Linux distributions. The Snap Store will help users discover Octave more easily, while the Octave homepage will also feature snaps as a download option.

Nevertheless, there was a learning curve: “We’re more used to Debian packaging, and snap packaging has different quirks that we’re not used to,” comments Jordi. On the first day of their snap creation, it took time to set up the environment and get an initial build to work. “Time was needed for recompiling Octave each time with fixes for re-testing, as the application is large and has many dependencies, all of which must be compiled,” explains Mike.

The Octave team used Multipass on Linux to help build their snap and found that they “didn’t even notice it was there” for the most part. As Mike explains, “I had no issues other than a couple of teething problems due to the large build that Octave requires. However, after a little bit of digging in the documentation and asking the right people this was soon solved.”  

Advice that they would pass on to others about using snaps is to avoid thinking that they are the same as containers. Mike and Jordi speak from experience, having started with this preconception themselves: “this made it difficult because everything we did in the build environment had to be readjusted once we wanted to go into the runtime environment. We had to change the paths of everything.” Some functions that happened automatically in other packaging methods, like including libraries according to dependencies, must also be done manually for snaps.

Coming to an event like the Summit is another tip that they would give to would-be snap developers. As Mike and Jordi put it, “reading the documentation and doing this ourselves would have taken longer than having everyone here. The fact we can just walk around and say hey, how do we do this and get that help.”

Install Octave as a snap here.
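On a system with snapd available, that amounts to a single command (assuming the snap is published under the name octave, as described above):

sudo snap install octave
# or, to try the beta version mentioned above:
sudo snap install octave --channel=beta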

The post Octave turns to snaps to reduce dependency on Linux distribution maintainers appeared first on Ubuntu Blog.

Read more
Carmine Rimi

Edge computing continues to gain momentum to help solve unique challenges across the telco, media, transportation, logistics, agriculture and other market segments. If you are new to edge computing architectures, of which there are several, the following diagram is a simple abstraction for emerging architectures:

Edge Kubernetes: computing architecture diagram

In this diagram you can see that an edge cloud sits next to field devices. In fact, there is a concept of extreme edge computing which puts computing resources in the field – which is the circle on the far left. An example of extreme edge computing is a gateway device that connects to all of your office or home appliances and sensors.

What exactly is edge computing? Edge computing is a variant of cloud computing, with your infrastructure services – compute, storage, and networking – placed physically closer to the field devices that generate data. Edge computing allows you to place applications and services closer to the source of the data, which gives you the dual benefit of lower latency and lower Internet traffic. Lower latency boosts the performance of field devices by enabling them to not only respond quicker, but to also respond to more events. And lowering Internet traffic helps reduce costs and increase overall throughput – your core datacenter can support more field devices. Whether an application or service lives in the edge cloud or the core datacenter will depend on the use case.

How can you create an edge cloud? Edge clouds should have at least two layers – both layers will maximise operational effectiveness and developer productivity – and each layer is constructed differently. 

The first layer is the Infrastructure-as-a-Service (IaaS) layer. In addition to providing compute and storage resources, the IaaS layer should satisfy the network performance requirements of ultra-low latency and high bandwidth.

The second layer is the Kubernetes layer, which provides a common platform to run your applications and services. While using Kubernetes for this layer is optional, it has proven to be an effective platform for those organisations leveraging edge computing today. You can deploy Kubernetes to field devices, edge clouds, core datacenters, and the public cloud. This multi-cloud deployment capability offers you complete flexibility to deploy your workloads anywhere you choose. Kubernetes offers your developers the ability to simplify their devops practices and minimise time spent integrating with heterogeneous operating environments.

Okay, but how can I deploy these layers? At Canonical, we accomplish this through the use of well-defined, purpose-built technology primitives. Let’s start with the IaaS layer, which the Kubernetes layer relies upon.

Physical infrastructure lifecycle management

The first step is to think about the physical infrastructure, and what technology can be used to manage the infrastructure effectively, converting the raw hardware into an IaaS layer. Metal-as-a-Service (MAAS) has proven to be effective in this area. MAAS provides the operational primitives that can be used for hardware discovery, giving you the flexibility to allocate compute resources and repurpose them dynamically. These primitives expose bare metal servers to a higher level of orchestration through open APIs, much like you would experience with OpenStack and public clouds.

Using MAAS for hardware provisioning at the edge

With the latest MAAS release you can automatically create edge clouds based on KVM pods, which effectively enable operators to create virtual machines with pre-defined sets of resources (RAM, CPU, storage and over-subscription ratios). You can do this through the CLI, Web UI or the MAAS API. You can use your own automation framework or use Juju, Canonical’s advanced orchestration solution.
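As a rough sketch of the CLI route – the profile name (admin), pod ID and addresses below are placeholders chosen for illustration – registering a KVM host as a pod and composing a machine from it looks something like this:

# register a libvirt (virsh) KVM host as a pod (address is a placeholder)
maas admin pods create type=virsh power_address=qemu+ssh://ubuntu@172.16.99.2/system
# compose a VM with a pre-defined set of resources from pod 1
maas admin pod compose 1 cores=4 memory=8192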

MAAS can also be deployed in a very optimised fashion to run on top-of-rack (ToR) switches – just as we demonstrated during the OpenStack Summit in Berlin.

Example of edge router provisioning servers using MAAS

Image 1: OpenStack Summit demo: MAAS running on a ToR switch (Juniper QFX5100)

Edge application orchestration

Once discovery and provisioning of physical infrastructure for the edge cloud is complete, the second step is to choose an orchestration tool that will make it easy to install Kubernetes, or any software, on your edge infrastructure. Juju allows you to do just that – you can easily install Charmed Kubernetes, a fully compliant and upstream Kubernetes. And with Kubernetes you can install containerised workloads, offering them the highest possible performance. In the telecommunications sector, workloads like Container Network Functions (CNFs) are well suited to this architecture.
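As a minimal sketch – assuming a MAAS cloud has already been added to Juju, and with the cloud and controller names invented for illustration – deploying Charmed Kubernetes onto that infrastructure looks like this:

# bootstrap a Juju controller on the MAAS cloud, then deploy the bundle
juju bootstrap maas-edge edge-controller
juju deploy charmed-kubernetes
# watch the deployment converge
juju status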

There are additional benefits to Charmed Kubernetes. With the ability to run in a virtualised environment or directly on bare metal, fully automated Charmed Kubernetes deployments are designed with built-in high availability, allowing for in-place, zero-downtime upgrades. This is a proven, truly resilient edge infrastructure architecture and solution. An additional benefit of Charmed Kubernetes is its ability to automatically detect and configure GPGPU resources for accelerated AI model inferencing and containerised transcoding workloads.

Next steps

Once the proper technology primitives are selected, it is time to deploy the environment and start onboarding and validating the application. The next part of this blog series will include hands-on examples of what to do.

The post Deploying Kubernetes at the edge – Part I: building blocks appeared first on Ubuntu Blog.

Read more
Andres Rodriguez

Canonical is happy to announce the availability of MAAS 2.6. This new release introduces a range of very exciting features and several improvements that enhance MAAS across various areas. Let’s talk about a few notable ones:

Growing support for ESXi Datastores

MAAS has expanded its support of ESXi by allowing administrators to create and configure VMFS datastores on physically connected disks.

MAAS 2.5 introduced the ability to deploy VMware’s ESXi. However, its storage configuration was limited to selecting the disk on which to deploy the operating system. As of 2.6, MAAS also provides the ability to configure datastores, allowing administrators to create one or more datastores using one or more physical disks. More information is available at https://docs.maas.io/2.6/en/installconfig-vmfs-datastores.

More information on how to create MAAS ESXi images is available at https://docs.maas.io/2.6/en/installconfig-images-vmware.

Multiple default gateways

MAAS 2.6 introduces a network configuration change (for Ubuntu), where it will leverage the use of source routing to support multiple default gateways.

As of MAAS 2.5, all deployed machines were configured with a single default gateway. As a result, if a machine was configured on multiple subnets (each with a gateway defined), all outgoing traffic would leave through the default gateway, even when it was intended to go out through the subnet’s own gateway.

To address this, MAAS 2.6 has changed the way it configures the network when a machine has multiple interfaces in different subnets, to ensure that all traffic that is meant to go through the subnet’s gateway actually does.

Please note that this is currently limited to Ubuntu, because it depends on source routing configured through netplan, which is currently only supported by cloud-init on Ubuntu.
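To illustrate the underlying idea – this is not the exact configuration MAAS generates through netplan, just the source-routing primitive it relies on – traffic originating from a subnet can be steered through that subnet’s gateway with a per-source routing table:

# give the 10.10.0.0/24 subnet its own routing table with its own default gateway
ip route add default via 10.10.0.1 dev eth1 table 100
# route traffic sourced from that subnet via table 100
ip rule add from 10.10.0.0/24 table 100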

Leveraging HTTP boot for most of the PXE process

MAAS 2.6 now leverages HTTP (as much as possible) to boot machines over the PXE process, rather than relying solely on TFTP. The reasons for the change are not only to support newer standards and features, but also to improve PXE boot performance. As such, you should now expect that:

  • UEFI systems that implement the 2.5 spec can now fully boot over HTTP.
  • KVM guests will rely on iPXE to perform HTTP boot.
  • Other architectures that support HTTP boot, such as arm64, will prefer it over TFTP.

Prometheus metrics

MAAS now exposes Prometheus metrics that can be used to track statistics and performance. For more information on what metrics are exposed, please refer to https://discourse.maas.io/t/maas-2-6-0-released/724 and, to learn how to enable them, refer to https://docs.maas.io/2.6/en/manage-prometheus-metrics.
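Once enabled, the metrics can be inspected directly with any HTTP client; the exact endpoint and port are described in the documentation linked above, and the port used below is only an assumption for illustration:

# assuming the region controller exposes /metrics on port 5239
curl http://localhost:5239/metrics | grep maas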

Other features and improvements

A more extensive list of features and improvements introduced in MAAS 2.6 includes:

  • Performance – Leverage HTTP for most of the PXE process
  • Performance – Track stats and metrics with Prometheus
  • User experience – Provides a more granular boot output
  • Networking – Multiple default gateways
  • Power control – Added support for redfish
  • Power control – Added support for OpenBMC
  • ESXi – Support configuring datastores
  • ESXi – Support registering to vCenter
  • User experience – Dismiss/suppress failed tests
  • User experience – Clear discovered devices
  • User experience – Added note to machine
  • User experience – Added grouping to machine listing page

Please refer to https://discourse.maas.io/t/maas-2-6-0-released/724/2 for more information.

The post MAAS 2.6 – ESXi storage, multiple gateways, HTTP boot and more appeared first on Ubuntu Blog.

Read more
Alex Cattle

Traditional development methods do not scale into the IoT sphere. Strong inter-dependencies and blurred boundaries among components in the edge device stack result in fragmentation, slow updates, security issues, increased cost, and reduced reliability of platforms.

This reality places a major strain on IoT players who need to contend with varying cycles and priorities in the development stack, limiting their flexibility to innovate and introduce changes into their products, both on the hardware and software sides.

One notable way to reduce the complexity in multi-component stacks is through the use of DevOps – a paradigm that combines development with operations in a single, streamlined process. However, DevOps alone cannot solve the complexity and dependency that exists in monolithic IoT products.

Highlights of this whitepaper include:

  • How the decoupling of components in a reliable and predictable fashion will reduce the inter-dependency, improve security and allow for faster development cycles.
  • A look at the technical architecture of Ubuntu Core and snaps in the context of an IoT DevOps model.
  • Real life case studies of accelerated Linux development cycles as an alternative to the existing model.


The post The DevOps guide to IoT projects appeared first on Ubuntu Blog.

Read more
Anthony Dillon

This was a fairly busy two weeks for the Web & design team at Canonical.  Here are some of the highlights of our completed work.

Web squad

Web is the squad that develops and maintains most of the brochure websites across Canonical. Here are a few highlights of completed work. We also moved several other projects forward that we will share in coming weeks.

Takeover and landing page for Edge month

A series of four webinars explaining how edge computing provides enterprises with real-time data processing, cost-effective security and scalability.

Learn more about edge month

OpenStack content updates

Each week the team reviews how our content is performing in a section of the site. This week’s focus was OpenStack, and the result was around ten changes to the section.

MAAS

The MAAS squad develops the UI for the MAAS project.

Implementation of small graphs for KVM listing

We reorganised the table columns in the new KVM page and added per-KVM actions to each row. We have also added new mini in-table charts with popovers showing data for storage, RAM and CPU per KVM.

Settings fresh navigation wireframes 

The settings in MAAS are due a revamp, and as part of the preparation for this work the UX team have been testing different settings navigation structures and layouts. This has been wireframed and will be designed in the next few weeks. So stay tuned for the latest and greatest in settings navigation.

JAAS

The JAAS squad develops the UI for the JAAS store and Juju GUI  projects.

Design for disabled k8s deployments in the GUI

The Juju team is working on Kubernetes charms and bundles in order to deploy with Juju on Kubernetes clusters (beta). GUI-wise, this functionality is not yet supported, so we will redirect users to the relevant pages in the documentation.

Prepare initial engagement

The design team is getting in touch with our customers and users through different media. We are setting up regular meet-ups, defining the initial work with marketing for newsletter and email content, and assisting with working sessions with Juju users.

This user research phase is particularly important to gather feedback and information for the new products we are working on for Juju and JAAS, and to be able to collect users’ needs and requests for the current products.

Research, UX and explorations around organisation, models and controllers

We conducted research into organisation and enterprise structures as applied to Juju models and JIMM controllers, to define the hierarchy of teams and users (RBAC and IM).

Suru designs applied to jaas.ai

The JAAS team has been working on applying the new Suru style design to the jaas.ai website. This week this has been implemented and will be live very shortly. This redesign introduces slanted strips with Suru crossed header and footer sections.

User flows comparison Snap and Charm stores consumer/publisher experience

The design team is exploring the user experiences of the Snap and Charm stores in order to align the consumer and publisher user journeys of the front pages. The same explorations are on going for the command lines of Snap/Juju and Snapcraft/Charm.

Snapcraft

The Snapcraft team works closely with the snap store team to develop and maintain the snap store website.

Release UI drag’n’drop

The drag’n’drop functionality for the Releases page was finalised this iteration. We updated the visual style of draggable elements to make them clearer and more consistent, and made it possible to promote a whole channel or a single revision using drag’n’drop.

This feature is already released and available to snapcraft.io users.

Default tracks

We started work on the UI for managing the default tracks of snaps. As the details of this functionality are still being discussed among the teams, this work will continue in the next iteration.

Proof of concept for build

We started investigating the technical aspects of moving build.snapcraft.io functionality into the snapcraft.io web application, for a more consistent and consolidated experience for users.

This work is at an early stage and will continue over the next iterations.


The post Design and Web team summary – 8 July 2019 appeared first on Ubuntu Blog.

Read more
Carmine Rimi

This article is the first in a series of machine learning articles focusing on model serving. I assume you’re reading this article because you’re excited about machine learning and quite possibly Kubeflow as well. You might have done some model training and are now trying to understand how to serve those models in production. There are many ways to serve a trained model in both Kubeflow and outside of Kubeflow. This post should help the reader explore some of the alternatives and what to consider.

Here’s a summary of what we’ll explore in this article:

  • What is model serving?
  • How do applications interact with models?
  • What is the Kubeflow approach to model serving?
  • Model serving examples
  • Developer Setup

As the title suggests, this article is only the first part in a series of posts. Sign up to the newsletter to be notified of the next post in this series, as well as technical posts discussing:

  • TensorFlow Serving
  • TensorRT Serving
  • TensorFlow.js
  • Seldon Core
  • Kubeflow Serving

What is model serving?

In simple terms, it is making a trained model available to other software components. How you’ve arrived at a trained model – what framework you used to produce it – will play a role in what options are available to you. And you may not have produced the trained model yourself – there are open source, pre-trained models that can be used today, models that were trained on data that you may not have access to. BERT is one example of such a pre-trained model. We’ll discuss BERT in more detail in a future article.

How do applications interact with models?

Probably the most immediate concern is determining how you want to integrate the model into your application. Should it be embedded? Should other systems be able to access it? Is scaling a concern?

For embedded model serving, the model can be compiled into the application and accessed via native function calls. This could be done within a Python application, or it could be done from within a JavaScript application in a browser.

For API model serving – where others can access your model dynamically – the most common approach is to put a REST API in front of the model. Most of the popular frameworks like TensorFlow come with native mechanisms for this, and there are some links below. But API model serving creates another concern – does it need to scale? For instance, assume your model can handle 100 requests a second. Is that enough? Could there be a spike of 5000 requests a second? If so, you need to think about scaling the model.
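To make this concrete, here is what API model serving looks like with TensorFlow Serving’s REST API, using its stock half_plus_two demo model – the host, port and model name are illustrative:

# POST a batch of inputs to a model served over REST
curl -X POST http://localhost:8501/v1/models/half_plus_two:predict \
     -d '{"instances": [1.0, 2.0, 5.0]}'
# expected response: {"predictions": [2.5, 3.0, 4.5]}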

What is the Kubeflow approach to model serving?

Fortunately there are a few frameworks included with Kubeflow that will help accomplish both tasks – put an API in front of your model, and allow it to scale based on demand. The Kubeflow community has included a couple of examples, using different frameworks – a TensorFlow serving example and a Seldon example. The community is also in the middle of creating a new, generic approach to model serving. This new approach is in flight and we will write about this more later, once it is closer to release.

Model serving examples

Using a crawl, walk, run approach, one of the best next steps is to run through some of the examples below so that you can get grounded in the manual approach to serving models. After a low level understanding of how these things work, try the more automated approach with Kubeflow. In summary, if you are just getting started, I suggest these steps:

  1. Basic TensorFlow example 
  2. REST TensorFlow example
  3. Kubernetes TensorFlow example
  4. Kubeflow TensorFlow example

Developer Setup

An easy way to explore the examples above is to get access to the Ubuntu platform. This starts with the Ubuntu operating system. If you’re on a Windows or a Mac desktop, you can start with Multipass – a native application for Windows, Mac, and Linux that will let you create a virtual machine. Here’s a complete list of software that you are free to use, followed by a quick setup sketch:

  • Multipass – A mini-cloud on your Mac, Windows or Linux workstation.
  • MicroK8s – A single package of K8s that installs on Linux
  • Kubeflow – The Machine Learning Toolkit for Kubernetes
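As a minimal setup sketch – the VM name and sizes are illustrative, and the MicroK8s commands follow the dot-command syntax current at the time of writing:

# install Multipass (channel and confinement flags may vary by release)
sudo snap install multipass --classic
# create an Ubuntu VM and open a shell in it
multipass launch --name kubeflow-dev --mem 8G --disk 40G
multipass shell kubeflow-dev
# inside the VM: install MicroK8s and enable the basics
sudo snap install microk8s --classic
sudo microk8s.enable dns storage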

Resources


The post Machine Learning: serving models with Kubeflow on Ubuntu, Part 1 appeared first on Ubuntu Blog.

Read more
Alex Hung

I often need to implement tests for new ACPI tables before they become available on real hardware. Fortunately, FWTS provides a framework to read ACPI tables’ binary.

The below technique is especially convenient for ACPI firmware and OS kernel developers. It provides a simple approach to verifying ACPI tables without compiling firmware and deploying it to hardware.

Using acpidump.log as an input for FWTS

The command to read ACPI tables’ binary is

# check ACPI methods in a text file
$ fwts method --dumpfile=acpidump.log

or

# check ACPI FACP table in a text file
$ fwts facp --dumpfile=acpidump.log

where acpidump.log contains ACPI tables’ binary in a specific format as depicted below:

Format of acpidump
  • Table Signature – the 4-byte long ACPI table signature
  • Offset – data starts from 0000 and increases by 16 bytes per line
  • Hex Data – each line has 16 hex integers of the compiled ACPI table
  • ASCII Data – the ASCII presentation of the hex data
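For illustration, an entry in acpidump.log looks roughly like this (the bytes shown here are dummy values; a real file contains the full compiled table):

FACP @ 0x0000000000000000
    0000: 46 41 43 50 14 01 00 00 06 23 49 4e 54 45 4c 20  FACP.....#INTEL
    0010: 54 45 4d 50 4c 41 54 45 00 00 00 00 00 00 00 00  TEMPLATE........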

This format may look familiar because it is not specific to FWTS. In fact, it is the same format generated by acpidump. In other words, the below two code snippets generate identical results.

# reading ACPI tables from memory
$ sudo fwts method
# dumping ACPI tables and testing it
$ sudo acpidump > acpidump.log
$ fwts method --dumpfile=acpidump.log

For developers, using the --dumpfile option means that it is possible to test ACPI tables before deploying them on real hardware. The following sections present how to prepare a customized log file.

Using a customized acpidump.log for FWTS

We can use acpica-tools to generate an acpidump.log. The following is an example of building a customized acpidump.log to test the fwts method command.

Generate a dummy FADT

A Fixed ACPI Description Table (FADT) contains vital information for the ACPI OS, such as the base addresses of hardware registers. As a result, FWTS requires a FADT in an acpidump.log so it can recognize acpidump.log as a valid input file.

$ iasl -T FACP
$ iasl facp.asl > /dev/zero
$ echo "FACP @ 0x0000000000000000" >> acpidump.log
$ xxd -c 16 -g 1 facp.aml  | sed 's/^0000/    /' >> acpidump.log
$ echo "" >> acpidump.log

Develop a Customized DSDT table

A Differentiated System Description Table (DSDT) is designed to provide OEM value-added features. A dummy DSDT can be generated as below, and OEM value-added features, such as an ACPI battery or a hotkey for airplane mode, can be added to it.

# Generate a DSDT
$ iasl -T DSDT
# Customize dsdt.asl
#  ex. adding an ACPI battery or airplane mode devices

Compile the DSDT table to binary

The customized DSDT can be compiled and appended to acpidump.log.

$ iasl dsdt.asl > /dev/zero
$ echo "DSDT @ 0x0000000000000000" >> acpidump.log
$ xxd -c 16 -g 1 dsdt.aml  | sed 's/^0000/    /' >> acpidump.log
$ echo "" >> acpidump.log

Run method test with acpidump.log

And finally, run the fwts method test.

$ fwts method --dumpfile=acpidump.log

Final Words

While we use the DSDT as an example, the same technique applies to all ACPI tables. For instance, HMAT was introduced and has been frequently updated in recent ACPI specs, and the Firmware Test Suite includes most, if not all, of these changes. As a consequence, FWTS is able to detect errors before firmware developers integrate HMAT into their projects, and therefore reduces errors in final products.

The post Analyze ACPI Tables in a Text File with FWTS appeared first on Ubuntu Blog.

Read more
Canonical

Ubuntu updates for TCP SACK Panic vulnerabilities

Issues have been identified in the way the Linux kernel’s TCP implementation processes Selective Acknowledgement (SACK) options and handles low Maximum Segment Size (MSS) values. These TCP SACK Panic vulnerabilities could expose servers to a denial of service attack, so it is crucial to have systems patched.

Updated versions of the Linux kernel packages are being published as part of the standard Ubuntu security maintenance of Ubuntu releases 16.04 LTS, 18.04 LTS, 18.10, 19.04 and as part of the extended security maintenance for Ubuntu 14.04 ESM users.

It is recommended to update to the latest kernel packages and consult Ubuntu Security Notices for further updates.

Ubuntu Advantage for Infrastructure subscription customers can find the latest status information in our Knowledge Base and file a support case with Canonical support for any additional questions or concerns around SACK Panic.

Canonical’s Kernel Livepatch updates for security vulnerabilities related to TCP SACK processing in the Linux kernel have been released and are described by CVEs 2019-11477 and 2019-11478, with details of the patch available in LSN-0052-1.

These CVEs have a Livepatch fix available; however, a minimum kernel version is required for Livepatch to install the fix, as denoted by the table in LSN-0052-1, reproduced here:

| Kernel                   | Version | Flavors             |
|--------------------------|---------|---------------------|
| 4.4.0-148.174            | 52.3    | generic, lowlatency |
| 4.4.0-150.176            | 52.3    | generic, lowlatency |
| 4.15.0-50.54             | 52.3    | generic, lowlatency |
| 4.15.0-50.54~16.04.1     | 52.3    | generic, lowlatency |
| 4.15.0-51.55             | 52.3    | generic, lowlatency |
| 4.15.0-51.55~16.04.1     | 52.3    | generic, lowlatency |

Livepatch fixes for CVEs 2019-11477 and 2019-11478 are not available for prior kernels, and an upgrade and reboot to the appropriate minimum version is necessary. These kernel versions correspond to the availability of mitigations for the MDS series of CVEs (CVE-2018-12126, CVE-2018-12127, CVE-2018-12130 and CVE-2019-11091).

Additionally, a third SACK-related issue, CVE-2019-11479, does not have a Livepatch fix available because it is not technically feasible to apply the changes via Livepatch. Mitigation information is available at the Ubuntu Security Team Wiki.

If you have any questions and want to learn more about these patches, please do not hesitate to get in touch.

The post Ubuntu updates for TCP SACK Panic vulnerabilities appeared first on Ubuntu Blog.

Read more
Igor Ljubuncic

Recently, we published several blog posts, aimed at helping developers enjoy a smoother, faster, more streamlined experience creating snaps. We discussed the tools and tricks you can employ in snapcraft to accelerate the speed at which you iterate on your builds.

We want to continue the work presented in the Make your snap development faster tutorial, by giving you some fresh pointers and practical tips that will make the journey even brisker and snappier than before.

You shall not … multipass

Multipass is a cross-platform tool used to launch and manage virtual machines on Windows, Mac and Linux. Behind the scenes, snapcraft uses Multipass to set up a clean, isolated build environment inside which your snaps will be created. Multipass leverages KVM (qemu) to create virtual machine instances. While this is handy when running natively on a host, this approach is not reliable for nested virtual machines or systems with limited KVM support.

Indeed, if you are running snapcraft inside a VM that does not support hardware acceleration passthrough, on a host with a CPU that does not support hardware acceleration, with hardware acceleration disabled in the BIOS, or without KVM modules loaded into memory, you will most likely see the following error:

failed to launch: Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory

If you encounter this problem – which can happen if you’re using Linux as a virtual machine, a setup that is quite popular with a large number of developers – you cannot use Multipass for your builds. However, snapcraft supports several other clever methods that will let you successfully create your snaps in a safe, isolated manner.

Use LXD

You can run snapcraft with the LXD backend. This requires that you have the LXD software installed and configured on your system. Start by installing and configuring the tool:

snap install lxd
lxd init

To verify that LXD has been set up correctly, you can start an Ubuntu 18.04 container instance and open a shell inside it:

lxc launch ubuntu:18.04 test-instance
lxc exec test-instance -- /bin/bash

You will now have a minimal Ubuntu installation. You can exit and destroy the container, or continue working inside it. For instance, you can use it to set up snapcraft, install additional software you may need, and copy any assets, like your project files and source code, into the container.

Outside the container environment, if you want to invoke snapcraft with LXD, you can run snapcraft with the --use-lxd flag:

snapcraft --use-lxd

Destructive mode & manual container setup

By default, snapcraft uses Multipass to start virtual machine instances and run the build inside them. This mode will not work when snapcraft is invoked inside the container environment. To that end, snapcraft needs to be invoked with the --destructive-mode argument. Please note that this feature is intended for short-lived CI systems, as the build will install packages on the (virtual) host, may include existing files from the host, and could thus be unclean.

snapcraft --destructive-mode

In this case, the full sequence of a manual container setup would include the following steps (a concrete sketch follows the list):

  • Manually start a container (lxc launch).
  • Copy your snapcraft.yaml into the container (lxc file push).
  • Open an interactive shell (lxc exec).
  • Inside the container, run snapcraft with the --destructive-mode flag. Please be extra careful and make sure that you run this command in the right shell, so you don’t accidentally do this on your host system. You may end up retrieving various packages and libraries that could potentially conflict with your setup.
  • Once you have successfully completed the build, you can retrieve the snap from inside the container (lxc file pull).
  • Stop and/or destroy the container instance (lxc stop, lxc delete).
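Put together – with the container name, project path and snap filename invented for illustration – the sequence might look like this:

# start a container and push the project into it
lxc launch ubuntu:18.04 snap-builder
lxc file push --recursive ./myproject snap-builder/root/
lxc exec snap-builder -- /bin/bash
# inside the container:
cd /root/myproject
snap install snapcraft --classic
snapcraft --destructive-mode
exit
# back on the host: retrieve the snap, then clean up
lxc file pull snap-builder/root/myproject/mysnap_1.0_amd64.snap .
lxc stop snap-builder
lxc delete snap-builder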

Shell after

In the tutorial linked above, we talked about the debug flag, which lets you step into the build environment on failure, allowing you to examine the system and understand better what might have gone wrong. Similarly, you can step into the virtual machine or container upon a successful build using the --shell-after flag.

snapcraft --shell-after

You also have the option to run your snaps with the --shell flag. This can be useful in troubleshooting runtime issues, like missing libraries or permissions, or other errors that you have encountered while reviewing your snap before pushing to the store. Alongside the try and pack commands, which we examined last week, you get a great deal of flexibility in nailing down issues and bugs during the development phase.

Summary

If you’ve ever raced a car, you know the best lap times aren’t decided by straight line dashes, they are decided by how fast you go through corners. Slow in, fast out. This article comes with a handful of useful, advanced tricks – the ability to use different provisioning backends, the destructive mode and the after-build shell. These should help you enjoy higher, faster productivity creating snaps. If you have any feedback or questions on this topic, please join our forum for a discussion.

No iconic Lord of the Rings phrases were harmed in the writing of this article.

Photo by Marco Bicca on Unsplash.

The post Faster snap development – additional tips and tricks appeared first on Ubuntu Blog.

Read more
Alex Cattle


Private cloud, public cloud, hybrid cloud, multi-cloud… the variety of locations, platforms and physical substrate you can start a cloud instance on is vast. Yet once you have selected an operating system which best supports your application stack, you should be able to use that operating system as an abstraction layer between different clouds.

However, in order to function together an operating system and its cloud must share some critical configuration data which tells the operating system how to ‘behave’ in the particular environment. Separating out this configuration data from the operating system and the underlying cloud is the key to effortlessly launching instances across multi-cloud.

Cloud-init provides a mechanism for separating out configuration data from both the operating system and the execution environment so that you maintain the ability to change either at any time. It serves as a useful abstraction which ensures that the investments you make in your application stack on a specific operating system can be leveraged across multiple clouds.

This whitepaper will explain:

  • The background and history behind the cloud-init open source project
  • How cloud-init is invoked and how it configures an instance
  • How to get started with cloud-init


The post Cloud Instance Initialisation with cloud-init appeared first on Ubuntu Blog.

Read more
Christian Brauner

Linux Kernel VFSisms

Introduction

This is intended as a collection of helpful knowledge bits around Linux kernel VFS internals. It mostly contains (hopefully) useful bits and pieces I picked up while working on the Linux kernel and talking to VFS maintainers or high-profile contributors.

ksys_close()

It should never be used. One of the major reasons is that it is too easy to get wrong.

On creating and installing new file descriptors

A file descriptor should only be installed past every possible point of failure. Specifically, for a syscall the file descriptor should be installed right before returning to userspace. Consider the function anon_inode_getfd(). This function creates and installs a new file descriptor for a task. Hence, by the rule given above, it should only ever be called when the syscall cannot fail in any other way than by anon_inode_getfd() itself failing.

For all other cases the rule is to reserve a file descriptor but defer its installation past the last point of failure. Note that installing a file descriptor is not itself an operation that can fail.

Back to the anonymous inode example: Instead of calling anon_inode_getfd() callers who need a file descriptor before the last point of failure should reserve a file descriptor, call anon_inode_getfile() and then defer the fd_install() until after the last point of failure. Here is a concrete example blessed by Al Viro:

	if (clone_flags & CLONE_PIDFD) {
		/* reserve a new file descriptor */
		retval = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
		if (retval < 0)
			goto bad_fork_free_pid;

		pidfd = retval;

		/* get file to associate with file descriptor */
		pidfile = anon_inode_getfile("[pidfd]", &pidfd_fops, pid,
					      O_RDWR | O_CLOEXEC);
		if (IS_ERR(pidfile)) {
			put_unused_fd(pidfd);
			retval = PTR_ERR(pidfile);
			goto bad_fork_free_pid;
		}
		get_pid(pid);	/* held by pidfile now */

		/* place file descriptor in buffer accessible for userspace */
		retval = put_user(pidfd, parent_tidptr);
		if (retval)
			goto bad_fork_put_pidfd;
	}

	/* a lot more code that can fail somehow */

	/* Let kill terminate clone/fork in the middle */
	if (fatal_signal_pending(current)) {
		retval = -EINTR;
		goto bad_fork_cancel_cgroup;
	}

	/* past the last point of failure */
	if (pidfile)
		fd_install(pidfd, pidfile);

Read more
Igor Ljubuncic

Over the past several months, we have shared with you several articles and tutorials showing how to accelerate application development so that a typically demanding, time-consuming process becomes an easier, faster and more fun one. Today, we’d like to introduce some additional tips and tricks. Namely, we want to talk about elegant ways you can streamline the final steps of a snap build.

Snap try

A rather handy thing you can do to speed up the development and testing is the snap try command. It allows you to install a snap AND make live (non-metadata) changes to the snap contents without having to go through the build process. This may sound a little confusing, so let’s discuss a practical example.

Say you built a snap, and you want to test how it works. Typically, the standard process is to install the snap (with the --dangerous flag), and then run the snap. Early in the testing process, a likely scenario is that a snap may not launch because it could be missing runtime libraries. With the usual development model, you would iterate in the following manner:

  • Edit the snapcraft.yaml.
  • Add relevant stage packages to include necessary runtime libraries.
  • Re-run the build process.
  • Remove the installed snap.
  • Install the new version and test again.

This is perfectly fine, but it can take some time. The alternative approach with snap try allows you to make live changes to your snap without going through the rebuild process. The way snap try works is that it installs the snap, using the specified directory (containing the valid snap contents) as its root. If you make non-metadata changes there, they will be automatically reflected. For instance, you can add libraries into usr/lib or lib, and see whether you can resolve runtime issues during the test phase. Once you’re satisfied the snap works well, you can then make one final build.

Where do you start?

The easiest way is to simply unsquash a built snap, make changes to the contents inside the extracted squashfs-root directory, and then run snap try against it to see whether you have a successful build with all the assets correctly included. Moreover, with snap try you can also change confinement modes, which gives you additional flexibility in testing your snap under different conditions, to see whether the application works correctly.
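For instance, a minimal iteration might look like this (the snap filename matches the example output below; the library being copied in is purely hypothetical):

# extract the built snap into squashfs-root/
unsquashfs electron-quick-start_1.0.0_amd64.snap
# drop a missing runtime library straight into the extracted tree (hypothetical fix)
cp /usr/lib/x86_64-linux-gnu/libnotify.so.4 squashfs-root/usr/lib/
cd squashfs-root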

snap try
electron-quick-start 1.0.0 mounted from /home/igor/snap-tests/electron-quick-start/dist/squashfs-root

Snapcraft pack

Side by side with snap try, you can use the snapcraft pack command. It lets you create a snap from a directory holding a valid snap (the layout of the target directory must contain a meta/snap.yaml file). Going back to our previous example, you would alter the contents of your project directory, add assets (like libraries), and then pack those into a squashfs file.

snapcraft pack .
Snapping 'electron-quick-start' /
Snapped electron-quick-start_1.0.0_amd64.snap

The two commands, snap try and snapcraft pack, complement each other really well. For instance, while you cannot make live changes to metadata for snap try without reinstalling the snap (directory), you can edit the snap.yaml file and pack additional snaps, allowing you to quickly test new changes.

You can also manually create your own snaps and pack them for both offline and online distribution. This might be useful if your application language isn’t currently supported as a plugin in snapcraft, or if you have ready archives of binary code you want to assemble into snaps in a quick and convenient way.

Summary

Sometimes, small things can make a big difference. The ability to quickly make changes to your snaps in the testing phase, while still retaining the full separation and containment from the underlying system provides developers with the peace of mind they need to iterate quickly and release their applications. Snap try and snapcraft pack are convenient ways to blend the standard build process and runtime usage in a streamlined manner. As always, if you have any comments or suggestions, please join our forum for a discussion.

Photo by chuttersnap on Unsplash.

The post Development tips and tricks – snap try and snapcraft pack appeared first on Ubuntu Blog.

Read more

Hello Ubuntu Server! The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team […]

The post Ubuntu Server development summary – 26 June 2019 appeared first on Blog.

Read more

Mobile operators face a range of challenges today from saturation, competition and regulation – all of which are having a negative impact on revenues. The introduction of 5G offers new customer segments and services to offset this decline. However, unlike the introduction of 4G which was dominated by consumer benefits, 5G is expected to be […]

The post The future of mobile connectivity appeared first on Blog.

Read more

This was a fairly busy two weeks for the Web & design team at Canonical. Here are some of the highlights of our completed work. Web squad Web is the squad that develops and maintains most of the brochure websites across Canonical. Getting started with AI webinar We built a page to promote the […]

The post Design and Web team summary – 25 June 2019 appeared first on Blog.

Read more

Thanks to the huge amount of feedback this weekend from gamers, Ubuntu Studio, and the WINE community, we will change our plan and build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS. We will put in place a community process to determine which 32-bit packages are needed to support legacy software, and can […]

The post Statement on 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS appeared first on Blog.

Read more

Disclosure: read the post until the end – a surprise awaits you! Moving from ROS 1 to ROS 2 can be a little overwhelming. It is a lot of (new) concepts, tools and a large codebase to get familiar with. And just like many of you, I am getting started with ROS 2. One of the central […]

The post ROS 2 Command Line Interface appeared first on Blog.

Read more

MicroK8s can be used to run Kubernetes on Mac for testing and developing apps on macOS. Follow the steps below for easy setup. MicroK8s is the local distribution of Kubernetes developed by Ubuntu. It’s a compact Linux snap that installs a single node cluster on a local PC. Although MicroK8s is only built for Linux, […]

The post Kubernetes on Mac: how to set up appeared first on Blog.

Read more

We have just released Vanilla Framework 2.0, Canonical’s SCSS styling framework, and – despite our best efforts to minimise the impact – the new features come with changes that will not be automatically backwards compatible with sites built using previous versions of the framework. To make the transition to v2.0 easier, we have compiled a […]

The post Vanilla Framework 2.0 upgrade guide appeared first on Blog.

Read more