Canonical Voices

What I me mine talks about

Sergio Schvezov

Now that Ubuntu Core has been officially released, it might be a good time to get your snaps into the Store.

Delivery and Store Concepts

So let’s start with a refresher on what we have available on the Store side to manage your snaps.

Every time you push a snap to the store, the store assigns it a revision. This revision is unique in the store for that particular snap.

However, to be able to push a snap for the first time, its name needs to be registered, which is pretty easy to do provided the name is not already taken.

Any revision on the store can be released to a number of channels. These are defined conceptually to give your users an idea of the stability or risk level they are opting into; the channel names are:

  • stable
  • candidate
  • beta
  • edge

Ideally anyone with a CI/CD process would push daily or on every source update to the edge channel. During this process there are two things to take into account.

The first thing to take into account is that at the beginning of the snapping process you will likely start with a non-confined snap, as this is where the bulk of the work to adapt to this new paradigm happens. With that in mind, your project starts out with confinement set to devmode. This makes it possible to get going in the early phases of development and still get your snap into the store. Once everything works fully within the security model snaps run under, this confinement entry can be switched to strict. Given a confinement level of devmode, the snap is only releasable to the edge and beta channels, which hints to your users how much risk they are taking by going there.

So let’s say you are good to go on the confinement side and you start a CI/CD process against edge, but you also want to make sure that early releases of a new iteration against master never make it to stable or candidate. For this we have the grade entry: if the grade of the snap is set to devel, the store will never allow you to release to the most stable channels (stable and candidate).
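
As a quick illustration of where these two entries live, both are top level keys in snapcraft.yaml; the metadata below is just a placeholder for a project in early development:

name: awesome-database
version: '0.1'
summary: a fantasy database snap
description: an example while the snap is still being adapted to confinement
confinement: devmode
grade: devel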

Somewhere along the way we might want to release a revision into beta, which some users are more likely to want to track on their side (and which, given a good release management process, should be somewhat more usable than a random daily build). When that stage in the process is over but we want people to keep getting updates, we can choose to close the beta channel, as from a certain point in time we only plan to release to candidate and stable. Closing the beta channel makes it track the next open channel in the stability list, in this case candidate; and if candidate is itself tracking stable, whatever is in stable is what those users will get.

Enter Snapcraft

So given all these concepts, how do we get going with snapcraft? First of all we need to log in:

$ snapcraft login
Enter your Ubuntu One SSO credentials.
Email: sxxxxx.sxxxxxx@canonical.com
Password: **************
Second-factor auth: 123456

After logging in we are ready to get our snap registered. For example's sake, let’s say we want to register awesome-database, a fantasy snap we want to get started with:

$ snapcraft register awesome-database
We always want to ensure that users get the software they expect
for a particular name.

If needed, we will rename snaps to ensure that a particular name
reflects the software most widely expected by our community.

For example, most people would expect ‘thunderbird’ to be published by
Mozilla. They would also expect to be able to get other snaps of
Thunderbird as 'thunderbird-sergiusens'.

Would you say that MOST users will expect 'awesome-database' to come from
you, and be the software you intend to publish there? [y/N]: y

You are now the publisher for 'awesome-database'.

So assuming we have the snap built already, all we have to do is push it to the store. Let’s take advantage of a shortcut and use --release in the same command:

$ snapcraft push awesome-database_0.1_amd64.snap --release edge
Uploading awesome-database_0.1_amd64.snap [=================] 100%
Processing....
Revision 1 of 'awesome-database' created.

Channel    Version    Revision
stable     -          -
candidate  -          -
beta       -          -
edge       0.1        1

The edge channel is now open.

If we try to release this to stable the store will block us:

$ snapcraft release awesome-database 1 stable
Revision 1 (devmode) cannot target a stable channel (stable, grade: devel)                                                                           

We are safe from messing up and making this available to our faithful users. Now eventually we will push a revision worthy of releasing to the stable channel:

$ snapcraft push awesome-database_0.1_amd64.snap
Uploading awesome-database_0.1_amd64.snap [=================] 100%
Processing....
Revision 10 of 'awesome-database' created.

Notice that the version is just a friendly identifier and what really matters is the revision number the store generates for us. Now let’s go ahead and release this to stable:

$ snapcraft release awesome-database 10 stable
Channel    Version    Revision
stable     0.1        10
candidate  ^          ^
beta       ^          ^
edge       0.1        10

The 'stable' channel is now open.

In this last channel map view for the architecture we are working with, we can see that edge is going to be stuck on revision 10, and that beta and candidate will be following stable, which is on revision 10. Say that at some point we decide to focus on stability and make our CI/CD push to beta instead. This means that our edge channel will slowly fall out of date; to avoid things like this we can decide to close the channel:

$ snapcraft close awesome-database edge
Arch    Channel    Version    Revision
amd64   stable     0.1        10
        candidate  ^          ^
        beta       ^          ^
        edge       ^          ^

The edge channel is now closed.

In this current state, all channels are following the stable channel, so people subscribed to candidate, beta or edge are effectively tracking changes to stable. If revision 11 is ever released to stable only, people on the other channels will see it as well.
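
From the consuming side, the channel to track is picked at install time; a minimal sketch, assuming the snap is public in the store:

$ sudo snap install --channel=beta awesome-database

Since beta is not holding a revision of its own but following stable, this request ends up delivering whatever is currently in stable.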

This listing also provides us with a full architecture view; in this case we have only been working with amd64.

Getting more information

So some time has passed and we want to know the history and status of our snap in the store. There are two commands for this; the straightforward one is status, which gives us a familiar result:

$ snapcraft status awesome-database
Arch    Channel    Version    Revision
amd64   stable     0.1        10
        candidate  ^          ^
        beta       ^          ^
        edge       ^          ^

We can also get the full history:

$ snapcraft history awesome-database
Rev.    Uploaded              Arch       Version    Channels
3       2016-09-30T12:46:21Z  amd64      0.1        stable*
...
...
...
2       2016-09-30T12:38:20Z  amd64      0.1        -
1       2016-09-30T12:33:55Z  amd64      0.1        -

Closing remarks

I hope this gives an overview of the things you can do with the store, and that more people start taking advantage of it!

Sergio Schvezov

The Snapcraft Parts Ecosystem

Today I am going to be discussing parts. This is one of the pillars of snapcraft (together with plugins and the lifecycle).

For those not familiar, snapcraft’s general purpose landing page is http://snapcraft.io/, but if you are a developer and have already been introduced to this new world of snaps, you probably want to just hop on over to http://snapcraft.io/create/

If you go over this snapcraft tour you will notice the many uses of parts and start to wonder how to get started, or think that maybe you are duplicating work done by others, or even better, by an upstream. This is where the idea of sharing parts comes in, and that is exactly what we are going to go over in this post.

To be able to reproduce what follows, you’d need to have snapcraft 2.12 installed.

An overview to using remote parts

So imagine I am someone wanting to use libcurl. Normally I would write the part definition from scratch and be on with my own business, but surely I might be missing out on the optimal switches used to configure or even build the package. I would also need to research how to use the specific plugin required. So instead, I’ll see if someone has already done the work for me:

$ snapcraft update
Updating parts list... |
$ snapcraft search curl
PART NAME  DESCRIPTION
curl       A tool and a library (usable from many languages) for client side URL tra...

Great, there’s a match, but is this what I want?

$ snapcraft define curl
Maintainer: 'Sergio Schvezov <sergio.schvezov@ubuntu.com>'
Description: 'A tool and a library (usable from many languages) for client side URL transfers, supporting FTP, FTPS, HTTP, HTTPS, TELNET, DICT, FILE and LDAP.'

curl:
  configflags:
  - --enable-static
  - --enable-shared
  - --disable-manual
  plugin: autotools
  snap:
  - -bin
  - -lib/*.a
  - -lib/pkgconfig
  - -lib/*.la
  - -include
  - -share
  source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
  source-type: tar

Yup, it’s what I want.

An example

There are two ways to use these parts in your snapcraft.yaml. Say this is your parts section:

parts:
    client:
       plugin: autotools
       source: .

My client part, which uses sources that sit alongside this snapcraft.yaml, will hypothetically fail to build as it depends on the curl library I don’t yet have. There are some options here to get this going: one using after in the part definition implicitly, another involving composing, and last but not least just copy/pasting what snapcraft define curl returned for the part.

Implicitly

The implicit path is really straightforward. It only involves making the part look like:

parts:
    client:
       plugin: autotools
       source: .
       after: [curl]

This will use the cached definition of the part, which can be refreshed by running snapcraft update.

Composing

What if we like the part, but want to try out a new configure flag or source release? Well we can override pieces of the part; so for the case of wanting to change the source:

parts:
    client:
        plugin: autotools
        source: .
        after: [curl]
    curl:
        source: http://curl.haxx.se/download/curl-7.45.0.tar.bz2

And we will still build curl, but using a newer version. The trick is that the part definition here is missing the plugin entry, thereby instructing snapcraft to fetch the full part definition from the cache.

Copy/Pasting

This is the path one would take to have full control over the part. It is as simple as copying the part definition we got from running snapcraft define curl into your own. For the sake of completeness, here’s how it would look:

parts:
    client:
        plugin: autotools
        source: .
        after: [curl]
    curl:
        configflags:
            - --enable-static
            - --enable-shared
            - --disable-manual
        plugin: autotools
        snap:
            - -bin
            - -lib/*.a
            - -lib/pkgconfig
            - -lib/*.la
            - -include
            - -share
        source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
        source-type: tar

Sharing your part

Now what if you have a part and want to share it with the rest of the world? It is rather simple really: just head over to https://wiki.ubuntu.com/snapcraft/parts and add it.

In the case of curl, I would write a yaml document that looks like:

origin: https://github.com/sergiusens/curl.git
maintainer: Sergio Schvezov <sergio.schvezov@ubuntu.com>
description:
  A tool and a library (usable from many languages) for
  client side URL transfers, supporting FTP, FTPS, HTTP,
  HTTPS, TELNET, DICT, FILE and LDAP.
project-part: curl

What does this mean? Well, the part itself is not defined on the wiki, just a pointer to it with some metadata; the part is really defined inside a snapcraft.yaml living in the origin we just pointed to.

The full extent of the keywords is explained in the documentation (that is an upstream link to it).

The core idea is that a maintainer decides they want to share a part. The maintainer adds a description that gives an idea of what that part (or collection of parts) does. Then, last but not least, the maintainer declares which parts to expose to the world, as maybe not all of them should be. The main part is exposed through project-part and will carry a top level name; the maintainer can expose more parts from the snapcraft.yaml using the general parts keyword, and those will be namespaced under the project-part.
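
For instance, if the snapcraft.yaml at that origin defined an additional part worth exposing, the wiki entry could list it through the parts keyword; the curl-extras name below is purely hypothetical:

origin: https://github.com/sergiusens/curl.git
maintainer: Sergio Schvezov <sergio.schvezov@ubuntu.com>
description:
  A tool and a library (usable from many languages) for
  client side URL transfers.
project-part: curl
parts: [curl-extras]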

Sergio Schvezov

Snapcrafting a kernel

Introduction

With snapcraft 2.5, which can be installed on the upcoming 16.04 Xenial Xerus with apt or consumed from the 2.5 tag on github, we have included two interesting plugins: kbuild and kernel.

The kbuild plugin is interesting in itself, but here we will be discussing the kernel plugin, which is based on the kbuild one.

A note of caution though: this kernel plugin is still not considered production ready. This doesn’t mean you will build kernels that don’t work on today’s version of Ubuntu Core, but caution is required as the nature of rolling, which is what this kernel plugin targets, can still change. Additionally, we may still modify the plugin’s options for the part setup itself.

Last but not least, given the nature of kernel building, we are introducing some experimental cross building support. The reason for this is that cross compiling a kernel is well understood and straightforward.

Walkthrough

Objective

The final objective is to obtain a kernel snap; we want to create a kernel that would work on the 410c DragonBoard from Arrow, which features Qualcomm’s Snapdragon 410. To do so we will take a look at the 96boards wiki and the 96boards published kernel.

Setup

You must be running a Xenial Xerus system and have at least snapcraft 2.5 installed; make sure by running:

$ snapcraft -v
2.5

If not, then:

$ apt update
$ apt install snapcraft

Cloning the kernel

Since the kernel is the main project, and to iterate quickly, it makes sense to clone it and start snapcrafting from there, so let’s clone:

git clone --depth 1 https://github.com/96boards/linux

Depending on when you do this, you might also need to cherry pick 6113222fa5386433645c7707b4239a9eba444523.
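
If that commit turns out to be needed, the cherry pick could look something like this; fetching first is an extra step on my part since we made a shallow clone, and the commit has to be reachable from one of the fetched branches for it to work:

git fetch --unshallow origin
git cherry-pick 6113222fa5386433645c7707b4239a9eba444523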

Creating the base snapcraft.yaml

Go into the recently cloned kernel directory and let’s get started with a yaml that has the standard entries for someone familiar with snapcraft.yaml:

name: 96boards-kernel
version: 4.4.0
summary: 96boards reference kernel with qualcomm firmware
description: this is an example on how to build a kernel snap.

Now this is a kernel snap, so let’s add that information in. This is rather important: if it isn’t done, the resulting snap might as well be some sort of asset holder; by adding the snap type, snappy Ubuntu Core will know what to do with it:

name: 96boards-kernel
version: 4.4.0
summary: 96boards reference kernel with qualcomm firmware
description: this is an example on how to build a kernel snap.
type: kernel

That’s all we need with regards to headers.

Adding parts

kernel

So let’s add some parts. The first part will use the new kernel plugin, whose help can be seen by running:

snapcraft help kernel

The kernel plugin is based on the kbuild one, so there are some extra parameters we can use from that plugin, which can be seen by running:

snapcraft help kbuild

And finally, these plugins make use of snapcraft’s source helpers, which can be discovered by running:

snapcraft help sources

So when we look at the wiki again we will notice there are two defconfigs: defconfig and distro.config. Even though distro.config already defines squashfs support to be built as a module, let’s make use of kconfigs and set it explicitly (we also set a couple of other kernel configurations). We will build two device trees making use of kernel-device-trees. In kernel-initrd-modules we mention squashfs as we need support for it to boot.

Given that particular piece of information let’s work on adding this part:

name: 96boards-kernel
version: 4.4.0
summary: 96boards reference kernel with qualcomm firmware
description: this is an example on how to build a kernel snap.
type: kernel

parts:
    kernel:
        plugin: kernel
        source: .
        kdefconfig: [defconfig, distro.config]
        kconfigs:
            - CONFIG_LOCALVERSION="-96boards"
            - CONFIG_DEBUG_INFO=n
            - CONFIG_SQUASHFS=m
        kernel-initrd-modules:
            - squashfs
        kernel-image-target: Image
        kernel-device-trees:
            - qcom/apq8016-sbc
            - qcom/msm8916-mtp

firmware

To run this kernel on the DragonBoard we will need to get some firmware from Qualcomm, so head over to https://developer.qualcomm.com/download/db410c/linux-board-support-package-v1.2.zip and get the zip file. Extract the firmware tarball from inside that zip and create a firmware part:
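
Just as a sketch of that extraction step (the file names inside the board support package are assumptions on my part, so adjust to what the archive actually contains):

unzip linux-board-support-package-v1.2.zip
# find the firmware tarball inside the extracted board support package
find . -name '*firmware*.tar*'
# place it next to snapcraft.yaml under the name the part below expects
cp <path-found-above> ./firmware.tar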

name: 96boards-kernel
version: 4.4.0
summary: 96boards reference kernel with qualcomm firmware
description: this is an example on how to build a kernel snap.
type: kernel

parts:
    kernel:
        plugin: kernel
        source: .
        kdefconfig: [defconfig, distro.config]
        kconfigs:
            - CONFIG_LOCALVERSION="-96boards"
            - CONFIG_DEBUG_INFO=n
            - CONFIG_SQUASHFS=m
        kernel-initrd-modules:
            - squashfs
        kernel-image-target: Image
        kernel-device-trees:
            - qcom/apq8016-sbc
            - qcom/msm8916-mtp
    firmware:
        plugin: tar-content
        source: firmware.tar
        destination: lib/firmware

Building

Now that we have a complete snapcraft.yaml we can proceed to build. If you are doing this on a 64-bit system, you will be able to cross compile this snap; just run:

$ snapcraft --target-arch arm64

This build will take a while, an average of 30 minutes give or take. You will eventually see a message that says Snapped 96boards-kernel_4.4.0_arm64.snap. That means you are done and have successfully created a kernel snap.
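
Out of curiosity you can peek at what ended up inside; the snap is just a squashfs, so (assuming squashfs-tools is installed, this step is entirely optional):

unsquashfs -ll 96boards-kernel_4.4.0_arm64.snap | less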

Sergio Schvezov

Linaro Connect BKK16

Just a week ago I made my way back from Linaro Connect. It was my first time at a Linaro Connect that was not jointly held with a UDS, and even then I had not participated in the event. It was also my first time in Thailand, and to a greater extent Asia, so I was very interested in going.

The main purpose for attending was to show snappy and, in particular, snapcraft. The part of snapcraft that I was going to show was related to building kernels, and I must say I am quite pleased with the results.

My first two days at the event mostly involved attending the keynote and then going to the quiet (not so quiet) hacking room to work on supporting whatever my colleagues Ricardo and Alexander wanted to demo and present in their accepted talk.

In one of those two keynotes, or in another random presentation I attended, I discovered (personally; others may have known already) that 96boards had released a working 4.4 kernel to support the 96boards initiative. So I spent those two days polishing the kernel snapcraft plugin we had so it could build a nice kernel snap for Ubuntu Core using the 96boards kernel. That was a success and is part of what is being released today with snapcraft (a follow up post will describe how to make use of this plugin).

On day three, jetlag really kicked in, so I zombie-participated with some comments when relevant in some of the business and/or planning meetings related to Canonical. This was really fun; during introductions it seems it was a nice ice breaker (even though not my intention) to say I’m just an engineer working on …. I felt I had to put emphasis on just after all the other folk exchanged business cards (note to self: maybe get some business cards) and mentioned all these fancy titles :-)

On day four the main highlight from an Ubuntu point of view was Alexander and Ricardo’s presentation.

I also briefly met some of the fine Qualcomm folk, and it turns out they are running a maker’s contest.

The last day was mostly demo day, so Ricardo and I set up a couple of snappy related demos; there was a lot of interest all around, which left us pleased.

There was time to socialize as well and I did get to see some former Canonical faces and catch up a bit; there was also some spare time to get to know new people and that is always fun as well.

Given that I’ve never been to Bangkok before, the Saturday that followed was dedicated to some sightseeing.

Sergio Schvezov

UbuCon and SCaLE 2016 trip report

I’m finally taking some time to write about things that happened during UbuCon and SCaLE. I am really grateful to Canonical, as without the sponsorship I wouldn’t have been there at all.

UbuCon

UbuCon is where I was mostly involved: I had a scheduled lightning talk, a proper talk that I gave with Manik, and I was also actively involved in 3 unconference sessions.

Presenting

Plenary talk

The day started with an intro to the UbuCon Summit and what it was all about, followed by a keynote from Mark Shuttleworth. Once Mark left the stage the UbuCon Plenary Day 1 sessions started. Mine was the first, and so it went… no task is without issues: I started out with the lack of an HDMI cable to hook up my laptop; apparently when I mentioned I needed one, the organizers took it as me saying I wouldn’t need one. In the end, 10 minutes later, the problem was solved.

There were also problems with the Ubuntu archives at the event, a transparent proxy mirroring issue of some sort, making installing and updating packages a not so happy experience. Luckily I focused on live snapping shout, which does not require any Ubuntu packages. It seemed to go rather well for a lightning talk.

My quick talk was followed by Jorge Castro talking about gaming, Didier Roche about Ubuntu as a development platform, and Scarlett Clark about Kubuntu development.

Talking IoT getting snappy

Manik Taneja and I did a joint talk just to spice things up a bit; he talked mostly from a PM point of view and I from the point of view of someone down in the trenches. It seemed to provide good balance.

We presented some slides and also got a demo going with ROS and OpenCV, going through the new features in the soon-to-be Ubuntu Core 16.04, like the classic dimension.

There was a lot of interest in the audience and many questions asked. People liked the fact that we were focusing on ROS as well. Personally, I felt the whole thing went down rather well.

Unconference

On the second day, following a keynote from Cory Doctorow, we had the Ubuntu unconference part of the summit. As a snappy person I proposed 2 things:

  • creating a snap that uses SDL.
  • snapping your project.

Additionally, I attended another session, snappy for sysadmins.

These sessions were basically round tables; small groups were formed, probably due to the focus required and the fact that the larger SCaLE event had started.

The sysadmin questions and discussion were pretty good and a lot of doubts were aired out. The SDL session I consider a failure: as mentioned, the archive was broken so we had to juggle around that, and we also spent a lot of time setting up http://tmate.io/ so everyone could follow. In the end, I still have to work on that SDL based snap.

The snapping session got meshed into the SDL one as we ended up doing a lot of snapcrafting there. Nothing working or final came out, but we got to walk through many scenarios, most of which translated into a bug and a fix in snapcraft, so I do feel there was good value in this session overall, even if during the session it felt as if we weren’t moving forward.

Observing

I have to say, I had little chance and time to see other sessions taking place at UbuCon; I only got to see Marvin: Test your Ubuntu apps on phones you haven’t got, which was interesting in itself.

On a different day I saw an Ubuntu Leadership panel moderated by Jono.

Most of my time was spent in the famous (or infamous) hallway track, talking snappy, Ubuntu Core and snapcraft to people, and sitting down and getting things done with these fine folk :-)

SCaLE

What can I say, I liked the exhibit hall; it was massive compared to the events I generally go to. Lots of fun walking around collecting swag and getting the marketing speech from some vendors ;-) Microsoft even had representation; they had run out of T-shirts by the time we arrived but offered to send one over, which was kind of cool, and kudos to the new Microsoft as well. I guess 10 years ago no one would have seen this change coming!

As it goes with hallway tracks, I didn’t have enough time to see much of the event’s presentations. On Saturday I got to see Mark’s SCaLE oriented keynote.

At some point in the day I went and saw Jono Bacon present about Build Awesome Communities on github. My takeaway here is that I learned about a Trello-like tool tailored for github called Waffle.

Then at the end of the day I saw Nathan Haines talk about writing books with free software, which to my surprise did not involve LaTeX or similar, but LibreOffice and other GUI productivity tools.

On day 2 I went and saw Sarah Sharp talk about diversity, but to be fair, those problems are rather far away from where I live, where we have a whole different set of problems, so I couldn’t identify with the discussion as much.

Again, at the end of the day, I got to see two talks: one which seemed hilarious, The Road to Mordor given by Amye Scavarda, and then another talk from Dustin introducing adapt, which shows a lot of potential.

Side events

There was a lot of fun talking with everyone, people I already knew and people I got to meet. The social events were packed. We had a bunch of Ubuntu meetups and SCaLE specific ones; I attended these:

And of course I hung out with a bunch of awesome folk all around.

We also did some walking around Pasadena.

Closing thoughts

Simply a great event worth attending!

Sergio Schvezov

UbuConLA 2015 Summary

This is a post I never got to publish from way back.

About UbuConLA

Last week I attended the 2015 edition of UbuConLA, a successor to what once was UDS, the Ubuntu Developer Summit, which later transformed into vUDS (the v standing for virtual) and was eventually renamed the Ubuntu Online Summit. UbuConLA, fully organized by the community, tries to relive the days of UDS: a chance to meet face to face with fellow Ubuntu contributors, contributors-to-be, or just people generally interested in Ubuntu. In other words, the social and human aspect of it.

My first UbuConLA was in 2013 and took place in Montevideo; we had recently announced and released the Ubuntu Phone (dubbed Ubuntu Touch) and I spoke about it then.

This year, the event took place in Lima and was organized by a very avid Ubuntu Member, Jose Antonio Rey; he did an excellent job overall with the organization.

For this event I took my Ubuntu powered phones and tablets to be able to display and show around. People seemed to like them and the general comment was “I expected this to crash more”.

Walk the talk with Snappy

The thing I wanted to talk about this year wasn’t specifically about phones though, as it was in 2013 when the phone was fresh; this year I talked about Snappy, Ubuntu Core and, to some extent, Ubuntu Personal. Everyone seemed receptive to the idea and the roadmap. I tried to go through the history and lead the way to the logical conclusion of why a snappier architecture was needed, instead of just laying it out, which seemed to hit the nail on the head. I must add that the audience was a mix of users and developers.

Listening in

I had the pleasure of listening to some great talks here and there. All were good to some degree, but these are the ones that kept resonating after a while.

Software Libre en las Nubes, by Juanjo Ciarlante

Led with grace and ease; when he talked it seemed so straightforward. What was complicated felt simple and elegant. He rambled over the hot cloud topic, going over a cube and triangle…

Juju, by Sebastián Ferrari

It was great; I liked how the presenter delivered this. So far my exposure to how Juju works and is used had been limited (I had the basics, but that was it).

Ubuntu in Schools, by Neyder Achahuanco

This guy came from Puno, an engineer turned school teacher for the love of the art. He went through how he failed at teaching kids to develop software by jumping straight into it, and how he instead approached it with simpler things, like programming without a computer using only your mouth and ears, later on diving into other things like codely and blocky and MIT’s Android development kit. It seemed pretty effective, as he says the acceptance and joy in his class is pretty good.

He did not stop there; with a sprinkle of Peruvian politics and comments on how One Laptop per Child failed miserably in Peru, he told us his anecdote of how he repurposes unopened OLPCs boxed in a closet with Ubuntu to be able to teach kids.

My personal take on this is that if you want something like this to succeed, it needs to be bottom up instead of top down, which he alluded to when telling the story of how to get teachers out of their dogma and buy into change. It is not that teachers don’t like change, more so that they can’t cope with things just being dropped on them (like OLPCs sent to schools without electricity).

Closing remarks

I mostly liked the whole event; the organization was great. Everything was streamed live through ubuntuonair.com and available for offline consumption through the Ubuntu on Air Youtube channel.

On Saturday, we had a group photo taken outside on campus just like what was done during the UDS’s of the past.

While I’m not the best person at socializing, I did have a good chance at it. My socializing was mostly with other speakers, for some odd reason though.

My only critique here is that it is hard to make this event known when the host city switches across the whole Latin American continent. Well, it is two fold: on the one hand it’s good to spread the knowledge, on the other it becomes harder to grow a base and dig deeper into the nitty gritty details.

Sergio Schvezov

Recovering Ubuntu Core

Introduction

A note of caution: most of this is an experiment and lacks finesse.

Ubuntu Core was released over half a year ago using this nice thing called snappy. The design allows for transactional updates, among other things; these updates keep rolling through their streams and can be kicked out (rolled back) if something is fishy, guaranteeing a certain level of confidence that the system should not break.

Now introduce the concept of storage, that thing that will always limit you no matter the amount. With this in mind, consider that a popular method to work around it is to garbage collect: old things you forgot about will just go away.

To add a twist to the story, imagine you want to wipe your system. Given the fact that we have a clear separation of what is writable and what is intrinsic to what makes up the core, this is rather trivial. This would indeed reset any customization done to the system, but…

The OS parts that compose Ubuntu Core are also garbage collected; better said, it is like a round robin of size 2. These parts of the system are implicitly garbage collected, so if you want to do a real factory reset, there is no way to do that today because you no longer have the core part of the system that the device came with.

There’s a couple more questions:

  • do we want to update the boot loader of the running system?
  • can we recover from a completely broken system in an autonomous way?
  • how do we make this generic?

There are more…

Booting Ubuntu Core

We use two boot loaders to power this snappy system: one is grub, the other u-boot. We default to the former for x86 based systems, while we use the latter for ARM.

Both are similar, using an A/B model to boot, with some try variables that the boot loader in question reads to determine which system to boot and where to roll back to in case of an issue.

The OS part of the system lives in a partition labeled either system-a or system-b; this is similar to the Ubuntu rootfs, with the booting parts stripped out to a platform specific part that lives in the system-boot partition, also with an A/B scheme. Take note that the platform name and full functionality are under (re)design; it is also currently known as kernel or device.

All snappy packages are intended to be real snaps; today these are the snappy package types:

  • app
  • framework
  • oem (to be repurposed as gadget)

As mentioned, two more package types are arriving, platform and os, which today are driven by system image, pending a migration to actual snappy packages.

Bootloader

There is only one boot loader core that takes care of booting into the right system. Updating this boot loader logic adds risk, as breaking it would render a system useless; as long as it’s not updated everything should be fine.

Regular booting

When business is as usual, the boot loader will load its environment and read the snappy_ab variable, which contains a value of either a or b, together with the snappy_mode variable, which contains the value regular.

If snappy_ab were to have the value a, the kernel cmdline would contain an entry with root=LABEL=system-a (it’s root=LABEL=system-$snappy_ab), whilst the kernel line (for grub) would start with something like linux /a/vmlinuz…; the initrd line for grub would be rather similar: initrd /a/initrd.img

Booting into an upgrade

When the system updates the os and platform parts of the system, if the system is currently running from system-a it will drop the update onto system-b, and onto the b part of the system-boot partition for the kernel, initrd and related files. The snappy_mode variable would be set to try, and after the system finished booting it would set the mode back to regular and life would move on.

The experiment

In this experiment we want to have a recovery partition with its own boot loader and the original image that was put on the system by the manufacturer. This would allow:

  • for potential updates to the running systems.
  • a mechanism for a real factory wipe.
  • a tentative installer.

For this, two new components are needed:

  • a better ubuntu-device-flash (call it uflash).
  • a recovery component.

Additionally, and this is not final nor has it been discussed, we created some stub platform and os snappy packages. The os snap is an Ubuntu rootfs put into a squashfs, while the platform snap provides the kernel, a modified initrd that knows how to go into recovery or a running system, and some boot loader assets (an initial grub.cfg).

The recovery component lives in the ubuntu rootfs.

For simplicity the focus was on grub, gpt and pc-bios.

Creating an image

To create an image in this experiment one would do:

sudo ./uflash \
--platform platform_rolling1_all.snap \
--os ubuntu_rolling1_amd64.snap \
--gadget generic-amd64_1.4_all.snap \
--snaps  webdm.canonical_0.9_multi.snap \
core.img

This would create an image called core.img, with 2 partitions:

  • grub, for grub’s boot.img
  • recovery, with all the snappy packages passed in on the command line and grub’s core.img.

I want to point out again that this uflash thing is just for play and its CLI will likely be different if it comes to fruition.

Recovering

The grub.cfg put into recovery would boot into recovery using the platform and os snaps to drive it. The recovery logic would create two new partitions:

  • boot
  • writable

It will then set up boot to have an A/B scheme and insert the platform and os snappy packages used in the recovery partition.

All the snappy packages passed in with --snaps to uflash will be installed onto the system (depending on restraints defined in the gadget snap, which are ignored here as that is not part of the current experiment).

It will also install a core.img which the boot loader in recovery would jump to, providing boot loader independence for the running system.

Try it

  • Download core.img.xz from recovery.
  • Make sure the checksum is correct.
  • Extract it into core.img.
  • Run it: kvm -m 1500 core.img (see the shell transcription below).
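
Transcribed to a shell session it looks roughly like this; the download URL is whatever the recovery link above points to, and the checksum tool depends on which digest is published there:

wget <URL-of-core.img.xz-from-the-recovery-link>
sha256sum core.img.xz    # compare against the published checksum
unxz core.img.xz
kvm -m 1500 core.img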

Take away

All this is experimental, but everything used here can be found in recovery and the code is composed of multiple branches under my name which can be found in snappy.

There is no indication of this merging into the main branch or product and the code is not production quality (yet).

A side benefit is that the recovery logic could potentially serve as an installer.

There are some things you just can’t do with this image, which would be easy to spot if you are paying attention, but just in case: system image updates won’t work.

That’s all

Sergio Schvezov

Grub and snappy updates

This week, the snappy powered Ubuntu Core landed some interesting changes with regards to how it handles grub based systems.

History

The original implementation was based on traditional Ubuntu systems, where a bunch of files that any debian package could set up influenced the grub.cfg that resulted after running update-grub. This produced a grub.cfg that was really hard to manage or debug and, most importantly, out of our control. It also differed greatly from our u-boot story, where any gadget could override the boot setup so it bootstraps as needed.

We don’t want to own the logic for this, but only provide the guidelines for the necessary primitives for proper rollback at that level to work.

These steps also make our boot loader stories look and feel more similar to each other, to the point where soon we may be able to just not care which one is in use, as the logic will be driven as if it were a black box.

Rolling it out

Even though this work targeted the development release, also known as the rolling one, we tried to make sure all systems would transition to this model transparently. Given the model though, it isn’t a one step solution, as we need to be able to update to systems which do not run update-grub and roll back to systems that do. We also need to update from a system that has this new snappy logic to one that doesn’t. This was solved with a very slick grub.cfg delivered through the gadget package (still named oem in current implementations), similar to the u-boot and uEnv.txt mechanics.

On a running system, these would be the steps taken:

  • Device is running a version that runs update-grub.
  • oem package update is delivered through autopilot containing the new grub.cfg.
  • The os is updated bringing in some new snappy logic.
  • The next os update would be driven by the new snappy logic, which would sync grub.cfg from the oem package into the boot loader area. This new snappy would not run update-grub. The system would boot from the legacy kernel paths since, as this is a delta update, no new kernel would be delivered.
  • Updates would rinse and repeat. When a new kernel is provided in an update, the boot loader a and b locations would be used to store that kernel; grub.cfg already has logic to boot from those locations, so the change is transparent.
  • On the next update, kernel asset file syncing would take place and populate the other label (a or b).

This is the development release, so we shouldn’t worry too much about breaking, but why do so if it can be avoided ;-)

Outcome

The resulting code base is much simpler, there are fewer headaches, and we don’t need to maintain or understand huge grub script snippets. Just for kicks, this is the grub.cfg we use:

set default=0
set timeout=3

insmod part_gpt
insmod ext2

if [ -s $prefix/grubenv ]; then
  load_env
fi

if [ -z "$snappy_mode" ]; then
    set snappy_mode=regular
    save_env snappy_mode
fi
if [ -z "$snappy_ab" ]; then
    set snappy_ab=a
    save_env snappy_ab
fi

if [ "$snappy_mode" = "try" ]; then
    if [ "$snappy_trial_boot" = "1" ]; then
        # Previous boot failed to unset snappy_trial_boot, so toggle
        # rootfs.
        if [ "$snappy_ab" = "a" ]; then
            set snappy_ab=b
        else
            set snappy_ab=a
        fi
        save_env snappy_ab
    else
        # Trial mode so set the snappy_trial_boot (which snappy is
        # expected to unset).
        #
        # Note: don't use the standard recordfail variable since that forces
        # the menu to be displayed and sets an infinite timeout if set.
        set snappy_trial_boot=1
        save_env snappy_trial_boot
    fi
fi

set label="system-$snappy_ab"
set cmdline="root=LABEL=$label ro init=/lib/systemd/systemd console=ttyS0 console=tty1 panic=-1"

menuentry "$label" {
    if [ -e "$prefix/$snappy_ab/vmlinuz" ]; then
        linux $prefix/$snappy_ab/vmlinuz $cmdline
        initrd $prefix/$snappy_ab/initrd.img
    else
        # old-style kernel-in-os-partition
        search --no-floppy --set --label "$label"
        linux /vmlinuz $cmdline
        initrd /initrd.img
    fi
}

and this was the grub.cfg that was auto generated:

#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  set have_grubenv=true
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="0"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}
function recordfail {
  set recordfail=1
  if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
}
function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

terminal_input console
terminal_output console
if [ "${recordfail}" = 1 ] ; then
  set timeout=0
else
  if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=hidden
    set timeout=0
  # Fallback hidden-timeout code in case the timeout_style feature is
  # unavailable.
  elif sleep --interruptible 0 ; then
    set timeout=0
  fi
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/05_debian_theme ###
set menu_color_normal=white/black
set menu_color_highlight=black/light-gray
### END /etc/grub.d/05_debian_theme ###

### BEGIN /etc/grub.d/09_snappy ###
menuentry 'Ubuntu Core Snappy system-b rootfs'  $menuentry_id_option 'gnulinux-simple-LABEL=system-b' {
	load_video
	insmod gzio
	if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
	insmod part_gpt
	insmod ext2
	set root='hd0,gpt4'
	if [ x$feature_platform_search_hint = xy ]; then
	  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt4 --hint-efi=hd0,gpt4 --hint-baremetal=ahci0,gpt4  4ff468ee-953f-45df-a751-e6232a1c8ef7
	else
	  search --no-floppy --fs-uuid --set=root 4ff468ee-953f-45df-a751-e6232a1c8ef7
	fi
	linux /boot/vmlinuz-3.19.0-22-generic root=LABEL=system-b ro init=/lib/systemd/systemd console=tty1 console=ttyS0 panic=-1
	initrd /boot/initrd.img-3.19.0-22-generic
}
menuentry 'Ubuntu Core Snappy system-a rootfs'  $menuentry_id_option 'gnulinux-simple-LABEL=system-a' {
	load_video
	insmod gzio
	if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
	insmod part_gpt
	insmod ext2
	set root='hd0,gpt3'
	if [ x$feature_platform_search_hint = xy ]; then
	  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt3 --hint-efi=hd0,gpt3 --hint-baremetal=ahci0,gpt3  47ea9cad-ec4f-4290-85e8-eee7dac19f1c
	else
	  search --no-floppy --fs-uuid --set=root 47ea9cad-ec4f-4290-85e8-eee7dac19f1c
	fi
	linux /boot/vmlinuz-3.19.0-22-generic root=LABEL=system-a ro init=/lib/systemd/systemd console=tty1 console=ttyS0 panic=-1
	initrd /boot/initrd.img-3.19.0-22-generic
}
    # set defaults
    if [ -z "$snappy_mode" ]; then
        set snappy_mode=regular
        save_env snappy_mode
    fi
    if [ -z "$snappy_ab" ]; then
        set snappy_ab=a
        save_env snappy_ab
    fi

    if [ "$snappy_mode" = "try" ]; then
        if [ "$snappy_trial_boot" = "1" ]; then
            # Previous boot failed to unset snappy_trial_boot, so toggle
            # rootfs.
            if [ "$snappy_ab" = "a" ]; then
                set default="Ubuntu Core Snappy system-b rootfs"
                set snappy_ab=b
            else
                set snappy_ab=a
                set default="Ubuntu Core Snappy system-a rootfs"
            fi
            save_env snappy_ab
        else
            # Trial mode so set the snappy_trial_boot (which snappy is
            # expected to unset).
            #
            # Note: don't use the standard recordfail variable since that forces
            # the menu to be displayed and sets an infinite timeout if set.
            set snappy_trial_boot=1
            save_env snappy_trial_boot

            if [ "$snappy_ab" = "a" ]; then
                set default="Ubuntu Core Snappy system-a rootfs"
            else
                set default="Ubuntu Core Snappy system-b rootfs"
            fi
        fi
    else
        if [ "$snappy_ab" = "a" ]; then
            set default="Ubuntu Core Snappy system-a rootfs"
        else
            set default="Ubuntu Core Snappy system-b rootfs"
        fi
    fi
### END /etc/grub.d/09_snappy ###

### BEGIN /etc/grub.d/10_linux ###
function gfxmode {
	set gfxpayload="${1}"
	if [ "${1}" = "keep" ]; then
		set vt_handoff=vt.handoff=7
	else
		set vt_handoff=
	fi
}
if [ "${recordfail}" != 1 ]; then
  if [ -e ${prefix}/gfxblacklist.txt ]; then
    if hwmatch ${prefix}/gfxblacklist.txt 3; then
      if [ ${match} = 0 ]; then
        set linux_gfx_mode=keep
      else
        set linux_gfx_mode=text
      fi
    else
      set linux_gfx_mode=text
    fi
  else
    set linux_gfx_mode=keep
  fi
else
  set linux_gfx_mode=text
fi
export linux_gfx_mode
menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-4ff468ee-953f-45df-a751-e6232a1c8ef7' {
	recordfail
	load_video
	gfxmode $linux_gfx_mode
	insmod gzio
	if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
	insmod part_gpt
	insmod ext2
	set root='hd0,gpt4'
	if [ x$feature_platform_search_hint = xy ]; then
	  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt4 --hint-efi=hd0,gpt4 --hint-baremetal=ahci0,gpt4  4ff468ee-953f-45df-a751-e6232a1c8ef7
	else
	  search --no-floppy --fs-uuid --set=root 4ff468ee-953f-45df-a751-e6232a1c8ef7
	fi
	linux	/boot/vmlinuz-3.19.0-22-generic root=/dev/sda4 ro init=/lib/systemd/systemd console=tty1 console=ttyS0 panic=-1
	initrd	/boot/initrd.img-3.19.0-22-generic
}
submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-4ff468ee-953f-45df-a751-e6232a1c8ef7' {
	menuentry 'Ubuntu, with Linux 3.19.0-22-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.19.0-22-generic-advanced-4ff468ee-953f-45df-a751-e6232a1c8ef7' {
		recordfail
		load_video
		gfxmode $linux_gfx_mode
		insmod gzio
		if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
		insmod part_gpt
		insmod ext2
		set root='hd0,gpt4'
		if [ x$feature_platform_search_hint = xy ]; then
		  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt4 --hint-efi=hd0,gpt4 --hint-baremetal=ahci0,gpt4  4ff468ee-953f-45df-a751-e6232a1c8ef7
		else
		  search --no-floppy --fs-uuid --set=root 4ff468ee-953f-45df-a751-e6232a1c8ef7
		fi
		echo	'Loading Linux 3.19.0-22-generic ...'
		linux	/boot/vmlinuz-3.19.0-22-generic root=/dev/sda4 ro init=/lib/systemd/systemd console=tty1 console=ttyS0 panic=-1
		echo	'Loading initial ramdisk ...'
		initrd	/boot/initrd.img-3.19.0-22-generic
	}
	menuentry 'Ubuntu, with Linux 3.19.0-22-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.19.0-22-generic-recovery-4ff468ee-953f-45df-a751-e6232a1c8ef7' {
		recordfail
		load_video
		insmod gzio
		if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
		insmod part_gpt
		insmod ext2
		set root='hd0,gpt4'
		if [ x$feature_platform_search_hint = xy ]; then
		  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt4 --hint-efi=hd0,gpt4 --hint-baremetal=ahci0,gpt4  4ff468ee-953f-45df-a751-e6232a1c8ef7
		else
		  search --no-floppy --fs-uuid --set=root 4ff468ee-953f-45df-a751-e6232a1c8ef7
		fi
		echo	'Loading Linux 3.19.0-22-generic ...'
		linux	/boot/vmlinuz-3.19.0-22-generic root=/dev/sda4 ro single nomodeset init=/lib/systemd/systemd
		echo	'Loading initial ramdisk ...'
		initrd	/boot/initrd.img-3.19.0-22-generic
	}
}

### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###

### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/30_uefi-firmware ###
### END /etc/grub.d/30_uefi-firmware ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f  ${config_directory}/custom.cfg ]; then
  source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
  source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###

Sergio Schvezov

The github or launchpad dilemma

We wanted to start a migration path from bazaar to git, given how ubiquitous it is and the fact that most of our team prefers it. A few months ago the decision was easy: since launchpad did not support git, we would just switch to github given its popularity. That’s not true anymore…

Today launchpad supports git, so the comparison becomes finer grained and we have to break it down a bit more.

So here are the things I like about github:

  • Code is presented first.
  • Documentation is easy to write and very nice to read.
  • Non technical people can make edits and propose pull requests all from the web.
  • It’s a bit more social (e.g. you have mentions).
  • Web hooks and many things embracing them.
  • A big user base, mostly everyone is already on github.
  • The code review interface.
  • The UI layout in general.
  • The API.

The things I like about launchpad:

  • Direct link between the source and ubuntu.
  • A very nice bug tracking system.
  • Given we work with Ubuntu, a very big existing database. Every other team working on Ubuntu uses launchpad already.
  • Very product oriented.
  • A nice language translation system.

Most of the things I like about one are probably things that I don’t like or that are missing in the other.

snappy

Given we work on lp:snappy most of the time now, I want to have a look at what the workflow would be on launchpad and on github.

The launchpad workflow

First of all, if the codebase were moved to launchpad’s git support, we’d be missing proper support to query merge proposal status and to link bug reports to commits.

The flow with git would be as follows:

  1. cd $GOPATH/src/launchpad.net/snappy.
  2. git branch -c <feature>
  3. edit/create/fix
  4. git commit -s -m '...'
  5. git push git+ssh://USER@git.launchpad.net/~USER/snappy
  6. Create merge proposal.
  7. Manually merge.
  8. Manually invoke test run.
  9. git push git+ssh://USER@git.launchpad.net/snappy

It is an improvement over bzr (especially since branches are colocated and go likes that), but we miss:

  • unit test runs.
  • unit test coverage tracking.
  • automatic merging, launchpad support required and a new tarmac implementation.
  • translation support, only supported for bazaar.
  • package recipe to push latest trunk to a PPA, also requires launchpad support.

That said, things are coming along and most of this would be solved by either launchpad API enhancements to understand git or webhooks.

The github workflow

Given github’s popularity, almost everything is already done for you, and since they have webhooks, the chain of events that follows an action on github gives us a very neat experience.

This is what would happen:

  1. cd $GOPATH/src/launchpad.net/snappy.
  2. git branch -c <feature>
  3. edit/create/fix
  4. git commit -s -m '...'; if the issue is referenced in the commit message it gets linked through github.
  5. git push git@github.com:/snappy.git
  6. Create pull request.
  7. travis is triggered by the event and runs everything we tell it to (a sketch of such a configuration follows after this list):
    1. Run a test build.
    2. Runs unit tests.
    3. Runs sanity checks (go vet, lint, …)
    4. Push the unit test coverage to coveralls.io
    5. Build deb.
  8. The reviewer uses the data, updated in real time, alongside their own judgment to determine if the PR should be merged. This data includes travis passing unit tests and the coverage increase or decrease, among others, with nice badges.
  9. Click on Merge PR.
  10. The master branch has its status/sanity presented with badges as well.
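
For reference, a hypothetical .travis.yml along the lines of the steps above; the Go version, package layout and coveralls invocation are illustrative assumptions, not our actual configuration:

language: go
go:
  - 1.5
install:
  - go get -t ./...
  - go get github.com/mattn/goveralls
script:
  - go vet ./...
  - go test -covermode=count -coverprofile=profile.cov .
  - goveralls -coverprofile=profile.cov -service=travis-ci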

Closing thoughts

It is no secret I’ve been wanting to move to github for a while; it solves many problems we have that we don’t want to go and solve ourselves. It is not a panacea, but it does seem fit for most of the things we need.

Given that now both launchpad and github support git we can ping pong between them as seen fit (not out of spite though).

The biggest hurdle we’d face on every change is our go import paths, which are absolute to make go get straightforward (even if we don’t take too many other advantages from it); one solution I’ve been wanting to give a try is http://getgb.io.

In some sense I sometimes get the feeling that github is like vim and launchpad is like emacs, and I am a vim person.

Sergio Schvezov

Snappy rolling back on kernel panic

Imagine you get an update and the kernel panics with that update; what are you to do? Suppose now that you have a snappy based system: this is automatically solved for you.

Here’s a short video showing this on a BeagleBone Black. The concept is quite simple: if there is a panic, we revert to the last known good state. In this video I inject an initrd that panics on boot, after issuing a snappy update and before rebooting into the update.

In this video you observe the following:

  • Manually checking for updates.
  • Manually applying the updates.
  • How the a/b boot selection is done.
  • Implicitly observe the internal (subject to change) layout of snappy-boot and system-a or system-b selection.
  • Rebooting into the new kernel.
  • Observing a panic and rebooting back into the working system.

In the normal case this would seldom happen (the broken boot aside), as the autopilot feature is enabled by default today, which you can check by running snappy config ubuntu-core.

Sergio Schvezov

Updates to snappy and ubuntu-device-flash

The past few weeks in the snappy world have seen a revolt, or better said, a rapid evolution, bringing it closer to what we wanted it to be.

Some things have changed; if you are tracking the bleeding edge you will notice a couple of them. The store, for example, now exposes packages depending on the OS release, and system images are now built against an OS release as well. For core images we have two choices:

  • 15.04
  • rolling

15.04 will be nicely locked down and guarantee stability, while rolling will just roll on and you will see it stumble at times (although it shouldn’t break badly; APIs are what we will try and aspire to keep from breaking). Try is a strong word, which is why channels are being used; the core images have the concept of a channel, which can be:

  • stable
  • rc
  • beta
  • alpha
  • edge

Today, as of this writing, we are supporting edge and alpha for each OS release, and as soon as we release we will have a stable channel enabled. Store support for channels is coming to a future near you, which means that eventually packages will be able to track different channels.

Another addition is a new snap type called oem. This snappy package allows OEMs to enable devices and boards with a degree of customization, such as:

  • preinstalled unremovable or removable packages
  • default configurations for preinstalled packages and ubuntu-core
  • lock down configurations.
  • custom DTBs
  • boot files (e.g. u-boot, uEnv.txt)

This package, uploaded to the store, allows people to create custom enablements to support their product stories. Its capabilities can grow in the future to support some other niceties.

If you happen to use the development PPA for snappy, ppa:snappy-dev/tools, you should be seeing a new ubuntu-device-flash in the updates, which supports most of this syntax and retires early enablement work.

So in order to create a default image for the Beagle Bone Black you would do:

sudo ubuntu-device-flash core 15.04 --channel edge --oem beagleblack --output bbb.img

To create a generic amd64 image:

sudo ubuntu-device-flash core 15.04 --channel edge --output x86.img

15.04 can be replaced with rolling; today the default channel is edge, but it will be stable as soon as we have something in there :-)

Keep in mind now that 15.04 and rolling will return different store search results depending on what the developer has targeted.

Installing local oem snaps by passing in --oem forces you to set up --developer-mode if the package is not signed by the store.

Last but not least, the flashassets entry from the device tarballs used to enable new devices is now ignored in favor of using the information from the oem snappy package. This means that if you have a port, you will need to move it over to the oem packaging.

Sergio Schvezov

Preliminary support for dtb override from OEM snaps

Today the always-in-motion PPA ppa:snappy-dev/tools has landed support for overriding the dtb provided by the platform in the device part with one provided by the oem snap.

The package.yaml for the oem snap has been extended a bit to support this; an example follows for extending the am335x-boneblack platform.


name: mydevice.sergiusens
vendor: sergiusens
icon: meta/icon.png
version: 1.0
type: oem

branding:
    name: My device
    subname: Sergiusens Inc.

store:
    oem-key: 123456

hardware:
    dtb: mydtb.dtb

The hardware/dtb key in the yaml holds a value which is the path to the dtb within the package, so in this case I put mydtb.dtb in the root of the snap.

After that it’s just a snappy build away:

snappy build .

In order to get this properly provisioned, we first need the latest ubuntu-device-flash from ppa:snappy-dev/tools, so let’s get it:

sudo add-apt-repository ppa:snappy-dev/tools
sudo apt update
sudo apt install ubuntu-device-flash

And now we are ready to flash

sudo ubuntu-device-flash core \
    --platform am335x-boneblack \
    --size 4 \
    --install mydevice_sergiusens_1.0_all.snap \
    --output bbb_custom.img

If everything went well, the boot partition will hold your custom dtb instead of the default one; specifying --platform is required for this.

Please note that some of these things described here are subject to change.

Sergio Schvezov

Snappy Things

A while back, Snappy was introduced and it was great. While that was happening we were already working on the next great thing: Snappy for devices, or as everyone calls it, things.

Today that was finally announced. It’s been lots of fun working on this. Enablement aside, we also created a very minimal webdm, a Web Device Management snap framework provided in the store which can be easily installed on existing devices by calling:

sudo snappy install webdm

On networks where it is allowed, it can be accessed by going to http://webdm.local:4200. Here’s a quick demo of it running on a BeagleBone Black

So to get this going, all you need is to follow what is mentioned on the main site and pop that sdcard into the device:

wget  http://cdimage.ubuntu.com/ubuntu-core/preview/ubuntu-core-WEBDM-alpha-02_armhf-bbb.img.xz
unxz ubuntu-core-WEBDM-alpha-02_armhf-bbb.img.xz
dd if=ubuntu-core-WEBDM-alpha-02_armhf-bbb.img of=/dev/sdXXX bs=32M

The alternative is to use ubuntu-device-flash to create such an image; you can get it easily on Ubuntu by adding our tools PPA:

sudo add-apt-repository ppa:snappy-dev/beta
sudo apt update
sudo apt install snappy-tools

And then move on to building your image; this is what I do:

ubuntu-device-flash --verbose core --channel ubuntu-core/devel-proposed --output snappy-core.img --size 10 --developer-mode --install webdm_0.1_multi.snap --install beagleboneblack.element14_1.0_all.snap

The install option is basically a way to install snaps during provisioning; you may notice this odd one, beagleboneblack.element14_1.0_all.snap. That is an oem snap; in summary, it’s similar to the customization framework in Ubuntu Touch, but different. Today it’s pretty minimal and just allows setting the branding text, either as can be seen in the video or at the login prompt, where you would see something like this:

(BeagleBoneBlack)ubuntu@localhost:~$

More on the oem part later.

Happy snapping!

Sergio Schvezov

Ubuntu Core

Ubuntu Core is what we’ve been working on lately, and it has been an interesting ride. It was developed completely in the open; there was just no real promotion about it until we were ready.

You may have noticed that we use ubuntu-device-flash to create this core image, and for development we used it across the board with the core subcommand. We did learn a couple of things from the phone and decided to just provide a static image that we could make sure would work for everyone giving it a try (aka more QA). In essence you can still upgrade, and if something is not to your liking, just roll back; it’s that neat. So in summary, ubuntu-device-flash today is just a step in the release process to get to the final image.

Yesterday I played around with creating a snap for camlistore and it was a breeze. To get it, just snap install camlistore; all the command line tools are in there, provided by the binary stanza from package.yaml. The camlistored daemon is created from the services list, where I just needed to provide a start entry, which in the background creates a systemd unit.
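
As a rough sketch (treat the binary names and paths as assumptions on my part), a package.yaml along those lines would look something like:

name: camlistore
version: 0.8
vendor: Sergio Schvezov <sergio.schvezov@ubuntu.com>
binaries:
  - name: bin/camput
services:
  - name: camlistored
    start: bin/camlistored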

The beauty here is that I don’t really need to know much of the underlying technology, and that is awesome for just quickly creating a snap.

What is missing here though is an easy way to configure the package that was just installed. For now, it is easy enough to look at the file system layout and go to /var/lib/apps/<app-name>/<version>/, which would be /var/lib/apps/camlistore/0.8, and within we’d have .config/camlistore/server-config.json; in most cases you’d want to set up your authentication in there.

And here’s the mandatory screenshot of this running on my kvm instance:

Sergio Schvezov

Travis and ruby gems setup on Ubuntu

Quick install

Most of the installation instructions that dangle around the web just make you sudo gem install. I don’t like that, so here’s the quick reference for when I need to do this next time:

sudo apt install ruby-dev
export GEM_HOME=$HOME/gems
gem install travis -v 1.7.4 --no-rdoc --no-ri

And this goes into ~/.bashrc:

export GEM_HOME=$HOME/gems
export PATH=$PATH:$GEM_HOME/bin
