Canonical Voices

What Leo Arias talks about

Leo Arias

I'm a Quality Assurance Engineer. A big part of my job is to find problems, then make sure that they are fixed and automated so they don't regress. If I do my job well, then our process will identify new and potential problems early, without manual intervention from anybody on the team. It's like trying to automate myself, every day, until I'm no longer needed and have to jump to another project.

However, as we work on the project, it's unavoidable that many small manual tasks accumulate in my hands. This happens because I set up the continuous integration infrastructure, so I'm the one who knows it best and has the easiest access; or because I'm the one who requested access to the build farm, so I'm the one with the password; or because I configured the staging environment and I'm the only one who knows the details. This is a great way to achieve job security, but it doesn't lead us to higher quality. It's a job half done, and it's terribly boring to be a bottleneck and a silo of information about testing and the release process. All of these tasks should be shared by the whole team, just like all the other tasks in the project.

There are two problems. First, most of these tasks involve delicate credentials that shouldn't be freely shared with everybody. Second, even if a task itself is simple and quick to execute, it's not so simple to document how to set up the environment to execute it, nor how to make sure that the right task is executed at the right moment.

ChatOps is how I like to solve all of this. The idea is that every task that requires manual intervention is implemented in a script that can be executed by a bot. This bot joins the communication channel where the entire team is present, and it executes the tasks and reports their results, either in response to external events that happen somewhere in the project infrastructure, or in response to a direct request from a team member in the channel. The credentials are kept safe: they only have to be shared with the bot, and permissions can be handled with access control lists or membership of the channel. And the operational knowledge is shared with the whole team, because they are all listening in the same channel with the bot. This means that anybody can execute the tasks, and the bot assists them to make it simple.

In snapcraft we started writing our bot not so long ago. It's called snappy-m-o (Microbe Obliterator), and it's written in Python with Errbot. We, of course, packaged it as a snap so we have automated delivery every time we change its source code, and the bot is also auto-updated on the server, so in the chat we are always interacting with the latest and greatest.

Let me show you how we started it, in case you want to get your own. But let's call this one Baymax, and let's make a virtual environment with errbot, to experiment.

drawing of the Baymax bot

$ mkdir -p ~/workspace/baymax
$ cd ~/workspace/baymax
$ sudo apt install python3-venv
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install errbot
$ errbot --init

The last command will initialize this bot with a super simple plugin, and will configure it to work in text mode. This means that the bot won't be listening on any channel; you can just interact with it through the command line (the ops, without the chat). Let's try it:

$ errbot
[...]
>>> !help
All commands
[...]
!tryme - Execute to check if Errbot responds to command.
[...]
>>> !tryme
It works !
>>> !shutdown --confirm

tryme is the command provided by the example plugin that errbot --init created. Take a look at the file plugins/err-example/example.py; errbot is just lovely. To define your own plugin you just need a class that inherits from errbot.BotPlugin, with the commands as methods decorated with @errbot.botcmd. I won't dig into how to write plugins, because Errbot has amazing documentation about plugin development. You can also read the plugins we have in our snappy-m-o: one for triggering autopkgtests on GitHub pull requests, and the other for subscribing to the results of the pull request tests.
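Just to give you an idea of the shape of a plugin, here is a minimal sketch of one for Baymax. The BotPlugin class and the botcmd decorator come from Errbot; the class name, the commands and their behaviour are only illustrative, and a plugin also needs a small .plug metadata file next to it (copy the one from the example plugin and adjust the names):

from errbot import BotPlugin, botcmd


class Baymax(BotPlugin):
    """A tiny example plugin for our chat ops bot."""

    @botcmd
    def hello(self, msg, args):
        """Reply with a greeting, to check that the plugin is loaded."""
        return 'Hello {}, I am here to help.'.format(msg.frm)

    @botcmd
    def release(self, msg, args):
        """Pretend to trigger a release of the project passed as argument."""
        # A real command would call your release script here, with the
        # credentials kept on the bot's server instead of in the channel.
        return 'Releasing {}... (just an example, nothing happens)'.format(args)

Drop the module in the plugins directory, restart the bot, and the new commands will show up in !help.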

Let's change the config of Baymax to put it in an IRC chat:

$ pip install irc

And in the config.py file, set the following values:

BACKEND = 'IRC'
BOT_IDENTITY = {
    'nickname' : 'baymax-elopio',  # Nicknames need to be unique, so append your own.
                                   # Remember to replace 'elopio' with your nick everywhere
                                   # from now on.
    'server' : 'irc.freenode.net',
}
CHATROOM_PRESENCE = ('#snappy',)

Run it again with the errbot command, but this time join the #snappy channel on irc.freenode.net and write !tryme in there. It works! :)

screenshot of errbot on IRC
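By the way, config.py is also where the access control lists mentioned earlier can live, so the delicate commands are only accepted from certain people or channels. A hedged sketch, with command names and nicks that are only examples (check the Errbot documentation for the exact options):

ACCESS_CONTROLS = {
    'tryme': {'allowrooms': ('#snappy',)},
    'shutdown': {'allowusers': ('elopio',)},
}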

So, this is very simple, but let's package it now to start with the good practice of continuous delivery before it gets more complicated. As usual, it just requires a snapcraft.yaml file with all the packaging info and metadata:

name: baymax-elopio
version: '0.1-dev'
summary: A test bot with errbot.
description: Chat ops bot for my team.
grade: stable
confinement: strict

apps:
  baymax-elopio:
    command: env LC_ALL=C.UTF-8 errbot -c $SNAP/config.py
    plugs: [home, network, network-bind]

parts:
  errbot:
    plugin: python
    python-packages: [errbot, irc]
  baymax:
    source: .
    plugin: dump
    stage:
      - config.py
      - plugins
    after: [errbot]

And we need to change a few more values in config.py to make sure that the bot is relocatable, that we can run it in the isolated snap environment, and that we can add plugins after it has been installed:

import os

BOT_DATA_DIR = os.environ.get('SNAP_USER_DATA')
BOT_EXTRA_PLUGIN_DIR = os.path.join(os.environ.get('SNAP'), 'plugins')
BOT_LOG_FILE = BOT_DATA_DIR + '/err.log'

One final try, this time from the snap:

$ sudo apt install snapcraft
$ snapcraft
$ sudo snap install baymax*.snap --dangerous
$ baymax-elopio

And go back to IRC to check.

The last thing is to push the source code we have just written to a GitHub repo, and enable continuous delivery in build.snapcraft.io. Then go to your server and install the bot with sudo snap install baymax-elopio --edge. Now every time somebody from your team makes a change in the master repo in GitHub, the bot on your server will be automatically updated to get those changes within a few hours, without any work on your side.

If you are into chatops, make sure that every time you do a manual task, you also plan for some time to turn that task into a script that can be executed by your bot. And get ready to enjoy tons and tons of free time, or just keep going through those 400 open bugs, whichever you prefer :)

Leo Arias

I love playing with my prototyping boards. Here at Ubuntu we are designing the core operating system to support every single-board computer, and to keep it safe, updated and simple. I've learned a lot about physical computing, but I always have a big problem when my prototype is done and I want to deploy it. I am working with a Raspberry Pi, a DragonBoard, and a BeagleBone. They are all very different, with different architectures, pins, onboard capabilities and peripherals, and they can run different operating systems. When I started learning about this, I had to write three very different programs if I wanted to try my prototype on all my boards.

picture of the three different SBCs

Then I found Gobot, a framework for robotics and IoT that supports my three boards, and many more. With the added benefit that you can write all the software in the lovely and clean Go language. The Ubuntu store supports all their architectures too, and packaging Go projects with snapcraft is super simple. So we can combine all of this to make a single snap package that, with the help of Gobot, will work on every board, and deploy it to all the users of these boards through the snap store.

Let's dig into the code with a very simple example to blink an LED, first for the Raspberry Pi only.

package main

import (
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  adaptor := raspi.NewAdaptor()
  led := gpio.NewLedDriver(adaptor, "7")

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

In there you will see some of the Gobot concepts. There's an adaptor for the board, a driver for the specific device (in this case the LED), and a robot to control everything. In this program, there are only two things specific to the Raspberry Pi: the adaptor and the name of the GPIO pin ("7").

picture of the Raspberry Pi prototype

It works nicely on one of the boards, but let's extend the code a little to support the other two.

package main

import (
  "log"
  "os/exec"
  "strings"
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/beaglebone"
  "gobot.io/x/gobot/platforms/dragonboard"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  out, err := exec.Command("uname", "-r").Output()
  if err != nil {
    log.Fatal(err)
  }
  var adaptor gobot.Adaptor
  var pin string
  kernelRelease := string(out)
  if strings.Contains(kernelRelease, "raspi2") {
    adaptor = raspi.NewAdaptor()
    pin = "7"
  } else if strings.Contains(kernelRelease, "snapdragon") {
    adaptor = dragonboard.NewAdaptor()
    pin = "GPIO_A"
  } else {
    adaptor = beaglebone.NewAdaptor()
    pin = "P8_7"
  }
  digitalWriter, ok := adaptor.(gpio.DigitalWriter)
  if !ok {
    log.Fatal("Invalid adaptor")
  }
  led := gpio.NewLedDriver(digitalWriter, pin)

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

We are basically adding a block to select the right adaptor and pin, depending on which board the code is running on. Now we can compile this program, throw the binary on the board, and give it a try.

picture of the Dragonboard prototype

But we can do better. If we package this in a snap, anybody with one of the boards and an operating system that supports snaps can easily install it. We also open the door to continuous delivery and crowd testing. And as I said before, it's super simple: just put this in the snapcraft.yaml file:

name: gobot-blink-elopio
version: master
summary:  Blink snap for the Raspberry Pi with Gobot
description: |
  This is a simple example to blink an LED in the Raspberry Pi
  using the Gobot framework.

confinement: devmode

apps:
  gobot-blink-elopio:
    command: gobot-blink

parts:
  gobot-blink:
    source: .
    plugin: go
    go-importpath: github.com/elopio/gobot-blink

To build the snap, here is a cool trick thanks to the work that kalikiana recently added to snapcraft. I'm writing this code on my development machine, which is amd64. But the Raspberry Pi and the BeagleBone are armhf, and the DragonBoard is arm64, so I need to cross-compile the code to get binaries for all the architectures:

snapcraft --target-arch=armhf
snapcraft clean
snapcraft --target-arch=arm64

That will leave two .snap files in my working directory that I can then upload to the store with snapcraft push. Or I can just push the code to GitHub and let build.snapcraft.io take care of building and pushing for me.

Here is the source code for this simple example: https://github.com/elopio/gobot-blink

Of course, Gobot supports many more devices that will let you build complex robots. Just take a look at the documentation in the Gobot site, and at the guide about deployable packages with Gobot and snapcraft.

picture of the BeagleBone prototype

If you have one of the boards I'm using here to play, give it a try:

sudo snap install gobot-blink-elopio --edge --devmode
sudo gobot-blink-elopio

Now my experiments will be to try to make the snap more secure, with strict confinement. If you have any questions or want to help, we have a topic in the forum.

Leo Arias

Travis CI offers a great continuous integration service for projects hosted on GitHub. With it, you can run tests, deliver artifacts and deploy applications every time you push a commit, on pull requests, after they are merged, or on some other schedule.

Last week Travis CI updated the Ubuntu 14.04 (Trusty) machines that run your tests and deployment steps. This update came with a nice surprise for everybody working to deliver software to Linux users, because it is now possible to install snaps in Travis!

I've been excited all week telling people about all the doors that this opens; but if you have been following my adventures in the Ubuntu world, by now you can probably guess that I'm mostly thinking about all the potential this has for automated testing. For the automation of user acceptance tests.

User acceptance tests are executed from the point of view of the user, with your software presented to them as a black box. The tests can only interact with the software through the entry points you define for your users. If it's a CLI application, the tests will call commands and subcommands and check the outputs. If it's a website or a desktop application, the tests will click things, enter text and check the changes in the GUI. If it's a service with an HTTP API, the tests will make requests and check the responses. In these tests, the closer you can get to simulating the environment and behaviour of your real users, the better.
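For example, for a CLI application one of these checks can be as small as running the command exactly like a user would and asserting on its output. Here is a minimal sketch in Python, using a made-up hello command; the command name and the expected text are only placeholders:

import subprocess
import unittest


class CLIAcceptanceTest(unittest.TestCase):
    """Exercise the application only through its public CLI, as a user would."""

    def test_prints_greeting(self):
        # 'hello' is a hypothetical command, used here only for illustration.
        output = subprocess.check_output(['hello'], universal_newlines=True)
        self.assertIn('Hello', output)


if __name__ == '__main__':
    unittest.main()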

Snaps are great for the automation of user acceptance tests because they are immutable and they bundle all their dependencies. With this we can make sure that your snap will work the same on any of the operating systems and architectures that support snaps. The snapd service takes care of hiding the differences and presenting a consistent execution environment for the snap. So, getting a green execution of these tests in the Trusty machine of Travis is a pretty good indication that it will work on all the active releases of Ubuntu, Debian, Fedora and even on a Raspberry Pi.

Let me show you an example of what I'm talking about, obviously using my favourite snap called IPFS. There is more information about IPFS in my previous post.

Check below the packaging metadata for the IPFS snap, a single snapcraft.yaml file:

name: ipfs
version: master
summary: global, versioned, peer-to-peer filesystem
description: |
  IPFS combines good ideas from Git, BitTorrent, Kademlia, SFS, and the Web.
  It is like a single bittorrent swarm, exchanging git objects. IPFS provides
  an interface as simple as the HTTP web, but with permanence built in. You
  can also mount the world at /ipfs.
confinement: strict

apps:
  ipfs:
    command: ipfs
    plugs: [home, network, network-bind]

parts:
  ipfs:
    source: https://github.com/ipfs/go-ipfs.git
    plugin: nil
    build-packages: [make, wget]
    prepare: |
      mkdir -p ../go/src/github.com/ipfs/go-ipfs
      cp -R . ../go/src/github.com/ipfs/go-ipfs
    build: |
      env GOPATH=$(pwd)/../go make -C ../go/src/github.com/ipfs/go-ipfs install
    install: |
      mkdir $SNAPCRAFT_PART_INSTALL/bin
      mv ../go/bin/ipfs $SNAPCRAFT_PART_INSTALL/bin/
    after: [go]
  go:
    source-tag: go1.7.5

It's not the simplest snap, because they use their own build tool to get the Go dependencies and compile; but it's also not too complex. If you are new to snaps and want to understand every detail of this file, or you want to package your own project, the tutorial to create your first snap is a good place to start.

What's important here is that if you run snapcraft using the snapcraft.yaml file above, you will get the IPFS snap. If you install that snap, then you can test it from the point of view of the user. And if the tests work well, you can push it to the edge channel of the Ubuntu store to start the crowdtesting with your community.

We can automate all of this with Travis. The snapcraft.yaml for the project must already be in the GitHub repository, and we will add a .travis.yml file there. Travis has good docs on preparing your account. First, let's see what's required to build the snap:

sudo: required
services: [docker]

script:
  - docker run -v $(pwd):$(pwd) -w $(pwd) snapcore/snapcraft sh -c "apt update && snapcraft"

For now, we build the snap in a Docker container to keep things simple. We have work in progress to be able to install snapcraft in Trusty as a snap, so soon this will be even nicer, running everything directly on the Travis machine.

This previous step will leave the packaged .snap file in the current directory, so we can install it by adding a few more steps to the Travis script:

[...]

script:
  - docker [...]
  - sudo apt install --yes snapd
  - sudo snap install *.snap --dangerous

And once the snap is installed, we can run it and check that it works as expected. Those checks are our automated user acceptance tests. IPFS has a CLI client, so we can just run commands and verify outputs with grep. Or we can get fancier using shunit2 or bats. But the basic idea would be to add something like this to the Travis script:

[...]

script:
  [...]
  - /snap/bin/ipfs init
  - /snap/bin/ipfs cat /ipfs/QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T/readme | grep -z "^Hello and Welcome to IPFS!.*$"
  - [...]

If one of those checks fails, Travis will mark the execution as failed and stop our release process until we fix it. If, instead, all of the checks pass, then this version is good enough to put into the store, where people can take it and run exploratory tests to try to find problems caused by weird scenarios that we missed in the automation. To help with that we have the snapcraft enable-ci travis command, and a tutorial to guide you step by step in setting up continuous delivery from Travis CI.

For the IPFS snap we have had a manual smoke suite for a long time, which our amazing community of testers has been executing over and over again, every time we want to publish a new release. I've turned it into a simple bash script that from now on will be executed frequently by Travis, and will tell us if there's something wrong before anybody gives it a try manually. With this, our community of testers will have more time to run new and interesting scenarios, trying to break the application in clever ways, instead of running the same repetitive steps many times.

Thanks to Travis and snapcraft we no longer have to worry about a big part of our release process. Continuous integration and delivery can be fully automated, and we will have to take a look only when something breaks.

As for IPFS, it will keep being my guinea pig to guide new features for snapcraft and showcase them when ready. It has many more commands that have to be added to the automated test suite, and it also has a web UI and an HTTP API. Lots of things to play with! If you would like to help, and on the way learn about snaps, automation and the decentralized web, please let me know. You can take a look at my IPFS snap repo for more details about testing snaps in Travis, and other tricks for the build and deployment.
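As a taste of what a check for the HTTP API could look like, here is a rough sketch in Python. It assumes the IPFS daemon is already running and that its API listens on the default 127.0.0.1:5001; the endpoint and the response field come from the go-ipfs API docs, so double-check them against the version you are testing:

import json
import urllib.request

# /api/v0/version returns a small JSON document with the daemon version.
# Newer go-ipfs releases only accept POST on the API, so an empty body is
# sent to force a POST request.
request = urllib.request.Request('http://127.0.0.1:5001/api/v0/version', data=b'')
with urllib.request.urlopen(request) as response:
    version = json.loads(response.read().decode())
print('ipfs daemon version:', version.get('Version'))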

screenshot of the IPFS smoke test running in travis

Leo Arias

Here at Ubuntu we are working hard on the future of free software distribution. We want developers to release their software to any Linux distro in a way that's safe, simple and flexible. You can read more about this at snapcraft.io.

This work is extremely fun because we have to work constantly with a wild variety of free software projects to make sure that the tools we write are usable and that the workflow we are proposing makes sense to developers and gives them a lot of value in return. Today I want to talk about one of those projects: IPFS.

IPFS is the permanent and decentralized web. How cool is that? You get a peer-to-peer distributed file system where you store and retrieve files. They have a nice demo on their website, and you can give it a try on Ubuntu Trusty, Xenial or later by running:

$ sudo snap install ipfs

screenshot of the IPFS peers

So, here's one of the problems we are trying to solve. We have millions of users on the Trusty version of Ubuntu, released during 2014. We also have millions of users on the Xenial version, released during 2016. Those two versions are stable now and, following the Ubuntu policies, they will get only security updates for 5 years. That means that it's very hard, almost impossible, for a young project like IPFS to get into the Ubuntu archives for those releases. There would be no simple way for all those users to enjoy IPFS; they would have to use a Personal Package Archive or install the software from a tarball. Both methods are complex, with high security risks, and both require the users to put a lot of trust in the developers, more than they should ever trust anybody.

We are closing the Zesty release cycle, which will go out in April, so it's too late there too. IPFS could make a deb, put it into Debian, wait for it to sync to Ubuntu, and then it's likely that it would be ready for the October release. Aside from the fact that we have to wait until October, there are a few other problems. First, making a deb is not simple. It's not too hard either, but it requires quite some time to learn to do it right. Second, I mentioned that IPFS is young; they are on version 0.4.6. So it's very unlikely that they will want to support this early version for as long as Debian and Ubuntu require. And they are not only young, they are also fast. They add new features and bug fixes every day and make new releases almost every week, so they need a feedback loop that's just as fast. A six-month release cycle is way too slow. That works nicely for some kinds of free software projects, but not for one like IPFS.

They have been kind enough to let me play with their project and use it as a test subject to verify our end-to-end workflow. My passion is testing, so I have been focusing on continuous delivery to get happy early adopters and constant feedback about the most recent changes in the project.

I started by making a snapcraft.yaml file that contains all the metadata required for the snap package. The file is pretty simple, and making the first version took me just a couple of minutes, true story. Since then I've been slowly improving and updating it with small changes. If you are interested in doing the same for your project, you can read the tutorial to create a snap.

I built and tested this snap locally on my machines. It worked nicely, so I pushed it to the edge channel of the Ubuntu Store. There the snap is not visible in user searches; only the people who know about it will be able to install it. I told a couple of my friends to give it a try, and they came back telling me how cool IPFS was. Great choice for my first test subject, no doubt.

At this point, following the pace of the project by manually building and pushing new versions to the store was too demanding; they go too fast. So I started working on continuous delivery, translating everything I did manually into scripts and hooking them up to Travis CI. After a few days it got pretty fancy; take a look at the GitHub repo of the IPFS snap if you are curious. Every day, a new version is packaged from the latest state of the master branch of IPFS and pushed to the edge channel, so we have a constant flow of new releases for hardcore early adopters. After they install IPFS from the edge channel once, the package will be automatically updated on their machines every day, so they don't have to do anything else, just use IPFS as they normally would.

Now, with this constant stream of updates, my two friends and I were not enough to validate all the new features. We could never be sure if the project was stable enough to be pushed to the stable channel and made available to the millions and millions of Ubuntu users out there.

Luckily, the Ubuntu community is huge, and they are very nice people. It was time to use the wisdom of the crowds. I invited the bravest of them to keep the snap installed from edge, and I defined a simple pipeline that leads to the stable release using the four available channels in the Ubuntu store:

  • When a revision is tagged in the IPFS master repo, it is automatically pushed to the edge channel from Travis, just as with any other revision.
  • Travis notifies me about this revision.
  • I install this tagged revision from edge, and run a super quick test to make sure that the IPFS server starts.
  • If it starts, I push the snap to the beta channel.
  • With a couple of my friends, we run a suite of smoke tests.
  • If everything goes well, I push the snap to the candidate channel.
  • I notify the community of Ubuntu testers about a new version in the candidate channel. This is where the magic of crowd testing happens.
  • The Ubuntu testers run the smoke tests in all their machines, which gives us the confidence we need because we are confirming that the new version works on different platforms, distros, distro releases, countries, network topologies, you name it.
  • This candidate release is left for some time in this channel, to let the community run thorough exploratory tests, trying to find weird usage combinations that could break the software.
  • If the tag was for a final upstream release, the community also runs update tests to make sure that the users with the stable snap installed will get this new version without issues.
  • After all the problems found by the community have been resolved or at least acknowledged and triaged as not blockers, I move the snap from candidate to the stable channel.
  • All the users following the stable channel will automatically get a very well tested version, thanks to the community who contributed with the testing and accepted a higher level of risk.
  • And we start again, the never-ending cycle of making free software :)

Now, let's go back to the discussion about trust. Debian and Ubuntu, and most of the other distros, rely on maintainers and distro developers to package and review every change to the software that they put in their archives. That is a lot of work, and it slows down the feedback loop a lot, as we have seen. Here we automated most of the tasks of a distro maintainer, and the new revisions can be delivered directly to the users without any reviews. So the users are trusting their upstream developers directly, without intermediaries, but it's very different from the unsafe methods that existed before. The code of a snap is installed read-only and very well constrained, with access only to its own safe space. Any other access needs to be declared by the snap, and the user is always in control of which access is granted to the application.

This way upstream developers can go faster but without exposing their users to unnecessary risks. And they just need a simple snapcraft.yaml file and to define their own continuous delivery pipeline, on their own timeline.

By removing the distro as the intermediary between the developers and their users, we are also opening a new world full of possibilities for the Ubuntu community. Now they can collaborate constantly and directly with upstream developers, closing this quick feedback loop. In the future we will tell our children about the good old days when we had to report a bug in Ubuntu, which would be copied to Debian, then sent upstream to the developers, and after 6 months the fix would arrive. It was fun, and it led us to where we are today, but I will not miss it at all.

Finally, what's next for IPFS? After this experiment we got more than 200 unique testers and almost 300 test installs. I now have great confidence in this workflow: new revisions were delivered on time, existing Ubuntu testers became new IPFS contributors, and I can now safely recommend that IPFS users install the stable snap. But there's still plenty of work ahead. There are still manual steps in the pipeline that can be scripted, the smoke tests can be automated to leave more free time for exploratory testing, we can also release to the armhf and arm64 architectures to get IPFS into the IoT world, and, well, of course the developers are not stopping; they keep releasing new interesting features. As I said, plenty of opportunities for us as distro contributors.

screenshot of the IPFS snap stats

I'd like to thank everybody who tested the IPFS snap, especially the following people for their help and feedback:

  • freekvh
  • urcminister
  • Carla Sella
  • casept
  • Colin Law
  • ventrical
  • cariboo
  • howefield

<3

If you want to release your project to the Ubuntu store, take a look at the snapcraft docs, the Ubuntu tutorials, and come talk to us in Rocket Chat.

Leo Arias

Last Sunday we went to the Poás Volcano to make free maps.

This is the second geek outing of the JaquerEspéis. From the first one we learned that we had to wait until summer, because it's not possible to make maps during a storm. And the day was perfect. It wasn't just sunny; the crater was totally clear, so we could add a new spot of Costa Rica to the virtual tour.

In addition, this time we arrived much better prepared, with multiple phones running Mapillary, OsmAnd and OSMTracker, a 360 camera, a Garmin GPS, a drone, and even a notebook and two biologists.

The procession of the MapperSpace

Here's how it works. Everybody with the GPS on their phone activated waits until it finds the location. Then each person uses the application of their preference to collect data: pictures, audio, video, text notes, traces, annotations in the notebook...

Later, in our homes, we upload, publish and share all the collected data. This is useful to improve the free maps of OpenStreetMap. We add everything from really simple things, like the location of a trash bin, to really important things, like how accessible the place is for a person in a wheelchair, together with the location of all the accesses or the places that lack them. Each person improves the map a little, in the region that they know or passed through. With more than 3 million users, OpenStreetMap is the best map of the world that exists; and it has a particular importance in regions like ours, without a lot of economic potential for the megacorporations that make and sell closed maps while stealing private data from their users.

Because the maps we make are free, what comes next has no limits. There are groups working on the reconstruction of 3D models from the pictures, on the identification and interpretation of signs, on applications to calculate the optimal route to reach any place using any combination of means of transportation, on applications to assist decision making during the design of the future of a city, and many other things. All of this based on shared knowledge and community.

The image above is the virtual tour in Mapillary. As we recorded it with the 360 camera, you can click and drag with the mouse to see all the angles. You can also click the play button above to follow the path we took. Or you can click on any of the green dots on the map to follow your own path.

Thank you very much to everybody who joined us, especially to Denisse and Charles for being our guides, and for filling up the trip with interesting information about the flora, fauna, geology and historic importance of El Poás.

Members of the MapperSpace

(More pictures and videos here)

The next MapperSpace will be on March 12th.

Leo Arias

Call for testing: MySQL

I promised that more interesting things were going to be available soon for testing in Ubuntu. There's plenty coming, but today here is one of the greatest:

$ sudo snap install mysql --channel=8.0/beta

screenshot of mysql snap running

Lars Tangvald and other people at MySQL have been working on this snap for some time, and now they are ready to give it to the community for crowd testing. If you have some minutes, please give them a hand.

We have a testing guide to help you get started.

Remember that this should run on Trusty, Xenial, Yakkety, Zesty and all flavours of Ubuntu. It would be great to get a diverse pool of platforms and test it everywhere.

Here we are introducing a new concept: tracks. Notice that we are using --channel=8.0/beta, instead of only --beta as we used to do before. That's because MySQL has two different major versions currently active. In order to try the other one:

$ sudo snap install mysql --channel=5.7/beta

Please report back your results. Any kind of feedback will be highly appreciated, and if you have doubts or need a hand to get started, I'm hanging around in Rocket Chat.

Leo Arias

There is a huge announcement coming: snaps now run in Ubuntu 14.04 Trusty Tahr.

Take a moment to note how big this is. Ubuntu 14.04 is a long-term release that will be supported until 2019. Ubuntu 16.04 is also a long-term release that will be supported until 2021. We have many, many users on both releases, some of whom will stay there until we drop the support. Before this snappy new world, all those users were stuck with the versions of all their programs released in 2014 or 2016, getting only updates for security and critical issues. Just try to remember how your favorite program looked 5 years ago; maybe it didn't even exist. We were used to choosing between stability and cool new features.

Well, a new world is possible. With snaps you can have a stable base system with frequent updates for every program, without the risk of breaking your machine. And now if you are a Trusty user, you can just start taking advantage of all this. If you are a developer, you have to prepare only one release and it will just work in all the supported Ubuntu releases.

Awesome, right? The Ubuntu devs have been doing a great job. snapd has already landed in the Trusty archive, and we have been running many manual and automated tests on it. So we would now like to invite the community to test it, explore weird paths, and try to break it. We will appreciate it very much, and all of those Trusty users out there will love it when they receive loads of new, high-quality free software on their oldie machines.

So, how to get started?

If you are already running Trusty, you will just have to install snapd:

$ sudo apt update && sudo apt install snapd

Reboot your system after that in case you had a kernel update pending, and to get the paths for the new snap binaries set up.

If you are running a different Ubuntu release, you can install Ubuntu in a virtual machine. Just make sure that you install the image from http://releases.ubuntu.com/14.04/ubuntu-14.04.5-desktop-amd64.iso.

Once you have Trusty with snapd ready, try a few commands:

$ snap list
$ sudo snap install hello-world
$ hello-world
$ snap find something

screenshot of snaps running in Trusty

Keep searching for snaps until you find one that's interesting. Install it, try it, and let us know how it goes.

If you find something wrong, please report a bug with the trusty tag. If you are new to the Ubuntu community or get lost on the way, come and join us in Rocket Chat.

And after a good session of testing, sit down, relax, and get ohmygiraffe. With love from popey:

$ sudo snap install ohmygiraffe
$ ohmygiraffe

screenshot of ohmygiraffe

Leo Arias

After a little break, on the first Friday of February we resumed the Ubuntu Testing Days.

This session was pretty interesting because, after setting some of the bases last year, we are now ready to dig deep into the most important projects that will define the future of Ubuntu.

We talked about Ubuntu Core, a snap package that is the base of the operating system. Because it is a snap, it gets the same benefits as all the other snaps: automatic updates, rollbacks in case of error during installation, read-only mount of the code, isolation from other snaps, multiple channels on the store for different levels of stability, etc.

The features, philosophy and future of Core were presented by Michael Vogt and Zygmunt Krynicki, and then Federico Giménez did a great demo of how to create an image and test it in QEMU.

Click the image below to watch the full session.

thumbnail of the session recording

There are plenty of resources on the Ubuntu websites related to Ubuntu Core.

To get started, we recommend following this guide to run the operating system in a virtual machine.

After that, and if you are feeling brave and want to help Michael, Zygmunt and Federico, you can download the candidate image instead, from http://cdimage.ubuntu.com/ubuntu-core/16/candidate/pending/ubuntu-core-16-amd64.img.xz. This is the image that's currently being tested, so if you find something wrong or weird, please report a bug in Launchpad.

Finally, if you want to learn more about the snaps that compose the image and take a peek at the things that we'll cover in the following testing days, you can follow the tutorial to create your own Core image.

In this session we were also joined by Robert Wolff, who works on 96Boards at Linaro. He has an awesome show every Thursday called Open Hours. At 96Boards they are building open Linux boards for prototyping and embedded computing. Anybody can jump into the Open Hours to learn more about this cool work.

The great news that Robert brought is that both Open Hours and Ubuntu Testing Days will be focused on Ubuntu Core this month. He will be our guest again next Friday, February 10th, when he will be talking about the DragonBoard 410c. Also my good friend Oliver Grawert will be with us, and he will talk about the work he has been doing to enable Ubuntu on this board.

Great topics ahead, and a whole new world of possibilities now that we are mixing free software with open hardware and affordable prototyping tools. Remember, every Friday at http://ubuntuonair.com/, don't miss it.

Leo Arias

Happy new year Ubunteros and Ubunteras!

If you have been following our testing days, you will know by now that our intention is to get more people contributing to Ubuntu and free software projects, and to help them get started through testing and related tasks. So we will be making frequent calls for testing where you can contribute and learn. Educational AND fun ^_^

To start the year, I would like to invite you to test the IPFS candidate snap. IPFS is a really interesting free project for distributed storage. You can read more about it and watch a demo on the IPFS website.

We have pushed a nice snap with their latest stable version to the candidate channel in the store. But before we publish it to the stable channel we would like to get more people testing it.

You can get a clean and safe environment to test by following some of the guides you'll find in the summaries of the past testing days.

Or, if you want to use your current system, you can just do:

$ sudo snap install ipfs --candidate

I have written a gist with a simple guide to get started testing it.

If you finish that successfully and still have more time, or are curious about IPFS, please continue with an exploratory testing session. The idea here is just to execute random commands, try unusual inputs and play around.

You can get ideas from the IPFS docs.

When you are done, please send me an email with your results and any comments. And if you get stuck or have any kind of question, please don't hesitate to ask. Remember that we welcome everybody.

Leo Arias

Today we had the last Ubuntu Testing Day of the year.

We invited Sergio Schvezov and Joe Talbott to join Kyle and me. Together we have been working on Snapcraft the whole year.

Sergio did a great introduction to snapcraft, and showed some of the new features that will land next week in Ubuntu. And because it was the last day of work for everybody (except Kyle), we threw some beers into the hangout and made it our team's end-of-year party.

You can watch the full recording by clicking the image below.

thumbnail of the session recording

Snapcraft is one of the few projects that have an exception to land new features into released versions of Ubuntu. So every week we are landing new things in Xenial and Yakkety. This means that we need to constantly test that we are not breaking anything for all the people using stable Ubuntu releases; and it means that we would love to have many more hands helping us with those tests.

If you would like to help, all you have to do is set up a virtual machine and enable the proposed pocket in there.

This is the active bug for the Stable Release Update of snapcraft 2.24: bug #1650632

Before I shut down my computer and start my holidays, I would like to thank all the Ubuntu community for one more year, it has been quite a ride. And I would like to thank Sergio, Kyle and Joe in particular. They are the best team a QA Engineer could ask for <3.

See you next year for more testing days.

Leo Arias

Ubuntu has a six-month release cycle. This means that we work for six months updating software, adding new features and making sure that it all works consistently together when it’s integrated into the operating system, and then we release it. Once we make a release, it is considered stable, and then we almost exclusively add to that release critical bug fixes and patches for security vulnerabilities. The exceptions are few, and are for software that’s important for Ubuntu and that we want to keep up-to-date with the latest features even in stable releases.

These bug fixes, security patches and exceptional new features require a lot of testing, because right after they are published they will reach all the Ubuntu users that are in the stable release. And we want the release to remain stable, never ever introduce a regression that will make those users unhappy.

We call all of them Stable Release Updates, and we test them in the proposed pocket of the Ubuntu archive. This is obviously not enabled by default, so the brave souls who want to help us test the changes in proposed need to enable it.

Before we go on, I would recommend testing SRUs in a virtual machine. Once you enable proposed following this guide, you will get constant and untested updates from many packages, and these updates will break parts of your system every now and then. It's not likely to be critical, but it can bother you if it happens on the machine you need for your work, or other stuff. And if somebody makes a mistake, you might need to reinstall the system.

You will also have to find a package that needs testing. Snapcraft is one of the few exceptions allowed to land in a stable release every week, so let's use it as an example. Let's say you want to help us test the upcoming release of snapcraft in Ubuntu 16.04.

With your machine ready, and before enabling proposed, install the already released version of the package you want to test. This way you’ll later test an update to the newer version, just what a normal user would get once the update is approved and lands in the archive. So in a terminal, write:

  $ sudo apt update
  $ sudo apt install snapcraft

Or replace snapcraft with whatever package you are testing. If you are doing it just during the weekend after I am writing this, the released version of snapcraft will be 2.23. You can check it with:

  $ dpkg -s snapcraft | grep Version
  Version: 2.23

Now, to enable proposed, open the Ubuntu Software application, and select Software & Updates from the menu at the top of the window.

image

From the Software & Updates window, select the Developer Options tab. And check the item that says Pre-released updates.

image

This will prompt for your password. Enter it, and then click the Close button. You will be asked if you want to reload your sources, so yes, click the Reload button.

image

Finally, try to upgrade. If there is a newer version available in the proposed pocket for the package you are testing, you will now get it.

  $ sudo apt install snapcraft
  $ dpkg -s snapcraft | grep Version
  Version: 2.24

Every time there is a proposed update, the package will have corresponding SRU bugs with the tag “verification-needed”. In the case of snapcraft this weekend, this is the bug for the 2.24 proposed update: https://bugs.launchpad.net/ubuntu/+source/snapcraft/+bug/1650632

The SRU bugs will have instructions about how to test that the fix or the new features released are correct. Follow those steps, and if you are confident that they work as expected, change the tag to “verification-done”. If you find that the fix is not correct, change the tag to “verification-failed”. In case of doubt, you can leave a comment in the bug explaining what you tried and what you found.

You can read more about SRUs in the Stable Release Updates wiki page, and also in the wiki page explaining how to perform verifications. This last page includes a section to find packages and bugs that need verification. If you want to help the Ubuntu community, you can just jump in and start verifying some of the pending bugs. It will be highly appreciated.

If you have questions or find a problem, join us in the Ubuntu Rocket Chat.

Leo Arias

For the third session of the Ubuntu Testing Days, Kevin Gunn joined us to talk about the Unity8 snap.

This is a thriving project with lots of things to do and bugs to uncover, so it's the perfect candidate for newcomers eager to help Ubuntu.

If you are interested and didn't attend our meeting on Friday, click the image below to watch the recording.

thumbnail of the session recording

To install Unity 8 in the Ubuntu 16.04 cloned virtual machine that we prepared in the previous session, enter the following commands in the terminal:

$ sudo add-apt-repository -u ppa:ci-train-ppa-service/stable-phone-overlay
$ sudo apt install unity8-session-snap
$ unity8-snap-install
$ sudo reboot

After reboot, on the login screen click the Ubuntu icon next to your user name and change the selection to Unity 8.

Kevin and his team have a Google doc with the instructions to build and install the snap, and the current status. If you find a bug, you can talk to the developers by joining the #ubuntu-touch IRC channel on freenode.

While preparing the testing environment in this session we had a crash, and we explained how easy it is to send the report to the Ubuntu developers by just clicking the Report problem... button. The reports from all our users are in the error tracker, along with a frequency graph and the links to bug reports where those crashes are being fixed.

We also showed two tricks to make the testing session in a virtual machine faster and more pleasant.

Thanks to Julia, Kyle and Kevin for the nice session.

Join us again next Friday at Ubuntu On-Air.

Leo Arias

All Linux distributions are constantly updating the versions of the packages in their archives. That’s what makes them great, lots of people working in a distributed way to let you easily update your software and get the latest features or critical bug fixes.

And you should constantly update your operating system. Otherwise you'll become an easy target for criminals exploiting known vulnerabilities.

The problem, at least for me, is that I have many, many Ubuntu machines in the house and my bandwidth is really bad. So keeping all my real machines, virtual machines and various devices up-to-date every day has become a slow problem.

The solution is to cache the downloaded deb packages, so only one machine has to make the downloads from the internet, and the packages are kept on my local network, making it much faster to get them on the other machines.

So let me introduce you to Apt-Cacher NG.

Setting it up is simple. First, choose a machine to run the cacher and store the packages. Ideally, this machine should be running all the time, and should have a good amount of storage space. I’m using my desktop as the cacher; but as soon as I update my router to one that runs Ubuntu, I will make that one the cacher.

On that machine, install apt-cacher-ng:

  sudo apt install apt-cacher-ng

And that’s it. The cacher is installed and configured. Now we need the name of this machine to use it on the other ones:

  $ hostname
  calchas

In this example, calchas is the name of the machine I’m using as the cacher. Take note of the name of your machine, and now, on all the other machines:

  $ sudo gedit /etc/apt/apt.conf.d/02proxy

That will create a new empty configuration file for apt, and open it to be edited with gedit, the default graphical editor in Ubuntu. In the editor, write this:

  Acquire::http::proxy "http://calchas.lan:3142";

replacing calchas with the name of your cacher machine, collected above. The .lan part is really only needed when you are setting it up in a virtual machine and the host is the same as the cacher, but it doesn’t hurt to add it on real machines. That number, 3142, is the network port where the caching service is running; leave it unchanged.

After that, the first time you update a package in your network it will be slow just as before. But all the other machines updating the same package will be very fast. I have to thank apt-cacher-ng for saving me many hours during my updates of the past years.

Leo Arias

We have survived two testing days, and now we can safely say that it will become a Friday tradition :)

Last Friday our nice guest was Aaron Ogle, from Rocket Chat. He gave us a tour of the Rocket Chat UI and we discussed how they packaged it as a snap.

If you missed it, click the image below to watch it.

thumbnail of the session recording

Building on what we saw in the first session, we tested the snap using a virtual machine again. But this time we cloned it, to keep a pristine machine and make the following testing sessions faster. If you want to help the Ubuntu and Rocket Chat communities, this is an easy way to prepare your environment.

Once you have your clone ready, install the most recent and bleeding edge version of Rocket Chat with:

$ sudo snap install rocketchat-server --edge

Then you can follow this gist with the initial steps to start testing the Rocket Chat snap.

You can also test a real installation of Rocket Chat by joining our community channel, where we are available all day, every day. If you have a question, just ask. I am elopio in there.

During the session we took a look at the GitHub website, where many free software communities do their development in the open. They have a great guide to start contributing to open source projects. Go on and spread your love for free software in the form of bug reports :)

The gratitude this week goes to our newly acquired staff members Julia and Kyle, and of course to Aaron for letting us have a fun Friday evening. Make sure to take a look at the cool things he and his teammates are doing; and if you have some free time and want to join an exciting, open and nice community, give them a hand. Also try the Jitsi integration for video conferencing; it's mind-blowing that there are no closed components anywhere.

See you next Friday at Ubuntu On-Air.

Leo Arias

The basic idea of using virtual machines for testing is to always start from a clean state, as close as possible to what a user would get after freshly installing the system on their real machines.

So, every time we need to test something, we could create a new virtual machine and install Ubuntu from scratch there, but that would take a considerable amount of time.

An alternative is to install a machine once, and keep it clean. Never test on that one; instead, clone it every time you need to test something. Use the clone to experiment, and delete it when you are done. Cloning the machine is a lot faster than reinstalling each time. The base machine, the one you clone every time, is called a pristine machine. If you are careful with it, the only time you will need to reinstall is when you are testing the installer.

Cloning the pristine machine using Virtual Machine Manager is really simple; it’s done in like two clicks.

First, of course, you will need to install your pristine machine. I always put pristine in the name, to make it less likely that I will start using it for testing. Every time I forget and play with the pristine machine, I’m polluting its state, and have to recreate it.

With the pristine machine ready, I then recommend opening it and running in a terminal:

  sudo apt update && sudo apt upgrade

That will update all the packages installed, so your clone also starts in the most up-to-date state. If you do this step every time, you will never have to wait for long while your machine downloads and installs lots of packages; just wait a little every day.

image

Now shut down the pristine machine and don’t touch it again, only to update it.

To clone it, right click on it and then click the Clone... button.

image

A dialog will be opened, where you can enter the new virtual machine details.

image

I always write the date as part of the name, to remember when I created it. But the most important thing to do here is to make sure that the Storage option is set to Clone this disk. Otherwise, you will be sharing the disk with the pristine machine, which will pollute its state, the exact thing we are trying to avoid.

This will take some time while the whole disk is copied. But not long, and once it’s done you can freely play and pollute this clone, without affecting the pristine machine.

Leo Arias

Today we had our first testing day. We will keep doing this every Friday, and at the end of the session I will post a summary with links to follow up and learn more about the subject in case somebody is interested.

You can watch the video by clicking the image below.

thumbnail of the session recording

For this session we had Kyle Fazzari talking about his work as the maintainer of the Nextcloud snap.

You can find more info about the project on the Nextcloud main page and more info about the snap on Kyle's blog.

We tested the snap using a virtual machine. To set one up you can use the following guides:

After the virtual machine is ready, Nextcloud can be installed in there by just running in the terminal:

$ sudo snap install nextcloud

If you want to test an unreleased and unstable version to help the Nextcloud developers, you can add the flag --edge or --beta to that command.

I wrote some steps in this gist to guide you in getting started with the testing.

And finally, if you want to join our community, we are usually in Rocket Chat. You can join the #community, #qa or #snapcraft channels, or any other that catches your attention.

Special thanks to Julia and CoderEurope for joining us during the session. And of course to Kyle and the Nextcloud community for this amazing piece of free software that encourages everybody to take control of their data, in a really simple way.

You are all invited to join us the next time, which will be Friday, December 2nd, again at Ubuntu On-Air. The time and theme will be announced soon. Or at least before it starts.
