Canonical Voices

Posts tagged with 'testing'

Leo Arias

Travis CI offers a great continuous integration service for the projects hosted on GitHub. With it, you can run tests, deliver artifacts and deploy applications every time you push a commit, on pull requests, after they are merged, or with some other frequency.

Last week Travis CI updated the Ubuntu 14.04 (Trusty) machines that run your tests and deployment steps. This update came with a nice surprise for everybody working to deliver software to Linux users, because it is now possible to install snaps in Travis!

I've been excited all week telling people about all the doors that this opens; but if you have been following my adventures in the Ubuntu world, by now you can probably guess that I'm mostly thinking about all the potential this has for automated testing: the automation of user acceptance tests.

User acceptance tests are executed from the point of view of the user, with your software presented to them as a black box. The tests can only interact with the software through the entry points you define for your users. If it's a CLI application, the tests will call commands and subcommands and check the outputs. If it's a website or a desktop application, the tests will click things, enter text and check the changes in the GUI. If it's a service with an HTTP API, the tests will make requests and check the responses. In these tests, the closer you can get to simulating the environment and behaviour of your real users, the better.

Snaps are great for the automation of user acceptance tests because they are immutable and they bundle all their dependencies. With this we can make sure that your snap will work the same on any of the operating systems and architectures that support snaps. The snapd service takes care of hiding the differences and presenting a consistent execution environment to the snap. So, getting a green execution of these tests on the Trusty machines of Travis is a pretty good indication that the snap will work on all the active releases of Ubuntu, Debian, Fedora, and even on a Raspberry Pi.

Let me show you an example of what I'm talking about, obviously using my favourite snap called IPFS. There is more information about IPFS in my previous post.

Below is the packaging metadata for the IPFS snap, a single snapcraft.yaml file:

name: ipfs
version: master
summary: global, versioned, peer-to-peer filesystem
description: |
  IPFS combines good ideas from Git, BitTorrent, Kademlia, SFS, and the Web.
  It is like a single bittorrent swarm, exchanging git objects. IPFS provides
  an interface as simple as the HTTP web, but with permanence built in. You
  can also mount the world at /ipfs.
confinement: strict

apps:
  ipfs:
    command: ipfs
    plugs: [home, network, network-bind]

parts:
  ipfs:
    source: https://github.com/ipfs/go-ipfs.git
    plugin: nil
    build-packages: [make, wget]
    prepare: |
      mkdir -p ../go/src/github.com/ipfs/go-ipfs
      cp -R . ../go/src/github.com/ipfs/go-ipfs
    build: |
      env GOPATH=$(pwd)/../go make -C ../go/src/github.com/ipfs/go-ipfs install
    install: |
      mkdir $SNAPCRAFT_PART_INSTALL/bin
      mv ../go/bin/ipfs $SNAPCRAFT_PART_INSTALL/bin/
    after: [go]
  go:
    source-tag: go1.7.5

It's not the simplest snap, because the project uses its own build tooling to fetch the Go dependencies and compile; but it's also not too complex. If you are new to snaps and want to understand every detail of this file, or you want to package your own project, the tutorial to create your first snap is a good place to start.

What's important here is that if you run snapcraft using the snapcraft.yaml file above, you will get the IPFS snap. If you install that snap, then you can test it from the point of view of the user. And if the tests work well, you can push it to the edge channel of the Ubuntu store to start crowd testing with your community.
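
Spelled out, that local build-install-test-push loop is just a handful of commands. This is only a rough sketch, assuming you run it from the root of the project and that your snapcraft version supports the --release flag on push:

$ # Build the snap from the snapcraft.yaml in the current directory.
$ snapcraft
$ # Install the unsigned local snap; --dangerous skips the store verification.
$ sudo snap install *.snap --dangerous
$ # Exercise it from the point of view of the user.
$ /snap/bin/ipfs version
$ # If it behaves, upload the revision and release it to the edge channel
$ # (this assumes you have already run: snapcraft login).
$ snapcraft push *.snap --release=edge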

We can automate all of this with Travis. The snapcraft.yaml for the project must already be in the GitHub repository, and we will add a .travis.yml file next to it. Travis has good docs to help you prepare your account. First, let's see what's required to build the snap:

sudo: required
services: [docker]

script:
  - docker run -v $(pwd):$(pwd) -w $(pwd) snapcore/snapcraft sh -c "apt update && snapcraft"

For now, we build the snap in a docker container to keep things simple. We have work in progress to be able to install snapcraft in Trusty as a snap, so soon this will be even nicer, running everything directly on the Travis machine.

This previous step will leave the packaged .snap file in the current directory, so we can install it by adding a few more steps to the Travis script:

[...]

script:
  - docker [...]
  - sudo apt install --yes snapd
  - sudo snap install *.snap --dangerous

And once the snap is installed, we can run it and check that it works as expected. Those checks are our automated user acceptance tests. IPFS has a CLI client, so we can just run commands and verify the outputs with grep. Or we can get fancier using shunit2 or bats. But the basic idea would be to add something like this to the Travis script:

[...]

script:
  [...]
  - /snap/bin/ipfs init
  - /snap/bin/ipfs cat /ipfs/QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T/readme | grep -z "^Hello and Welcome to IPFS!.*$"
  - [...]
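
Putting those pieces together, the whole .travis.yml is roughly the snippets above assembled into one file, something like this:

sudo: required
services: [docker]

script:
  # Build the snap inside a docker container that ships snapcraft.
  - docker run -v $(pwd):$(pwd) -w $(pwd) snapcore/snapcraft sh -c "apt update && snapcraft"
  # Install snapd and the freshly built, unsigned snap.
  - sudo apt install --yes snapd
  - sudo snap install *.snap --dangerous
  # Automated user acceptance checks, from the point of view of the user.
  - /snap/bin/ipfs init
  - /snap/bin/ipfs cat /ipfs/QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T/readme | grep -z "^Hello and Welcome to IPFS!.*$"
  # ... any further checks go here ...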

If one of those checks fails, Travis will mark the execution as failed and stop our release process until we fix it. If, instead, all of the checks pass, then this version is good enough to put into the store, where people can take it and run exploratory tests to try to find problems caused by weird scenarios that we missed in the automation. To help with that we have the snapcraft enable-ci travis command, and a tutorial to guide you step by step through setting up continuous delivery from Travis CI.

For the IPFS snap we have had a manual smoke suite for a long time, which our amazing community of testers has been executing over and over again, every time we want to publish a new release. I've turned it into a simple bash script that from now on will be executed frequently by Travis, and will tell us if there's something wrong before anybody tries it manually. With this our community of testers will have more time to run new and interesting scenarios, trying to break the application in clever ways, instead of running the same repetitive steps many times.

Thanks to Travis and snapcraft we no longer have to worry about a big part of our release process. Continuous integration and delivery can be fully automated, and we will have to take a look only when something breaks.

As for IPFS, it will keep being my guinea pig to guide new features for snapcraft and showcase them when ready. It has many more commands that have to be added to the automated test suite, and it also has a web UI and an HTTP API. Lots of things to play with! If you would like to help, and on the way learn about snaps, automation and the decentralized web, please let me know. You can take a look at my IPFS snap repo for more details about testing snaps in Travis, and other tricks for the build and deployment.

screenshot of the IPFS smoke test running in travis

Read more
Leo Arias

Here at Ubuntu we are working hard on the future of free software distribution. We want developers to release their software to any Linux distro in a way that's safe, simple and flexible. You can read more about this at snapcraft.io.

This work is extremely fun because we have to work constantly with a wild variety of free software projects to make sure that the tools we write are usable and that the workflow we are proposing makes sense to developers and gives them a lot of value in return. Today I want to talk about one of those projects: IPFS.

IPFS is the permanent and decentralized web. How cool is that? You get a peer-to-peer distributed file system where you store and retrieve files. They have a nice demo on their website, and you can give it a try on Ubuntu Trusty, Xenial or later by running:

$ sudo snap install ipfs

screenshot of the IPFS peers

So, here's one of the problems we are trying to solve. We have millions of users on the Trusty version of Ubuntu, released during 2014. We also have millions of users on the Xenial version, released during 2016. Those two versions are stable now, and following the Ubuntu policies, they will get only security updates for 5 years. That means that it's very hard, almost impossible, for a young project like IPFS to get into the Ubuntu archives for those releases. There would be no simple way for all those users to enjoy IPFS; they would have to use a Personal Package Archive or install the software from a tarball. Both methods are complex and carry security risks, and both require users to put more trust in the developers than they should ever place in anybody.

We are closing the Zesty release cycle, which will go out in April, so it's too late there too. IPFS could make a deb, put it into Debian, wait for it to sync to Ubuntu, and then it's likely that it would be ready for the October release. Aside from the fact that we would have to wait until October, there are a few other problems. First, making a deb is not simple. It's not too hard either, but it requires quite some time to learn to do it right. Second, I mentioned that IPFS is young; they are on version 0.4.6. So it's very unlikely that they will want to support this early version for as long as Debian and Ubuntu require. And they are not only young, they are also fast. They add new features and bug fixes every day and make new releases almost every week, so they need a feedback loop that's just as fast. A six-month release cycle is way too slow. That works nicely for some kinds of free software projects, but not for one like IPFS.

They have been kind enough to let me play with their project and use it as a test subject to verify our end-to-end workflow. My passion is testing, so I have been focusing on continuous delivery to get happy early adopters and constant feedback about the most recent changes in the project.

I started by making a snapcraft.yaml file that contains all the metadata required for the snap package. The file is pretty simple and to make the first version it took me just a couple of minutes, true story. Since then I've been slowly improving and updating it with small changes. If you are interested in doing the same for your project, you can read the tutorial to create a snap.

I built and tested this snap locally on my machines. It worked nicely, so I pushed it to the edge channel of the Ubuntu Store. There, the snap is not visible in user searches; only people who know about the snap can install it. I told a couple of my friends to give it a try, and they came back telling me how cool IPFS was. Great choice for my first test subject, no doubt.

At this point, following the pace of the project by manually building and pushing new versions to the store was too demanding; they move too fast. So I started working on continuous delivery, translating everything I did manually into scripts and hooking them up to Travis CI. After a few days it got pretty fancy; take a look at the GitHub repo of the IPFS snap if you are curious. Every day, a new version is packaged from the latest state of the master branch of IPFS and pushed to the edge channel, so we have a constant flow of new releases for hardcore early adopters. After they install IPFS from the edge channel once, the package will be automatically updated on their machines every day, so they don't have to do anything else, just use IPFS as they normally would.

Now, with this constant stream of updates, my two friends and I were not enough to validate all the new features. We could never be sure whether the project was stable enough to be pushed to the stable channel and made available to the millions and millions of Ubuntu users out there.

Luckily, the Ubuntu community is huge, and they are very nice people. It was time to use the wisdom of the crowds. I invited the bravest of them to keep the snap installed from edge, and I defined a simple pipeline that leads to the stable release using the four available channels in the Ubuntu store:

  • When a revision is tagged in the IPFS master repo, it is automatically pushed to the edge channel from Travis, just as with any other revision.
  • Travis notifies me about this revision.
  • I install this tagged revision from edge, and run a super quick test to make sure that the IPFS server starts.
  • If it starts, I push the snap to the beta channel (the store commands behind these channel promotions are sketched after this list).
  • With a couple of my friends, we run a suite of smoke tests.
  • If everything goes well, I push the snap to the candidate channel.
  • I notify the community of Ubuntu testers about a new version in the candidate channel. This is where the magic of crowd testing happens.
  • The Ubuntu testers run the smoke tests in all their machines, which gives us the confidence we need because we are confirming that the new version works on different platforms, distros, distro releases, countries, network topologies, you name it.
  • This candidate release is left for some time in this channel, to let the community run thorough exploratory tests, trying to find weird usage combinations that could break the software.
  • If the tag was for a final upstream release, the community also runs update tests to make sure that the users with the stable snap installed will get this new version without issues.
  • After all the problems found by the community have been resolved or at least acknowledged and triaged as not blockers, I move the snap from candidate to the stable channel.
  • All the users following the stable channel will automatically get a very well tested version, thanks to the community who contributed with the testing and accepted a higher level of risk.
  • And we start again, the never-ending cycle of making free software :)
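
For the curious, the channel promotions in that pipeline boil down to a couple of store commands. This is only a sketch, assuming a snapcraft version that provides the push and release commands; the revision number used here is made up, the store reports the real one when the snap is pushed:

$ # Log in to the store once; the credentials are cached for later commands.
$ snapcraft login
$ # Promote an already uploaded revision (42 is a placeholder) through the
$ # channels as it passes each round of testing.
$ snapcraft release ipfs 42 beta
$ snapcraft release ipfs 42 candidate
$ snapcraft release ipfs 42 stable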

Now, let's go back to the discussion about trust. Debian and Ubuntu, and most of the other distros, rely on maintainers and distro developers to package and review every change in the software that they put in their archives. That is a lot of work, and it slows down the feedback loop a lot, as we have seen. Here we automated most of the tasks of a distro maintainer, and the new revisions can be delivered directly to the users without any reviews. So users are trusting their upstream developers directly, without intermediaries; but this is very different from the previously existing, unsafe methods. Snap code is installed read-only and very well constrained, with access only to its own safe space. Any other access needs to be declared by the snap, and the user is always in control of which access is permitted to the application.

This way upstream developers can go faster but without exposing their users to unnecessary risks. And they just need a simple snapcraft.yaml file and to define their own continuous delivery pipeline, on their own timeline.

By removing the distro as the intermediary between the developers and their users, we are also opening a new world full of possibilities for the Ubuntu community. Now they can collaborate constantly and directly with upstream developers, closing this quick feedback loop. In the future we will tell our children about the good old days when we had to report a bug in Ubuntu, which would be copied to Debian, then sent upstream to the developers, and after 6 months the fix would arrive. It was fun, and it led us to where we are today, but I will not miss it at all.

Finally, what's next for IPFS? After this experiment we got more than 200 unique testers and almost 300 test installs. I now have great confidence in this workflow: new revisions were delivered on time, existing Ubuntu testers became new IPFS contributors, and I can now safely recommend that IPFS users install the stable snap. But there's still plenty of work ahead. There are still manual steps in the pipeline that can be scripted, the smoke tests can be automated to leave more free time for exploratory testing, we can also release to the armhf and arm64 architectures to get IPFS into the IoT world, and, well, the developers are not stopping; they keep releasing new interesting features. As I said, plenty of opportunities for us as distro contributors.

screenshot of the IPFS snap stats

I'd like to thank everybody who tested the IPFS snap, especially the following people for their help and feedback:

  • freekvh
  • urcminister
  • Carla Sella
  • casept
  • Colin Law
  • ventrical
  • cariboo
  • howefield

<3

If you want to release your project to the Ubuntu store, take a look at the snapcraft docs, the Ubuntu tutorials, and come talk to us in Rocket Chat.

Read more
Leo Arias

Call for testing: MySQL

I promised that more interesting things were going to be available soon for testing in Ubuntu. There's plenty coming, but today here is one of the greatest:

$ sudo snap install mysql --channel=8.0/beta

screenshot of mysql snap running

Lars Tangvald and other people at MySQL have been working on this snap for some time, and now they are ready to give it to the community for crowd testing. If you have some minutes, please give them a hand.

We have a testing guide to help you get started.

Remember that this should run in trusty, xenial, yakkety, zesty and in all flavours of Ubuntu. It would be great to get a diverse pool of platforms and test it everywhere.

Here we are introducing a new concept: tracks. Notice that we are using --channel=8.0/beta instead of only --beta as we used to do before. That's because MySQL has two different major versions currently active. To try the other one:

$ sudo snap install mysql --channel=5.7/beta
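
If you already have one of them installed and want to try the other track, you don't need to remove and reinstall; switching is just a refresh with the channel flag. A small sketch, assuming a recent snapd, and keeping in mind that jumping between major database versions is not a supported data migration path:

$ # Move the installed mysql snap from the 8.0 track to the 5.7 track.
$ sudo snap refresh mysql --channel=5.7/beta
$ # And back to the 8.0 track.
$ sudo snap refresh mysql --channel=8.0/beta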

Please report back your results. Any kind of feedback will be highly appreciated, and if you have doubts or need a hand to get started, I'm hanging around in Rocket Chat.

Read more
Leo Arias

There is a huge announcement coming: snaps now run in Ubuntu 14.04 Trusty Tahr.

Take a moment to note how big this is. Ubuntu 14.04 is a long-term release that will be supported until 2019. Ubuntu 16.04 is also a long-term release that will be supported until 2021. We have many many many users on both releases, some of whom will stay there until we drop support. Before this snappy new world, all those users were stuck with the versions of all their programs released in 2014 or 2016, getting only updates for security and critical issues. Just try to remember how your favorite program looked 5 years ago; maybe it didn't even exist. We were used to choosing between stability and cool new features.

Well, a new world is possible. With snaps you can have a stable base system with frequent updates for every program, without the risk of breaking your machine. And now if you are a Trusty user, you can just start taking advantage of all this. If you are a developer, you have to prepare only one release and it will just work in all the supported Ubuntu releases.

Awesome, right? The Ubuntu devs have been doing a great job. snapd has already landed in the Trusty archive, and we have been running many manual and automated tests on it. So we would now like to invite the community to test it, explore weird paths, and try to break it. We will appreciate it very much, and all of those Trusty users out there will love it when they receive loads of new high-quality free software on their oldie machines.

So, how to get started?

If you are already running Trusty, you will just have to install snapd:

$ sudo apt update && sudo apt install snapd

Reboot your system after that in case you had a kernel update pending, and to get the paths for the new snap binaries set up.

If you are running a different Ubuntu release, you can install Ubuntu in a virtual machine. Just make sure that you install the Trusty desktop image from http://releases.ubuntu.com/14.04/ubuntu-14.04.5-desktop-amd64.iso.

Once you have Trusty with snapd ready, try a few commands:

$ snap list
$ sudo snap install hello-world
$ hello-world
$ snap find something

screenshot of snaps running in Trusty

Keep searching for snaps until you find one that's interesting. Install it, try it, and let us know how it goes.

If you find something wrong, please report a bug with the trusty tag. If you are new to the Ubuntu community or get lost on the way, come and join us in Rocket Chat.

And after a good session of testing, sit down, relax, and get ohmygiraffe. With love from popey:

$ sudo snap install ohmygiraffe
$ ohmygiraffe

screenshot of ohmygiraffe

Read more
Leo Arias

After a little break, on the first Friday of February we resumed the Ubuntu Testing Days.

This session was pretty interesting because, after laying some of the groundwork last year, we are now ready to dig deep into the most important projects that will define the future of Ubuntu.

We talked about Ubuntu Core, a snap package that is the base of the operating system. Because it is a snap, it gets the same benefits as all the other snaps: automatic updates, rollbacks in case of error during installation, read-only mount of the code, isolation from other snaps, multiple channels on the store for different levels of stability, etc.

The features, philosophy and future of Core were presented by Michael Vogt and Zygmunt Krynicki, and then Federico Giménez did a great demo of how to create an image and test it in QEMU.

Click the image below to watch the full session.

video recording of the Ubuntu Core session

There are plenty of resources in the Ubuntu websites related to Ubuntu Core.

To get started, we recommend following this guide to run the operating system in a virtual machine.

After that, if you are feeling brave and want to help Michael, Zygmunt and Federico, you can download the candidate image instead, from http://cdimage.ubuntu.com/ubuntu-core/16/candidate/pending/ubuntu-core-16-amd64.img.xz. This is the image that's currently being tested, so if you find something wrong or weird, please report a bug in Launchpad.
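
If you go that route, booting the candidate image in QEMU looks roughly like the sketch below; the memory size, port forward and device options are assumptions, not the exact invocation from the session:

$ # Decompress the downloaded candidate image.
$ unxz ubuntu-core-16-amd64.img.xz
$ # Boot it with KVM acceleration, forwarding local port 8022 to the guest's
$ # SSH port so you can later log in with: ssh -p 8022 <user>@localhost
$ qemu-system-x86_64 -enable-kvm -smp 2 -m 1500 \
    -netdev user,id=net0,hostfwd=tcp::8022-:22 \
    -device virtio-net-pci,netdev=net0 \
    -drive file=ubuntu-core-16-amd64.img,format=raw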

Finally, if you want to learn more about the snaps that compose the image and take a peek at the things that we'll cover in the following testing days, you can follow the tutorial to create your own Core image.

In this session we were also joined by Robert Wolff, who works on 96boards at Linaro. He has an awesome show every Thursday called Open Hours. At 96boards they are building open Linux boards for prototyping and embedded computing. Anybody can jump into the Open Hours to learn more about this cool work.

The great news that Robert brought is that both Open Hours and Ubuntu Testing Days will be focused on Ubuntu Core this month. He will be our guest again next Friday, February 10th, when he will be talking about the DragonBoard 410c. Also my good friend Oliver Grawert will be with us, and he will talk about the work he has been doing to enable Ubuntu on this board.

Great topics ahead, and a whole new world of possibilities now that we are mixing free software with open hardware and affordable prototyping tools. Remember, every Friday at http://ubuntuonair.com/, don't miss it.

Read more
Leo Arias

Happy new year Ubunteros and Ubunteras!

If you have been following our testing days, you will know by now that our intention is to get more people contributing to Ubuntu and free software projects, and to help them get started through testing and related tasks. So, we will be making frequent calls for testing where you can contribute and learn. Educational AND fun ^_^

To start the year, I would like to invite you to test the IPFS candidate snap. IPFS is a really interesting free project for distributed storage. You can read more about it and watch a demo on the IPFS website.

We have pushed a nice snap with their latest stable version to the candidate channel in the store. But before we publish it to the stable channel we would like to get more people testing it.

You can get a clean and safe environment for testing by following some of the guides you'll find in the summaries of the past testing days.

Or, if you want to use your current system, you can just do:

$ sudo snap install ipfs --candidate

I have written a gist with a simple guide to get started testing it.

If you finish that successfully and still have more time, or are curious about IPFS, please continue with an exploratory testing session. The idea here is to execute random commands, try unusual inputs and just play around.

You can get ideas from the IPFS docs.

When you are done, please send me an email with your results and any comments. And if you get stuck or have any kind of question, please don't hesitate to ask. Remember that we welcome everybody.

Read more
Leo Arias

Today we had the last Ubuntu Testing Day of the year.

We invited Sergio Schvezov and Joe Talbott to join Kyle and me. Together we have been working on Snapcraft the whole year.

Sergio did a great introduction to snapcraft, and showed some of the new features that will land next week in Ubuntu. And because it was the last day of work for everybody (except Kyle), we threw some beers into the hangout and made it our team's end-of-year party.

You can watch the full recording by clicking the image below.

video recording of the snapcraft end-of-year session

Snapcraft is one of the few projects that have an exception to land new features into released versions of Ubuntu. So every week we are landing new things in Xenial and Yakkety. This means that we need to constantly test that we are not breaking anything for all the people using stable Ubuntu releases; and it means that we would love to have many more hands helping us with those tests.

If you would like to help, all you have to do is set up a virtual machine and enable the proposed pocket in there.
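
If you prefer the terminal over the graphical route, enabling the proposed pocket in the virtual machine can look roughly like this; a sketch, assuming a Xenial guest and the standard Ubuntu archive:

$ # Add the xenial-proposed pocket to the apt sources.
$ echo "deb http://archive.ubuntu.com/ubuntu/ xenial-proposed main restricted universe multiverse" \
    | sudo tee /etc/apt/sources.list.d/xenial-proposed.list
$ sudo apt update
$ # Pull the package under test explicitly from the proposed pocket.
$ sudo apt install -t xenial-proposed snapcraft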

This is the active bug for the Stable Release Update of snapcraft 2.24: bug #1650632

Before I shut down my computer and start my holidays, I would like to thank all the Ubuntu community for one more year, it has been quite a ride. And I would like to thank Sergio, Kyle and Joe in particular. They are the best team a QA Engineer could ask for <3.

See you next year for more testing days.

Read more
Leo Arias

Ubuntu has a six-month cycle for releases. This means that we work for six months updating software, adding new features and making sure that it all works consistently together when it’s integrated into the operating system, and then we release it. Once we make a release, it is considered stable, and then we almost exclusively add critical bug fixes and patches for security vulnerabilities to that release. The exceptions are few, and they are for software that’s important for Ubuntu and that we want to keep up to date with the latest features even in stable releases.

These bug fixes, security patches and exceptional new features require a lot of testing, because right after they are published they will reach all the Ubuntu users on the stable release. And we want the release to remain stable, and never ever introduce a regression that will make those users unhappy.

We call all of them Stable Release Updates, and we test them in the proposed pocket of the Ubuntu archive. This is obviously not enabled by default, so the brave souls who want to help us test the changes in proposed need to enable it.

Before we go on, I would recommend testing SRUs in a virtual machine. Once you enable proposed following this guide, you will get constant and untested updates from many packages, and these updates will break parts of your system every now and then. It’s not likely to be critical, but it can bother you if it happens on the machine you need for your work or other tasks. And if somebody makes a mistake, you might need to reinstall the system.

You will also have to find a package that needs testing. Snapcraft is one of the few exceptions allowed to land in a stable release every week, so let’s use it as an example. Let’s say you want to help us test the upcoming release of snapcraft in Ubuntu 16.04.

With your machine ready and before enabling proposed, install the already released version of the package you want to test. This way you’ll later test an update to the newer version, just like what a normal user would get once the update is approved and lands in the archive. So in a terminal, write:

  $ sudo apt update
  $ sudo apt install snapcraft

Or replace snapcraft with whatever package you are testing. If you are doing this during the weekend right after I write this, the released version of snapcraft will be 2.23. You can check it with:

  $ dpkg -s snapcraft | grep Version
  Version: 2.23

Now, to enable proposed, open the Ubuntu Software application, and select Software & Updates from the menu at the top of the window.

screenshot of the Software & Updates option in Ubuntu Software

From the Software & Updates window, select the Developer Options tab. And check the item that says Pre-released updates.

screenshot of the Developer Options tab with Pre-released updates checked

This will prompt for your password. Enter it, and then click the Close button. You will be asked if you want to reload your sources, so yes, click the Reload button.

screenshot of the prompt to reload the software sources

Finally, try to upgrade. If there is a newer version available in the proposed pocket for the package you are testing, you will now get it.

  $ sudo apt install snapcraft
  $ dpkg -s snapcraft | grep Version
  Version: 2.24

Every time there is a proposed update, the package will have corresponding SRU bugs with the tag “verification-needed”. In the case of snapcraft this weekend, this is the bug for the 2.24 proposed update: https://bugs.launchpad.net/ubuntu/+source/snapcraft/+bug/1650632

The SRU bugs will have instructions about how to test that the fix or the new features released are correct. Follow those steps, and if you are confident that they work as expected, change the tag to “verification-done”. If you find that the fix is not correct, change the tag to “verification-failed”. In case of doubt, you can leave a comment in the bug explaining what you tried and what you found.

You can read more about SRUs in the Stable Release Updates wiki page, and also in the wiki page explaining how to perform verifications. This last page includes a section to find packages and bugs that need verification. If you want to help the Ubuntu community, you can just jump in and start verifying some of the pending bugs. It will be highly appreciated.

If you have questions or find a problem, join us in the Ubuntu Rocket Chat.

Read more
Leo Arias

For the third session of the Ubuntu Testing Days, Kevin Gunn joined us to talk about the Unity8 snap.

This is a thriving project with lots of things to do and bugs to uncover, so it's the perfect candidate for newcomers eager to help Ubuntu.

If you are interested and didn't attend our meeting on Friday, click the image below to watch the recording.

video recording of the Unity8 session

To install Unity 8 in the Ubuntu 16.04 cloned virtual machine that we prepared in the past session, enter the following commands in the terminal:

$ sudo add-apt-repository -u ppa:ci-train-ppa-service/stable-phone-overlay
$ sudo apt install unity8-session-snap
$ unity8-snap-install
$ sudo reboot

After reboot, on the login screen click the Ubuntu icon next to your user name and change the selection to Unity 8.

Kevin and his team have a Google doc with the instructions to build and install the snap, and its current status. If you find a bug, you can talk to the developers by joining the #ubuntu-touch IRC channel on freenode.

While preparing the testing environment in this session we hit a crash, and we showed how easy it is to send the report to the Ubuntu developers by just clicking the Report problem... button. The reports from all our users end up in the error tracker, along with a frequency graph and links to the bug reports where those crashes are being fixed.

We also showed a couple of tricks to make testing sessions in a virtual machine faster and more pleasant.

Thanks to Julia, Kyle and Kevin for the nice session.

Join us again next Friday at Ubuntu On-Air.

Read more
Leo Arias

All Linux distributions are constantly updating the versions of the packages in their archives. That’s what makes them great, lots of people working in a distributed way to let you easily update your software and get the latest features or critical bug fixes.

And you should constantly update your operating system. Otherwise you’ll become an easy target for criminals exploiting known vulnerabilities.

The problem, at least for me, is that I have many many Ubuntu machines in the house and my bandwidth is really bad. So keeping all my real machines, virtual machines and various devices up to date every day has become painfully slow.

The solution is to cache the downloaded deb packages, so only one machine has to download them from the internet, and they are kept in my local network to make it much faster to get the packages on the other machines.

So let me introduce you to Apt-Cacher NG.

Setting it up is simple. First, choose a machine to run the cacher and store the packages. Ideally, this machine should be running all the time, and should have a good amount of storage space. I’m using my desktop as the cacher; but as soon as I update my router to one that runs Ubuntu, I will make that one the cacher.

On that machine, install apt-cacher-ng:

  sudo apt install apt-cacher-ng

And that’s it. The cacher is installed and configured. Now we need the name of this machine to use it on the other ones:

  $ hostname
  calchas

In this example, calchas is the name of the machine I’m using as the cacher. Take note of the name of your machine, and now, in all the other machines:

  $ sudo gedit /etc/apt/apt.conf.d/02proxy

That will create a new empty configuration file for apt, and open it to be edited with gedit, the default graphical editor in Ubuntu. In the editor, write this:

  Acquire::http::proxy "http://calchas.lan:3142";

replacing calchas with the name of your cacher machine, collected above. The .lan part is really only needed when you are setting it up in a virtual machine and the host is the same as the cacher, but it doesn’t hurt to add it on real machines. That number, 3142, is the network port where the caching service is running; leave it unchanged.
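
To check that the clients really go through the cache, run an update on one of them and then look at the cacher’s statistics page; a small sketch, assuming the default apt-cacher-ng configuration, which serves a report on the same port:

  $ # On any client machine, trigger some downloads through the proxy.
  $ sudo apt update
  $ # Then open the report page served by apt-cacher-ng on the cacher.
  $ xdg-open http://calchas.lan:3142/acng-report.html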

After that, the first time you update a package in your network it will be slow just as before. But all the other machines updating the same package will be very fast. I have to thank apt-cacher-ng for saving me many hours during my updates of the past years.

Read more
Leo Arias

We have survived two testing days, and now we can safely say that it will become a Friday tradition :)

Last Friday our nice guest was Aaron Ogle, from Rocket Chat. He gave us a tour of the Rocket Chat UI, and we discussed how they packaged it as a snap.

If you missed it, click the image below to watch it.

video recording of the Rocket Chat session

Building on what we saw in the first session, we tested the snap using a virtual machine again. But this time we cloned it, to keep a pristine machine and make the following testing sessions faster. If you want to help the Ubuntu and Rocket Chat communities, this is an easy way to prepare your environment.

Once you have your clone ready, install the most recent, bleeding edge version of Rocket Chat with:

$ sudo snap install rocketchat-server --edge

Then you can follow this gist with the initial steps to start testing the Rocket Chat snap.

You can also test a real installation of Rocket Chat by joining our community channel, where we are available all day, every day. If you have a question, just ask. I am elopio in there.

During the session we took a look at the GitHub website, where many free software communities do their development in the open. They have a great guide to start contributing to open source projects. Go on and spread your love for free software in the form of bug reports :)

The gratitude this week goes to our newly acquired staff members Julia and Kyle, and of course to Aaron for letting us have a fun Friday evening. Make sure to take a look at the cool things he and his teammates are doing; and if you have some free time and want to join an exciting, open and nice community, give them a hand. Also try the Jitsi integration for video conferencing; it's mind-blowing that there are no closed components anywhere.

See you next Friday at Ubuntu On-Air.

Read more
Leo Arias

The basic idea of using virtual machines for testing is to always start from a clean state, as close as possible to what a user would get after freshly installing the system on their real machines.

So, every time we need to test something, we could create a new virtual machine and install Ubuntu from scratch there, but that would take a considerable amount of time.

An alternative is to install a machine once, and keep it clean. Never test in that one; instead, clone it every time you need to test something. Use the clone to experiment, and delete it when you are done. Cloning the machine is a lot faster than reinstalling each time. The base machine, the one you clone every time, is called a pristine machine. If you are careful with it, the only time you will need to reinstall is when you are testing the installer.

Cloning the pristine machine using Virtual Machine Manager is really simple; it takes just a couple of clicks.

First, of course, you will need to install your pristine machine. I always put pristine in the name, to make it less likely that I will start using it for testing. Every time I forget and play with the pristine machine, I’m polluting its state, and have to recreate it.

With the pristine machine ready, I then recommend opening it and running this in a terminal:

  sudo apt update && sudo apt upgrade

That will update all the installed packages, so your clone also starts in the most up-to-date state. If you do this step every time, you will never have to wait long while your machine downloads and installs lots of packages; you just wait a little every day.

screenshot of the pristine virtual machine being updated

Now shut down the pristine machine and don’t touch it again, only to update it.

To clone it, right click on it and then click the Clone... button.

screenshot of the Clone... option in Virtual Machine Manager

A dialog will be opened, where you can enter the new virtual machine details.

screenshot of the clone virtual machine dialog

I always write the date as part of the name, to remember when I created it. But the most important thing to do here is to make sure that the Storage option is set to Clone this disk. Otherwise, you will be sharing the disk with the pristine machine, which will pollute its state, the exact thing we are trying to avoid.

This will take some time while the whole disk is copied. But not long, and once it’s done you can freely play and pollute this clone, without affecting the pristine machine.

Read more
Leo Arias

Today we had our first testing day. We will keep doing this every Friday, and at the end of the session I will post a summary with links to follow up and learn more about the subject in case somebody is interested.

You can watch the video by clicking the image below.

video recording of the Nextcloud session

For this session we had Kyle Fazzari talking about his work as the maintainer of the Nextcloud snap.

You can find more info about the project in the Nextcloud main page and more info about the snap in Kyle's blog.

We tested the snap using a virtual machine. To set one up, you can follow one of the virtual machine setup guides.

After the virtual machine is ready, Nextcloud can be installed in there by just running in the terminal:

$ sudo snap install nextcloud

If you want to test an unreleased and unstable version to help the Nextcloud developers, you can add the flag --edge or --beta to that command.

I wrote some steps in this gist to guide you through getting started with the testing.

And finally, if you want to join our community, we are usually in Rocket Chat. You can join the #community, #qa or #snapcraft channels, or any other that catches your attention.

Special thanks to Julia and CoderEurope for joining us during the session. And of course to Kyle and the Nextcloud community for this amazing piece of free software that encourages everybody to take control over their data, in a really simple way.

You are all invited to join us the next time, which will be Friday, December 2nd, again at Ubuntu On-Air. The time and theme will be announced soon. Or at least before it starts.

Read more
pitti

Historically, the “adt-run” command line has allowed multiple tests; as a consequence, arguments like --binary or --override-control were position dependent, which confused users a lot (#795274, #785068, #795274, LP #1453509). On the other hand I don’t know anyone or any CI system which actually makes use of the “multiple tests on a single command line” feature.

The command line also was a bit confusing in other ways, like the explicit --built-tree vs. --unbuilt-tree and the magic / vs. // suffixes, or option vs. positional arguments to specify tests.

The other long-standing confusion is the pervasive “adt” acronym, which is still from the very early times when “autopkgtest” was called “autodebtest” (this was changed one month after autodebtest’s inception, in 2006!).

Thus in some recent night/weekend hack sessions I’ve worked on a new command line interface and consistent naming. This is now available in autopkgtest 4.0 in Debian unstable and Ubuntu Yakkety. You can download and use the deb package on Debian jessie and Ubuntu ≥ 14.04 LTS as well. (I will provide official backports after the first bug fix release after this got some field testing.)

New “autopkgtest” command

The adt-run program is now superseded by autopkgtest:

  • It accepts exactly one tested source package, and gives a proper error if none or more than one (often unintended) is given. Binaries to be tested, --override-control, etc. can now be specified in any order, making the arguments position independent. So you can now do things like:
    autopkgtest *.dsc *.deb [...]

    Before, *.deb only applied to the following test.

  • The explicit --source, --click-source etc. options are gone; the type of tested source/binary packages, including built vs. unbuilt tree, is detected automatically. Tests are now only specified with positional arguments, without the need (or possibility) to explicitly specify their type. The one exception is --installed-click com.example.myapp, as possible names are the same as for apt source package names.
    # Old:
    adt-run --unbuilt-tree pkgs/foo-2 [...]
    # or equivalently:
    adt-run pkgs/foo-2// [...]

    # New:
    autopkgtest pkgs/foo-2

    # Old:
    adt-run --git-source http://example.com/foo.git [...]
    # New:
    autopkgtest http://example.com/foo.git [...]
    
  • The virtualization server is now separated with a double instead of a triple dash, as the former is standard Unix syntax.
  • It defaults to the current directory if that is a Debian source package. This makes the command line particularly simple for the common case of wanting to run tests in the package you are just changing:
    autopkgtest -- schroot sid

    Assuming the current directory is an unbuilt Debian package, this will build the package, and run the tests in ./debian/tests against the built binaries.

  • The virtualization server must be specified with its “short” name only, e. g. “ssh” instead of “adt-virt-ssh”. They also don’t get installed into $PATH any more, as it’s hardly useful to call them directly.

README.running-tests got updated to the new CLI, as usual you can also read the HTML online.

The old adt-run CLI is still available with unchanged behaviour, so it is safe to upgrade existing CI systems to that version.

Image build tools

All adt-build* tools got renamed to autopkgtest-build*, and got changed to build images prefixed with “autopkgtest” instead of “adt”. For example, adt-build-lxc ubuntu xenial now produces an autopkgtest-xenial container instead of adt-xenial.

In order to not break existing CI systems, the new autopkgtest package contains symlinks to the old adt-build* commands, and when being called through them, also produce images with the old “adt-” prefix.

Environment variables in tests

Finally there is a set of environment variables that are exported by autopkgtest for using in tests and image customization tools, which now got renamed from ADT_* to AUTOPKGTEST_*:

  • AUTOPKGTEST_APT_PROXY
  • AUTOPKGTEST_ARTIFACTS
  • AUTOPKGTEST_AUTOPILOT_MODULE
  • AUTOPKGTEST_NORMAL_USER
  • AUTOPKGTEST_REBOOT_MARK
  • AUTOPKGTEST_TMP

As these are being used in existing tests and tools, autopkgtest also exports/checks those under their old ADT_* name. So tests can be converted gradually over time (this might take several years).
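
For tests that need to keep working with both old and new autopkgtest versions during that transition, a common pattern is to fall back to the old name. A minimal sketch of a debian/tests script, using the temporary directory variable as an example (the actual test steps are placeholders):

  #!/bin/sh
  set -e
  # Prefer the new variable name, and fall back to the old ADT_* one so the
  # test keeps working on older autopkgtest versions.
  tmp="${AUTOPKGTEST_TMP:-$ADT_TMP}"
  # Work in the scratch space provided by autopkgtest.
  cd "$tmp"
  # ... the actual test steps go here ...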

Feedback

As usual, if you find a bug or have a suggestion how to improve the CLI, please file a bug in Debian or in Launchpad. The new CLI is recent enough that we still have some liberty to change it.

Happy testing!

Read more
Nicholas Skaggs

Reflections

The joys of Spring (or Fall for our friends in the Southern Hemisphere) are now upon us. The change of seasons spurs us to implement our own changes, to start anew. It's a time to reflect on the past, appreciate it, and then do a little Spring cleaning.

As I write this post to you, I'm doing my own reflecting. It's been quite a journey we've undertaken within the QA community. It's not always been easy, but I think we are poised for even greater success with Xenial than with the Trusty and Precise LTSes. We have continued ramping up our quality efforts to test new platforms, such as the phone and IoT devices, while also implementing automated testing via things like autopkgtest and autopilot. Nevertheless, the desktop images have continued to release like clockwork. We're testing more things, more often, while still managing to raise our quality bar.

I want to thank all of the volunteers who've helped make each of those releases a reality. Oftentimes quality can be a background job, with thank you's going unsaid, while complaints are easy to find. Truly, it's been wonderful learning and hacking on quality efforts with you. So thank you!

So if this post sounds a bit like a farewell, that's because it is. At least in a way. Moving forward, I'll be transitioning to working on a new challenge. Don't worry, I'm keeping my QA hat on, and staying firmly within the realm of ubuntu! However, the time has come to try my hand at a different side of ubuntu. That's right, it's time to head to the last frontier, juju!

I'll be working on improving the quality story for juju, but I believe juju has real opportunities to enable the testing story within ubuntu too. I'm looking forward to the new challenges, and to sharing best practices. We're all working on ubuntu at its heart, no matter our focus.

Moving forward, I'll still be around in my usual haunts. You'll still be able to poke me on IRC, or send me a mail, and I'm certainly still going to be watching what happens within quality with interest. That said, you are much more likely to find me discussing juju, servers and charms in #juju.

As with anything, please feel free to contact me directly if you have any concerns or questions. I plan to wind down my involvement during the next few weeks. I'll be handing off any lingering project roles, and stepping down gracefully. Ubuntu 'Y' will begin anew, with fresh challenges and fresh opportunities. I know there are folks waiting to tackle them!

Read more
pitti

The last two major autopkgtest releases (3.18 from November, and 3.19 fresh from yesterday) bring some new features that are worth spreading.

New LXD virtualization backend

3.19 debuts the new adt-virt-lxd virtualization backend. In case you missed it, LXD is an API/CLI layer on top of LXC which introduces proper image management, lets you seamlessly use images and containers on remote locations while intelligently caching them locally, automatically configures performant storage backends like zfs or btrfs, and just generally feels really clean and much simpler to use than the “classic” LXC.

Setting it up is not complicated at all. Install the lxd package (possibly from the backports PPA if you are on 14.04 LTS), and add your user to the lxd group. Then you can add the standard LXD image server with

  lxc remote add lco https://images.linuxcontainers.org:8443

and use the image to run e. g. the libpng test from the archive:

  adt-run libpng --- lxd lco:ubuntu/trusty/i386
  adt-run libpng --- lxd lco:debian/sid/amd64

The adt-virt-lxd.1 manpage explains this in more detail, also how to use this to run tests in a container on a remote host (how cool is that!), and how to build local images with the usual autopkgtest customizations/optimizations using adt-build-lxd.

I have btrfs running on my laptop, and LXD/autopkgtest automatically use that, so the performance really rocks. Kudos to Stéphane, Serge, Tycho, and the other LXD authors!

The motivation for writing this was to make it possible to move our armhf testing into the cloud (which for $REASONS requires remote containers), but I now have a feeling that soon this will completely replace the existing adt-virt-lxc virt backend, as it’s much nicer to use.

It is covered by the same regression tests as the LXC runner, and from the perspective of the package tests that you run in it, it should behave very similarly to LXC. The one problem I’m aware of is that autopkgtest-reboot-prepare is broken, but hardly anything is using that yet. This is a bit complicated to fix, but I expect it will be done in the next few weeks.

MaaS setup script

While most tests are not particularly sensitive about which kind of hardware/platform they run on, low-level software like the Linux kernel, GL libraries, X.org drivers, or Mir very much are. There is a plan for extending our automatic tests to real hardware for these packages, and being able to run autopkgtests on real iron is one important piece of that puzzle.

MaaS (Metal as a Service) provides just that — it manages a set of machines and provides an API for installing, talking to, and releasing them. The new maas autopkgtest ssh setup script (for the adt-virt-ssh backend) brings together autopkgtest and real hardware. Once you have a MaaS setup, get your API key from the web UI, then you can run a test like this:

  adt-run libpng --- ssh -s maas -- \
     --acquire "arch=amd64 tags=touchscreen" -r wily \
     http://my.maas.server/MAAS 123DEADBEEF:APIkey

The required arguments are the MaaS URL and the API key. Without any further options you will get any available machine installed with the default release. But usually you want to select a particular one by architecture and/or tags, and install a particular distro release, which you can do with the -r/--release and --acquire options.

Note that this is not wired into Ubuntu’s production CI environment, but it will be.

Selectively using packages from -proposed

Up until a few weeks ago, autopkgtest runs in the CI environment were always seeing/using the entirety of -proposed. This often led to lockups where an application foo and one of its dependencies libbar got a new version in -proposed at the same time, and on test regressions it was not clear at all whose fault it was. This often led to perfectly good packages being stuck in -proposed for a long time, and a lot of manual investigation about root causes.

These days we are using a more fine-grained approach: A test run is now specific for a “trigger”, that is, the new package in -proposed (e. g. a new version of libbar) that caused the test (e. g. for “foo”) to run. autopkgtest sets up apt pinning so that only the binary packages for the trigger come from -proposed, the rest from -release. This provides much better isolation between the mush of often hundreds of packages that get synced or uploaded every day.

This new behaviour is controlled by an extension of the --apt-pocket option. So you can say

  adt-run --apt-pocket=proposed=src:foo,libbar1,libbar-data ...

and then only the binaries from the foo source, libbar1, and libbar-data will come from -proposed, everything else from -release.

Caveat: Unfortunately apt’s pinning is rather limited. As soon as any of the explicitly listed packages depends on a package or version that is only available in -proposed, apt falls over and refuses the installation instead of taking the required dependencies from -proposed as well. In that case, adt-run falls back to the previous behaviour of using no pinning at all. (This unfortunately got worse with apt 1.1, bug report to be done). But it’s still helpful in many cases that don’t involve library transitions or other package sets that need to land in lockstep.

Unified testbed setup script

There are a number of changes that need to be made to testbeds so that tests can run with maximum performance (like running dpkg through eatmydata, disabling apt translations, or automatically using the host’s apt-cacher-ng), with reliable apt sources, and in a minimal environment (to detect missing dependencies and avoid interference from unrelated services — these days the standard cloud images have a lot of unnecessary fat). There is also a choice whether to apply these only once (every day) to an autopkgtest-specific base image, or on the fly to the current ephemeral testbed for every test run (via --setup-commands). Over time this led to quite a lot of code duplication between adt-setup-vm, adt-build-lxc, the new adt-build-lxd, cloud-vm-setup, and create-nova-image-new-release.

I have now cleaned this up, and there is just a single setup-commands/setup-testbed script which works for all kinds of testbeds (LXC, LXD, QEMU images, cloud instances), both for preparing an image with adt-buildvm-ubuntu-cloud, adt-build-lx[cd] or nova, and for preparing just the current ephemeral testbed via --setup-commands.

While this is mostly an internal refactorization, it does impact users who previously used the adt-setup-vm script for e. g. building Debian images with vmdebootstrap. This script is now gone, and the generic setup-testbed entirely replaces it.
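
As an illustration of the second use, preparing the ephemeral testbed on the fly while running a test could look like this; a sketch, assuming the script is shipped under /usr/share/autopkgtest/setup-commands/ and that adt-sid.img is a QEMU image you already built:

  adt-run libpng --setup-commands=/usr/share/autopkgtest/setup-commands/setup-testbed \
     --- qemu adt-sid.img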

Misc

Aside from the above, every new version has a handful of bug fixes and minor improvements, see the git log for details. As always, if you are interested in helping out or contributing a new feature, don’t hesitate to contact me or file a bug report.

Read more
Nicholas Skaggs

Final Beta Testing for Wily


Another cycle draws to a close, and it's time to test our images and make sure Wily is in good shape. We're entering crunch time.

How can I help? 
To help test, visit the iso tracker milestone page for final beta.  The goal is to verify the images in preparation for the release. Find those bugs! The information at the top of the page will help you if you need help reporting a bug or understanding how to test. 

Isotracker? 
There's a first time for everything! Check out the handy links on top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the qa community or me for help.

How long is this going on?
The testing runs through tomorrow, Thursday September 24th, when the images for final beta will be released. If you miss the deadline we still love getting results! Test against the daily image milestone instead.

Thanks and happy testing everyone!

Read more
Colin Ian King

The Canonical Hardware Enablement Team and I are continuing the work to add more ACPI table tests to the Firmware Test Suite (fwts). The latest 15.08.00 release added sanity checks for a number of additional ACPI tables.

The release also added a test for the ACPI _CPC revision 2 control method and we updated the ACPICA core to version 20150717.

Our aim is to continue to add support for existing and new ACPI tables to make fwts a comprehensive firmware test tool.  For more information about fwts, please refer to the fwts jump start wiki page.
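
If you want to run these checks on your own hardware, a typical invocation looks like the sketch below, assuming fwts is installed from the Ubuntu archive:

  $ # Install the firmware test suite.
  $ sudo apt install fwts
  $ # List the available tests, including the ACPI table checks.
  $ fwts --show-tests
  $ # Run the ACPI table sanity checks and the control method tests;
  $ # the results are written to results.log in the current directory.
  $ sudo fwts acpitables method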

Read more
Daniel Holbach

snappy

Snappy is evolving, becoming more robust and is getting loads of new users. This week will see a new stable release of Snappy. For us that’s reason enough to invite you all to our first Snappy Open House today.

Starting from 14:00 UTC today (2015-07-07), we are going to be on Ubuntu-on-Air, introducing the team, talking about what’s new, and talking about testing Snappy. If you want to get involved or want to get to know snappy, this is a great opportunity.

Hope to see you later on!

Read more
Nicholas Skaggs

Snappy Open House!

Introducing Snappy Open Houses! Snappy represents some new and exciting possibilities for ubuntu. A snappy open house is your chance to get familiar with the technology while helping test and break things! We plan to do an open house before each release as a chance for everyone to interact, provide feedback and help with testing. As such, this is a great way to get started in the snappy world!

So what exactly do I mean by an open house? We want to encourage the community to test with us, explore new features, check for possible regressions and exercise the documentation.  An open house is a chance to come and meet the snappy team developers and help QA test the new image.

During the open house, we'll host a live broadcast on ubuntuonair.com. As part of the broadcast, we'll speak with some of the developers behind snappy and show off new features in the upcoming release. We'll also demonstrate how to flash and test the new release so you can follow along and help test. Finally we'll answer any questions you have and stick around on IRC for a bit to discuss any issues found during testing.

In other words, it's time intended for you to come and try out snappy! I know what you are thinking: "I don't have a cool IoT device to run snappy with". You are in luck! You can run snappy on your desktop or laptop. You don't need a device, as you can install snappy on your local machine via KVM. If you do have a device, bring it and prepare to have some fun!

The first of these snappy open houses will be July 7th at 1400 UTC. Please stop by and help test with us, try out snappy, and meet the snappy team!

You can find out more information on the wiki. Mark your calendars and see you next Tuesday!

Read more