Canonical Voices

Posts tagged with 'lava'

Chris Johnston

This week I will be attending Linaro Connect Q1.12 in Redwood City, California. In fact, I’m on an American Airlines plane at 34,000 feet heading there now. In-flight WiFi is awesome!

Over the past two months Michael Hall and I have been doing a large amount of work on the Summit Scheduler to get it ready for Connect this week, including modifying more than 2,400 lines of Summit code. You can find out more about that in my previous post.

I have a few things that I want to get out of Connect. The first is to get feedback on the changes to Summit, and to figure out what else we may need to change. The second is to learn more about the Beagleboard-xM that I have and the many different things it can be used for. The third is to learn about Linaro’s LAVA.

LAVA is an automated validation suite used to run all sorts of different tests on the products that Linaro produces. The things I would like to get out of Connect in relation to LAVA are how to set up and run LAVA, how to set it up to run tests, and how to produce results and display them the way that I want.
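As far as I can tell, a LAVA test job is described by a JSON file that you submit to the server, which then deploys an image, boots it, runs tests, and pushes results to the dashboard. Here is a rough sketch of the shape such a job might take – the action names, field names, and URLs below are my assumptions for illustration, not a verified schema:

```python
import json

# Illustrative job description: deploy an image to a Panda board, boot
# it, run a test suite, and submit the results. All names and URLs here
# are made up for the sketch.
job = {
    "job_name": "boot-and-test-panda",
    "device_type": "panda",       # which board family to run on
    "timeout": 18000,             # seconds before the job is abandoned
    "actions": [
        {"command": "deploy_linaro_image",
         "parameters": {"hwpack": "http://example.com/hwpack.tar.gz",
                        "rootfs": "http://example.com/rootfs.tar.gz"}},
        {"command": "boot_linaro_image"},
        {"command": "lava_test_run",
         "parameters": {"test_name": "ltp"}},
        {"command": "submit_results",
         "parameters": {"server": "http://example.com/lava-server/RPC2/",
                        "stream": "/anonymous/panda-daily/"}},
    ],
}

print(json.dumps(job, indent=2))
```

Hopefully by the end of the week I will know how close this guess is to the real thing.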

If you are at Linaro Connect, and would be willing to talk with me about Summit and the way you use it and your thoughts on the changes, please contact me and we will set aside a time to meet.

Read more
Michael Hudson-Doyle

Viewing lava results in android-build

It seems like it’s taken a long time to get all the pieces hooked up, but I’m very happy to report that you can finally see the results of testing an Android build in LAVA directly on the build page! If you go to a build page for a build that is automatically tested, such as (after a little while for some ajax to happen), you should see a table listing the test runs we ran and a summary of the pass/fail counts:

It may not seem all that earth shattering, but there have been many bits and pieces that needed to be put together to get this to work:

  • We needed to build the scheduler, so that we could submit this job to run on the first Panda board to become available.
  • We needed to build the infrastructure to allow the job to be submitted with some measure of security from a remote host.
  • We needed to work on the build system scripts to submit the job without disclosing the authorization token.
  • We needed to add views to the scheduler and dashboard that present data in an ajax-friendly way.
  • We needed to work on the build system frontend to make use of these views and format the data.
  • And finally, we needed to test and deploy all of these changes.
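The token-handling step is worth a sketch. The idea is that the build scripts keep the authorization token in a file outside the build logs and send it in a request header, so it never shows up in a URL or in the job payload itself. The endpoint path and header name below are illustrative assumptions, not the actual API:

```python
import json
import urllib.request

def build_submit_request(server_url, job_payload, token_path):
    """Build an authenticated job-submission request.

    The token is read from a file kept out of the build logs and sent
    only in a header, so it never appears in the URL or the job body.
    (The endpoint and header name are made up for this sketch.)
    """
    with open(token_path) as f:
        token = f.read().strip()
    body = json.dumps(job_payload).encode("utf-8")
    return urllib.request.Request(
        server_url + "/scheduler/submit_job",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Token " + token,  # the secret travels only here
        },
    )
```

A build script would then pass the returned request to `urllib.request.urlopen`, and anything that logs the URL or the job JSON stays free of the secret.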

So I think I’m justified in being happy to have this finally working in production.

Read more

Michael Hudson-Doyle

A few weeks ago now, most of the Linaro engineers met at “Linaro Connect”, the new name for our get-together.  Linaro bootstrapped its processes by borrowing heavily from Ubuntu, including the “two planning meetings, two hacking meetings” pattern. Over the last year, though, it’s become clear that this isn’t totally appropriate for Linaro, and while we’re sticking to the same number of meetings, four a year, each meeting now has the same status and will be a mix of planning and hacking.  Based on a sample size of 1, this seems to be a good idea – last week’s meeting was excellent.  Very intense, which is why I never got around to blogging during the event, but also very productive.

The validation team had a dedicated hacking room, and on Monday we set up a “mini-Lab” that we could run tests on.  This took a surprisingly (and pleasingly) short amount of time, although we weren’t as neat about the cabling as we are in the real lab:

The main awkwardness in this kind of setup, where you are connecting to the serial ports via USB rather than a console server, is that the device names of the USB serial dongles are not predictable, so naming boards becomes a challenge.  Dave worked out a set of hacks to mostly make this work, although I know nothing about the details.
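I don’t know what Dave actually did, but the usual trick for this problem is a udev rules file that matches each adapter’s unique attributes (such as its serial number) and creates a stable symlink for it. Something along these lines, with entirely made-up serial numbers:

```
# /etc/udev/rules.d/99-lava-serial.rules (illustrative sketch; the
# vendor/product IDs are for a common FTDI adapter, the serial numbers
# are invented)
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    ATTRS{serial}=="FTA1B2C3", SYMLINK+="panda01"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    ATTRS{serial}=="FTD4E5F6", SYMLINK+="panda02"
```

With rules like these in place, the test tooling can always open /dev/panda01 for a given board, no matter which ttyUSB number the kernel handed out that day.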

Now that a few weeks have passed I can’t really remember what we did next.

Read more

Michael Hudson-Doyle

This is the first of a hopefully weekly series of posts describing the work my team is doing.  This means that this post is probably mostly background about the team’s goals, but in the coming weeks I intend to outline what we’ve done in the past week and plans for the next week.

We’re all about validation obviously – telling whether the code the other Linaro engineers are producing “works” in whatever sense that means.  It could be a simple compile or boot test for the kernel, testing whether the code produced by gcc is smaller or faster, whether a kernel scheduler change reduces power consumption for a certain workload, or many other things.

Beyond simple validation though, what we’re really about is automated validation.  We want to build and test the kernel on all supported boards every day.  We want to build and test proposed android changes in gerrit before they are landed, and the same for the gcc work.

We have built up a validation lab in Cambridge – it contains the boards from the Linaro members that we want to test on, but also Cyclades serial console servers, routers, and a few servers.  It looks a bit like this:

The thing that makes our task more complicated than “just install jenkins” is the hardware we run on, of course, and the fact that for many of our tests we need to boot a custom kernel on said hardware.  We’ve written a program (called “lava-dispatcher” or just “the dispatcher”) that knows how to install a custom hwpack and root filesystem by manipulating a board over a serial link, another (“lava-scheduler”) that handles incoming requests to run tests and runs the dispatcher as appropriate and yet another (“lava-dashboard”, aka “launch-control”) that displays the results from the tests.  We’ve also built up a number of infrastructure projects that help us run these main three, and command line clients for most things.  You can see all the code at and the running instance is at
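To give a flavour of what “manipulating a board over a serial link” involves: the dispatcher essentially types commands at the board’s bootloader prompt the way a human would. The sketch below is not the real lava-dispatcher code – the U-Boot commands, load addresses, and boot arguments are illustrative guesses – but it shows the general shape of the technique:

```python
def uboot_commands(kernel_addr, initrd_addr, bootargs):
    """Return the command sequence a dispatcher-like tool might type at
    a U-Boot prompt over serial to boot a freshly deployed image.
    (Addresses, partition numbers, and file names are illustrative.)
    """
    return [
        "setenv bootargs '%s'" % bootargs,
        "fatload mmc 0:1 %s uImage" % kernel_addr,    # load the kernel
        "fatload mmc 0:1 %s uInitrd" % initrd_addr,   # load the initrd
        "bootm %s %s" % (kernel_addr, initrd_addr),   # boot both
    ]

# In a real tool these lines would be written to the board's serial
# port one at a time, waiting for the U-Boot prompt between commands,
# e.g. with pyserial:
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 115200, timeout=5)
#   for cmd in uboot_commands("0x80000000", "0x81600000",
#                             "console=ttyO2,115200n8 root=/dev/mmcblk0p2"):
#       port.write((cmd + "\n").encode())
```

The hard part, of course, is not generating the commands but coping with all the ways real hardware fails to reach the prompt you were expecting.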

So, what are we working on this week?  The main areas are improving the UI of the scheduler – currently it runs jobs, but is very opaque about what they are doing – improving the front page to make it clearer to the uninitiated what validation is happening, and improving the reliability of the dispatcher.  We’re also hard at work “joining the dots” so that the daily builds of Android that are already being produced can be tested daily, with the build output and test results all visible from the same page.

Next week I’ll be at Linaro Connect in the UK, but I’ll try to update here on what we get done this week and what our plans are for the Connect.

Read more