Canonical Voices

Posts tagged with 'continuous integration'

David Murphy (schwuk)

As part of our self-improvement and knowledge sharing within Canonical, our group (Professional and Engineering Services) regularly – at least once a month – runs what we call an “InfoSession”. Basically it is a Google Hangout on Air with a single presenter on a topic that is of interest/relevance to others, and one of my responsibilities is organising them. Previously we have had sessions on:

  • Go (a couple of sessions in fact)
  • SystemTap
  • Localization (l10n) and internationalization (i18n)
  • Juju
  • Graphviz
  • …and many others…

Today the session was on continuous integration with Tarmac and Vagrant, presented by Daniel Manrique from our certification team. In his own words:

Merge requests and code reviews are a fact of life in Canonical. Most projects start by manually merging approved requests, including running a test suite prior to merging.

This infosession will talk about tools that automate this workflow (Tarmac), while leveraging your project’s test suite to ensure quality, and virtual machines (using Vagrant) to provide multi-release, repeatable testing.

Like most of our sessions it is publicly available; here it is for your viewing pleasure:

Michael Foord

After the release of Ubuntu Maverick Meerkat we have a brief period of housekeeping in the ISD team, where we work on clearing our technical debt and implementing improvements to our development processes. One of these improvements has been getting continuous integration set up for some of our projects. Continuous integration means not just having an automated test suite, but having an automated system for running the tests (continuously).

We have settled on Hudson as our CI server. Despite being a Java-based tool it is a popular choice in the Python world, mainly because Hudson is easy to install and configure and provides an attractive, powerful web-based interface out of the box.

You can use Hudson to report whether a test run passed or failed, and to view its console output, with virtually no work at all: simply provide a shell command for running your tests and off you go. Hudson works best when your test run also generates an XML description of the results in a poorly specified (but widely implemented) format called JUnit-XML. This is the same JUnit that was the original inspiration for the Python unittest testing library.

With JUnit XML describing your test run you can use the Hudson UI to view individual test failures and generate pretty graphs for how the time taken by tests changes over successive builds.
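
To give a flavour of the format, a report for a run containing a single passing test looks roughly like this (the test names here are made up, and different producers vary slightly in the attributes they emit, which is part of why the format is only loosely specified):

    <testsuite errors="0" failures="0" name="" tests="1" time="0.004">
      <testcase classname="myproject.tests.TestSomething" name="test_it_works" time="0.004"/>
    </testsuite>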

Our projects are based on Django, which in turn uses the standard unittest test runner for running tests. Python has a wealth of different choices for coaxing JUnit-XML out of a unittest test run. As we're deploying on Ubuntu Lucid servers we needed a solution that was easy to integrate with the Lucid distribution of Django (approximately version 1.1.2).

After trying several alternatives we settled on the pyjunitxml package, which just happens to be the creation of Robert Collins, a fellow Canonical developer.

For a suite (unittest terminology for a collection) of tests, getting a junit-xml report from pyjunitxml is gloriously easy. Here’s the code:

    import junitxml
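    # 'suite' below is an ordinary unittest.TestSuite, built however your
    # project normally assembles its tests.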
    with open('xmlresults.xml', 'w') as report:
        result = junitxml.JUnitXmlResult(report)
        result.startTestRun()
        suite.run(result)
        result.stopTestRun()

If you’re familiar with unittest code you may be surprised that there is no test runner involved. The unittest TextTestRunner class is useful for generating console output of a test run, but as this isn’t needed for a run under continuous integration we only need a test suite and the test result object from junitxml.

To integrate this into a standard Django test run we had to copy the run_tests function from the django.test.simple module and add a parameter to use this code when running under Hudson.
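
In outline, the change amounts to running the suite with the junitxml result object instead of the usual TextTestRunner when that parameter is set. Here is a rough sketch (the function and parameter names are illustrative rather than the exact code we ended up with):

    import unittest

    import junitxml

    def run_suite(suite, verbosity=1, use_junitxml=False):
        """Run an already-built test suite, optionally writing JUnit-XML for Hudson."""
        if use_junitxml:
            # Write the results to the file Hudson will be configured to read.
            with open('xmlresults.xml', 'w') as report:
                result = junitxml.JUnitXmlResult(report)
                result.startTestRun()
                suite.run(result)
                result.stopTestRun()
            return result
        # An ordinary developer run keeps the usual console output.
        return unittest.TextTestRunner(verbosity=verbosity).run(suite)

The rest of the copied run_tests (building the suite, setting up and tearing down the test database) stays as it was.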

Unsurprisingly our projects are managed on Launchpad and use Bazaar for version control. Although it is not enabled by default, Hudson has a plugin for Bazaar integration. Here's a guide for setting up Hudson for a Launchpad/Bazaar-based project. It assumes you have a script “run_tests.sh” for running your test suite with the junitxml code active:

First install Hudson. For Debian and Ubuntu this link gives the details:

http://hudson-ci.org/debian/

(You will also need a Java runtime (JRE) installed, which I don't think that guide covers.)

Once installed, Hudson will start on boot. Immediately after installation you may need to start it manually:

sudo /etc/init.d/hudson restart

You can view the locally running Hudson from this URL:

http://localhost:8080/

You need to enable the bzr plugin in Hudson. You can do this through the Hudson plugin manager:

http://localhost:8080/pluginManager/

Find it in the ‘available’ tab of the plugin manager. You will need to restart Hudson after installing the plugin.

Next you need to give the Hudson user access to your repository. You can skip this step if your repository is public and can be fetched by an anonymous user.

  • Switch to the Hudson user: sudo su - hudson
  • Tell bzr your launchpad user: bzr lp-login your-username
  • Generate ssh keys without a passphrase: ssh-keygen -t rsa
  • Add the public key to your user on Launchpad: https://launchpad.net/~your-username/+editsshkeys

Next create a new Hudson job, with whatever name you want.

  • Select Build a free-style software project
  • Source code management: Bazaar
  • Repository URL: lp:~your-username/project-name/branch
  • Add build step -> Execute Shell: run_tests.sh
  • Post-build action: check “Publish JUnit test result report”
  • Test report XMLs: xmlresults.xml

Finally: Build Now

You can watch the build in progress via the console output.

There is no console output during the test run itself; the results go to the JUnit-XML file that Hudson uses to report them. The console output is still useful, as any build errors or unhandled exceptions will show up there.

At the moment Hudson is happily running scheduled builds on my local machine. The next step is to run it on our private cloud and decide whether or not to use a commit hook to trigger the builds. I have a personal preference for scheduled builds (as close to back-to-back as possible). Multiple runs per revision give you the opportunity to shake out fragilities in your tests and collect better data (more data == better data) for performance tests.
