Archive for October, 2010

Ricardo Kirkner

configglue: a configuration library on steroids

What is configglue?

configglue is a library that glues together Python’s optparse.OptionParser and
ConfigParser.ConfigParser, so that you don’t have to repeat yourself when you
want to export the same options to a configuration file and a commandline
interface.
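To see the repetition configglue removes, here is a sketch using only the standard library (Python 3 names shown: configparser instead of ConfigParser): the same option has to be declared for the command line and then handled all over again for the configuration file.

```python
import optparse        # the module configglue glues on the command-line side
import configparser    # 'ConfigParser' under Python 2

# Declare --port for the command line...
parser = optparse.OptionParser()
parser.add_option('--port', type='int', default=8080)
options, _ = parser.parse_args([])

# ...and handle the very same option again for the config file.
config = configparser.ConfigParser()
config.read_string('[server]\nport = 9000\n')
if config.has_option('server', 'port'):
    port = config.getint('server', 'port')
else:
    port = options.port
print(port)  # the file value wins over the default: 9000
```

configglue lets you declare the option once, in a schema, and get both interfaces for free.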

The main features of configglue are:

  • ini-style configuration files
  • schema-based configuration
  • commandline integration
  • configuration validation

Why would I want to use configglue?

Some of the benefits of using configglue are that it allows you to:

  • separate configuration declaration (which options are available) from
    definition (what value does each option take)
  • validate configuration files (catch missing required options, prevent
    typos in option names, assert each option value is of the correct type)
  • use standards-compatible configuration files (standard ini-files)
  • use standard types out of the box (integer, string, bool, tuple, list, dict)
  • create your own custom types beyond what’s provided in the library
  • easily support commandline integration
  • override options locally by using several configuration files (useful for
    separating configuration files for different environments)
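The last point, local overrides via multiple files, can be pictured with two plain ini files (hypothetical option names, for illustration only); values read from the later file simply replace those from the earlier one:

```ini
; main.cfg - shared defaults
[__main__]
debug = false
workers = 4

; local.cfg - per-environment override, loaded after main.cfg
[__main__]
debug = true
```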

configglue and django-configglue are already available in Ubuntu 10.10 (Maverick), so they can be installed via apt-get. configglue should already be installed if you have the desktop edition, as it’s used by the Ubuntu One client.

Who else is using configglue?

  • Ubuntu Pay
  • Ubuntu Software Center
  • Ubuntu Single Sign On
  • Ubuntu One

Got curious?

You can find a quickstart guide for configglue on
http://packages.python.org/configglue and you can get its code at
http://launchpad.net/configglue.

As an additional bonus, there is another project called django-configglue
which allows you to use all the benefits of configglue on your Django
projects. You can find a quickstart guide for django-configglue on
http://packages.python.org/django-configglue and you can get its code at
http://launchpad.net/django-configglue.

Michael Foord

Continuous Integration with Django and Hudson

After the release of Ubuntu Maverick Meerkat we have a brief period of housekeeping in the ISD team, where we work on clearing our technical debt and implementing improvements to our development processes. One of these improvements has been getting continuous integration set up for some of our projects. Continuous integration means not just having an automated test suite, but having an automated system for running the tests (continuously).

We have settled on Hudson as our CI server. Despite being a Java-based tool it is a popular choice in the Python world, mainly because Hudson is both easy to install and configure and provides an attractive, powerful web-based interface out of the box.

You can use Hudson to report whether a test run passed or failed, and to view its console output, with virtually no work at all. Simply provide a shell command for running your tests and off you go. Hudson works best when your test run generates an XML description of the results in a poorly specified (but widely implemented) format called JUnit-XML. This is the same JUnit that was the original inspiration for Python’s unittest testing library.
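For reference, a minimal JUnit-XML report looks roughly like this (the format is, as noted, only loosely specified, so the exact attributes vary between tools; names below are made up for illustration):

```xml
<testsuite name="myapp.tests" tests="2" failures="1" errors="0" time="0.004">
  <testcase classname="myapp.tests.LoginTest" name="test_login" time="0.002"/>
  <testcase classname="myapp.tests.LoginTest" name="test_logout" time="0.002">
    <failure type="AssertionError">expected 200, got 302</failure>
  </testcase>
</testsuite>
```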

With JUnit XML describing your test run you can use the Hudson UI to view individual test failures and generate pretty graphs for how the time taken by tests changes over successive builds.

Our projects are based on Django, which in turn uses the standard unittest test runner for running tests. Python has a wealth of different choices for coaxing JUnit-XML out of a unittest test run. As we’re deploying on Ubuntu Lucid servers, we needed a solution that was easy to integrate with the Lucid distribution of Django (approximately version 1.1.2).

After trying several alternatives we settled on the pyjunitxml package, which just happens to be the creation of Robert Collins, a fellow Canonical developer.

For a suite (unittest terminology for a collection) of tests, getting a junit-xml report from pyjunitxml is gloriously easy. Here’s the code:

    import junitxml

    # 'suite' is any unittest.TestSuite assembled beforehand
    with open('xmlresults.xml', 'w') as report:
        result = junitxml.JUnitXmlResult(report)
        result.startTestRun()
        suite.run(result)
        result.stopTestRun()

If you’re familiar with unittest code, you may be surprised that there is no test runner involved. The unittest TextTestRunner class is useful for generating console output of a test run, but as this isn’t needed for a run under continuous integration, we only need a test suite and the test result object from junitxml.
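The same runner-less pattern works with nothing but the standard library: drive the suite directly with a plain unittest.TestResult and inspect it afterwards (the toy test case here is made up for illustration):

```python
import unittest

class SampleTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

suite = unittest.TestLoader().loadTestsFromTestCase(SampleTest)

# No TextTestRunner: the suite is driven directly by a result object,
# exactly as junitxml's result object is driven in the snippet above.
result = unittest.TestResult()
suite.run(result)
print(result.testsRun, len(result.failures))
```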

To integrate this into a standard django test run we had to copy the run_tests function from the django.test.simple module and add a parameter to use this code when running under Hudson.

Unsurprisingly our projects are managed on Launchpad and use Bazaar for version control. Although not enabled by default, Hudson ships with a plugin for Bazaar integration. Here’s a guide for setting up Hudson for a Launchpad / Bazaar based project. It assumes you have a script “run_tests.sh” for running your test suite with the junitxml code active:

First install Hudson. For Debian and Ubuntu this link gives the details:

http://hudson-ci.org/debian/

(You will also need a Java runtime (JRE) installed, which I don’t think that guide covers.)

Once installed, Hudson will start on boot. Immediately after installation you may need to start it manually:

sudo /etc/init.d/hudson restart

You can view the locally running Hudson from this URL:

http://localhost:8080/

You need to enable the bzr plugin in Hudson. You can do this through the Hudson plugin manager:

http://localhost:8080/pluginManager/

Find it in the ‘available’ tab of the plugin manager. You will need to restart Hudson after installing the plugin.

Next you need to give the Hudson user access to your repository. You can skip this step if your repository is public and can be fetched by an anonymous user.

  • Switch to the Hudson user: sudo su - hudson
  • Tell bzr your launchpad user: bzr lp-login your-username
  • Generate ssh keys without a passphrase: ssh-keygen -t rsa
  • Add the public key to your user on launchpad. https://launchpad.net/~your-username/+editsshkeys

Next create a new Hudson job, with whatever name you want.

  • Select Build a free-style software project
  • Source code management: Bazaar
  • Repository URL: lp:~your-username/project-name/branch
  • Add build step -> Execute Shell: run_tests.sh
  • Post-build action: check “Publish JUnit test result report”
  • Test report XMLs: xmlresults.xml

Finally: Build Now

You can watch the build in progress via the console output.

There is no console output during the test run itself; the results go to the JUnit-XML file that Hudson uses to report them. The console output is still useful, as any build errors or unhandled exceptions will show up there.

At the moment Hudson is happily running scheduled builds on my local machine. The next step is to run it on our private cloud and decide whether or not to use a commit hook to trigger the builds. I have a personal preference for scheduled builds (as close to back-to-back as possible). Multiple runs per revision give you the opportunity to shake out fragilities in your tests and collect better data (more data == better data) for performance tests.