Canonical Voices

What Canonical ISD talks about

Posts tagged with 'django'

Anthony Lenton

The great Django command-extensions app gives you an easy way to integrate the Werkzeug debugger into your Django site.

If you’ve never used this debugger before, imagine the regular Django technical-details debugging page, with all the same super-useful information — environment variables, SQL queries, and a full traceback with local variables and a bit of code context for each frame.
Now imagine you could have a pdb prompt running within each frame of the traceback, actually running interactively via AJAX within the frames that you see.  That’s the kind of raw power Werkzeug gives you.

A big disclaimer though: this is not for running on production sites.  You’d be giving anybody who happens to stumble upon a failure page an interactive (unrestricted) Python terminal running with the web server’s permissions on your box.

So, that’s what django-command-extensions’ runserver_plus gives you out of the box.  With a bit of tweaking, here are a couple of other neat things you can do:

Make your server multithreaded

There are times when you need a multithreaded server in your development environment.  I’ve seen arguments that this is a smell of bad design, so here are two reasonable scenarios I’ve come across:

  • Developing a web API, you might have a view that provides a test consumer for the API, not intended to be available on production.  The test consumer view will make a (blocking) call out to the API provided by the same server, hanging everything if you don’t have multithreading.
  • Or, in larger systems where you have more than one application server, you might have a view or two that do a bit of communication between servers (to see if everything’s ok, to provide a neat server status screen), that you’ll need to test calling out to the same server during development.

Whatever the reason may be, if you need to enable threading, you can just set threaded=True when you call Werkzeug’s run_simple function:

runserver_threaded.py
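A minimal sketch of what such a command boils down to, assuming Werkzeug’s run_simple; the wrapper function, its name, and its defaults are illustrative — only the threaded flag is the point from the post:

```python
# Sketch of the key change behind a threaded runserver: pass
# threaded=True to Werkzeug's run_simple. The wrapper and its
# defaults are illustrative assumptions, not the original listing.
from werkzeug.serving import run_simple


def run_threaded(app, addr="127.0.0.1", port=8000):
    """Serve a WSGI app with the Werkzeug debugger enabled,
    handling each request in its own thread."""
    run_simple(addr, port, app,
               use_reloader=True,
               use_debugger=True,
               use_evalex=True,   # interactive consoles in tracebacks
               threaded=True)     # one thread per request
```

In a Django project you would pass the WSGI handler (django.core.handlers.wsgi.WSGIHandler()) as app.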

Debug your whole WSGI app instead of just Django

If you’re running Django as part of a larger WSGI stack (a non-Django web API, say, or some authentication WSGI middleware), you’d probably love to be able to run the whole stack in your development server, and even have those same great debugging features available for everything, not only Django.
You can do this by modifying the WSGI app that Werkzeug debugs:

myrunserver_plus.py
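A sketch of the idea, assuming Werkzeug’s DebuggedApplication wrapper; the composition helper and its name are illustrative, not the original listing:

```python
# Sketch: debug an entire WSGI stack, not just Django.
# DebuggedApplication is Werkzeug's interactive-traceback wrapper;
# the helper function here is an illustrative assumption.
from werkzeug.debug import DebuggedApplication


def make_debugged_stack(inner_app, *middlewares):
    """Compose middlewares around inner_app, then wrap the whole
    stack so Werkzeug's debugger catches errors from every layer."""
    app = inner_app
    for middleware in middlewares:
        app = middleware(app)
    # evalex=True enables the interactive (AJAX) consoles in tracebacks
    return DebuggedApplication(app, evalex=True)
```

In a runserver_plus variant you would hand this composed app to run_simple instead of the bare Django handler.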

It’s a tool with raw power that keeps getting better the more you look…

Michael Foord

After the release of Ubuntu Maverick Meerkat we have a brief time of housekeeping in the ISD team, where we work on clearing our technical debt and implementing improvements to our development processes. One of these improvements has been getting continuous integration set up for some of our projects. Continuous integration means not just having an automated test suite, but having an automated system for running the tests (continuously).

We have settled on Hudson as our CI server. Despite being a Java-based tool it is a popular choice in the Python world, mainly because Hudson is both easy to install and configure and provides an attractive, powerful web-based interface out of the box.

You can use Hudson to report a test run as pass or fail, and to view the console output from a run, with virtually no work at all: simply provide a shell command for running your tests and off you go. Hudson works best when your test run generates an XML description of itself in a poorly specified (but widely implemented) format called JUnit-XML. This is the same JUnit that was the original inspiration for Python’s unittest testing library.

With JUnit XML describing your test run you can use the Hudson UI to view individual test failures and generate pretty graphs for how the time taken by tests changes over successive builds.
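For reference, a minimal JUnit-XML report looks roughly like this (class and test names are illustrative; the format has no official schema):

```xml
<testsuite tests="2" errors="0" failures="1" time="0.004">
  <testcase classname="myapp.tests.SmokeTest" name="test_ok" time="0.001"/>
  <testcase classname="myapp.tests.SmokeTest" name="test_broken" time="0.003">
    <failure type="AssertionError">expected 200, got 500</failure>
  </testcase>
</testsuite>
```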

Our projects are based on Django, which in turn uses the standard unittest test runner for running tests. Python has a wealth of different choices for coaxing JUnit-XML out of a unittest test run. As we’re deploying on Ubuntu Lucid servers we needed a solution that was easy to integrate with the Lucid distribution of Django (approximately version 1.1.2).

After trying several alternatives we settled on the pyjunitxml package, which just happens to be the creation of Robert Collins, a fellow Canonical developer.

For a suite (unittest terminology for a collection) of tests, getting a junit-xml report from pyjunitxml is gloriously easy. Here’s the code:

    import junitxml

    # 'suite' is the unittest.TestSuite holding the tests to run
    with open('xmlresults.xml', 'w') as report:
        result = junitxml.JUnitXmlResult(report)
        result.startTestRun()
        suite.run(result)
        result.stopTestRun()

If you’re familiar with unittest code you may be surprised that there is no test runner involved. The unittest TextTestRunner class is useful for generating console output of a test run, but as this isn’t needed for a run under continuous integration we only need a test suite and the test result object from junitxml.
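Here is the runner-less pattern in isolation, using only the standard library; substituting junitxml.JUnitXmlResult(report) for the plain TestResult is what produces the XML report:

```python
# Runner-less test execution: a suite is run directly against a
# result object, with no TextTestRunner and no console output.
import unittest


class SmokeTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)


suite = unittest.TestLoader().loadTestsFromTestCase(SmokeTest)
result = unittest.TestResult()  # junitxml.JUnitXmlResult(report) slots in here
result.startTestRun()
suite.run(result)
result.stopTestRun()
print(result.testsRun, len(result.failures))  # → 1 0
```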

To integrate this into a standard Django test run we had to copy the run_tests function from the django.test.simple module and add a parameter to use this code when running under Hudson.

Unsurprisingly our projects are managed on Launchpad and use Bazaar for version control. Although not enabled by default Hudson ships with a plugin for Bazaar integration. Here’s a guide for setting up Hudson for a Launchpad / Bazaar based project. It assumes you have a script “run_tests.sh” for running your test suite with the junitxml code active:

First install Hudson. For Debian and Ubuntu this link gives the details:

http://hudson-ci.org/debian/

(You will also need a Java runtime (JRE) installed, which I don’t think that guide covers.)

Once installed, Hudson will start on boot. Immediately after install you may need to start it manually:

sudo /etc/init.d/hudson restart

You can view the locally running Hudson from this URL:

http://localhost:8080/

You need to enable the bzr plugin in Hudson. You can do this through the Hudson plugin manager:

http://localhost:8080/pluginManager/

Find it in the ‘available’ tab of the plugin manager. You will need to restart Hudson after installing the plugin.

Next you need to give the Hudson user access to your repository. You can skip this step if your repository is public and can be fetched by an anonymous user.

  • Switch to the Hudson user: sudo su - hudson
  • Tell bzr your launchpad user: bzr lp-login your-username
  • Generate ssh keys without a passphrase: ssh-keygen -t rsa
  • Add the public key to your user on launchpad. https://launchpad.net/~your-username/+editsshkeys

Next create a new Hudson job, with whatever name you want.

  • Select Build a free-style software project
  • Source code management: Bazaar
  • Repository URL: lp:~your-username/project-name/branch
  • Add build step -> Execute Shell: run_tests.sh
  • Post-build action: Check Publish JUnit test result report
  • Test report XMLs: xmlresults.xml

Finally: Build Now

You can watch the build in progress via the console output.

There is no console output during the test run itself; the results go to the JUnit XML file used by Hudson to report the results. The console output is still useful, as any build errors or unhandled exceptions will show up there.

At the moment Hudson is happily running scheduled builds on my local machine. The next step is to run it on our private cloud and decide whether or not to use a commit hook to trigger the builds. I have a personal preference for scheduled builds (as close to back-to-back as possible). Multiple runs per revision give you the opportunity to shake out fragilities in your tests and collect better data (more data == better data) for performance tests.
