Canonical Voices

What James Westby talks about

At UDS last week there was another "Testing in Ubuntu" session. During the event I gave a brief presentation on monitoring and testability. The thesis was that there are a lot of parallels between monitoring and testing, so many that it's sometimes worth thinking of monitoring as a type of testing. It follows that great monitoring requires a testable system, and that you should think about monitoring right at the start so that you build a monitorable system as well as a testable one.

You can watch a video of the talk here. (Thanks to the video team for recording it and getting it online quickly.)

I have two main questions. Firstly, what are the conventional names for the "passive" and "active" monitoring that I describe? Secondly, do you agree with me about monitoring?

Read more

I recently finished reading "All Art is Propaganda" by George Orwell, a collection of some of his critical essays. It was a fascinating read, and I would recommend it. Each of the essays is thought-provoking and enlightening, and the topics covered are numerous and varied.

The most interesting feature of the book though wasn't the subject matter of the essays, but the organisation of them. The editor decided to put them in chronological order, meaning that you see some development of ideas over the essays, and different topics rise and fall in prominence.

While that's certainly not novel, the effect of structuring the book in this way was very noticeable in this case for me. I saw a lot of parallels to the impressions I've had from following @RealTimeWWII on twitter. This account is "live tweeting" the Second World War as if it were happening today (currently in 1940).

This artifice brings a whole fresh appreciation of this period that I have learnt so much about. Consuming the events at the pace they occurred gives time to reflect on each one, and setting aside my knowledge of what followed gives a greater understanding of what it would have been like to live at that time. The time-compressing effect of looking back tends to obscure the uncertainty and fear of those years, while the slowness with which some events unfolded accentuates it. Consuming this via twitter, with its headline-like format mixing in with the news of today, heightens the effect.

While it's something of a loss that we aren't able to know what Orwell would say about the events of today, or what he would have changed in these essays with the addition of hindsight, there's an undeniable value in reading this primary source. While hindsight adds, it also takes away, blurring memories and changing perspectives. Reading the essays allows you to pick up on the thinking of those living through the events that we think we know so well.

For instance Orwell seemed to believe that Soviet Russia was a greater threat than Nazism. The essays in the book run from 1940 to 1949, and there are many more words devoted to the Soviets than the Nazis throughout. His writing suggests that he thought the techniques the Soviets used to achieve and maintain power were less well known and understood, and would be more effective over a long period.

Having come to appreciate these benefits I plan to redouble my efforts to choose books from a varied set of sources, including from different times, and to avoid falling into the trap of thinking that more recent must mean better because knowledge is always increasing.

Read more

Today is the first day of the Linaro Connect in Cambridge. Linaro has gathered to spend a week talking, coding and having fun.

The Infrastructure team is spending most of the week coding, on a few select topics, chosen to make good use of the time that we have together.

In order to help us focus on our goals for the week I've put together a hard copy version of status.linaro.org.

/images/connect-progress-start.jpg

We'll be updating it during the week as we make progress. I'll report back on how it looks at the end of the week.

Read more

If you are an application developer and you want to distribute your new application for a Linux distribution, then you currently have several hurdles in your path. Beyond picking which distribution to start with, you either have to learn a packaging format well enough to do the work yourself, or find someone who can do it for you.

At the early stages though neither of these options is particularly compelling. You don't want to learn a packaging format, as there is lots of code to write, and that's what you want to focus on. Finding someone to do the work for you would be great, but there are far more applications than skilled packagers, and convincing someone to help you with something larval is tough: there are going to be a lot of updates, with plenty of churn, to stay on top of, and it may be too early for them to tell if the application will be any good.

This is where pkgme comes in. This is a tool that can take care of the packaging for you, so that you can focus on writing the code, and skilled packagers can focus on packages that need high-quality packaging as they will have lots of users.

This isn't a new idea, and there are plenty of tools out there to generate the packaging for e.g. a Python application. I don't think it is a particularly good use of developer time to produce tools like that for every language/project type out there.

Instead, a few of us created pkgme. This is a tool in two parts. The first part knows about packaging, and how to create the necessary files to build a working package, but it doesn't know anything about your application. This knowledge is delegated to a backend, which doesn't need to understand packaging, and just needs to be able to tell pkgme certain facts about the application.

pkgme is now at a stage where we would like to work with people to develop backends for whatever application type you would like (Python/Ruby on Rails/GNOME/KDE/CMake/Autotools/Vala etc.). You don't have to be an expert on packaging, or indeed on the project type you want to work on. All it takes is writing a few scripts (in whatever language makes sense), which can introspect an application and report things such as the name, version, dependencies, etc.
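
To give a flavour of what such a script might do, here is a rough sketch of a backend for a Python project. It is only an illustration: the facts it guesses at, the output format, and the idea of a single script are assumptions of mine, not pkgme's actual backend interface, which is described in the documentation.

#!/usr/bin/env python
# Hypothetical sketch only: introspect a Python project in the current
# directory and print some facts about it. The output format and the set of
# facts are assumptions for illustration, not pkgme's real backend interface.
import json
import os
import re


def guess_facts(path):
    facts = {}
    setup_py = os.path.join(path, "setup.py")
    if os.path.exists(setup_py):
        with open(setup_py) as f:
            contents = f.read()
        for key in ("name", "version"):
            match = re.search(r"%s\s*=\s*['\"]([^'\"]+)['\"]" % key, contents)
            if match:
                facts[key] = match.group(1)
    facts.setdefault("buildsystem", "python_distutils")
    return facts


if __name__ == "__main__":
    print(json.dumps(guess_facts(os.getcwd())))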

If this sounds like something that you would like to do then please take a look at the documentation, write the scripts, and then submit your backend for inclusion in pkgme.

You can also contact the developers, see the nascent website at pkgme.net, or visit the Launchpad page. (We are also very interested in help with the website and documentation if that is where your skills or interests lie.)

Read more

This was the confusing part when I first ran couchapp to create a new app: I couldn't really see where the "entry point" of the app was. In the hope that it might help someone else I'm going to present a quick overview of the default setup.

index.html

The index.html page is a static attachment, and the user starts by requesting it with their browser.

It has a small amount of static HTML, part of which creates a div for the javascript to put the data in.

Either inline, or in an included file, there is a small bit of javascript that will initialise the couchapp.

By default this will use the div with the id items, and will attach an evently widget to it.

evently

The evently widget that is attached will then either have an _init event, or a _changes event, either of which will be immediately run by evently.

This event will usually make a couchdb query to get data to transform to HTML and present to the user (see part three for how this works.)

Once that data has been displayed to the user, any combination of evently widgets or javascript can be used to make further queries and build an app that works however you like.

Previous installments

See part one, part two, and part three.

Read more

Introducing soupmatchers

jml just announced testtools 0.9.8 and in it mentioned the soupmatchers project that I started. Given that I haven't talked about it here before, I wanted to do a post to introduce it, and explain some of the rationale behind it.

soupmatchers is a library for unit testing HTML, allowing you to assert that certain things are present or not within an HTML string. Asserting this based on substring matching is going to be too fragile to be usable, and so soupmatchers works on a parsed representation of the HTML. It uses the wonderful BeautifulSoup library for parsing the HTML, and allows you to assert the presence or not of tags based on the attributes that you care about.

from soupmatchers import HTMLContains, Tag  # assuming these are the top-level names

self.assertThat(some_html,
                HTMLContains(Tag('testtools link', 'a',
                                 attrs={'href': 'https://launchpad.net/testtools'})))

You can see more examples in the README.

Basing this on the testtools matchers framework allows you to do this in a semi-declarative way. I think there is a lot of potential here to improve your unit tests. For instance you can start to build a suite of matchers tailored to talking about the HTML that your application outputs. You can have matchers that match areas of the page, and then talk about other elements relative to them ("This link is placed within the sidebar"). One thing that particularly interests me is to create a class hierarchy that allows you to test particular things across your application. For instance, you could have an ExternalLink class that asserts that a particular class is set on all of your external links. Assuming that you use this at the appropriate places in your tests, you will know that the style applied to that class will be on all external links. Should you wish to change the way that external links are represented in the HTML, you can change the one class and your tests should tell you all the places where the code has to be updated.
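
As a minimal sketch of that idea, assuming the Tag and HTMLContains names used above are importable from soupmatchers, and assuming an application convention of putting an "external" class on external links (both assumptions of mine, not part of the library):

from soupmatchers import HTMLContains, Tag  # assuming these top-level names


class ExternalLink(Tag):
    """Matcher for this (hypothetical) application's external links.

    It encodes the convention that every external link carries the
    "external" CSS class, so if that convention changes only this class
    needs updating and the failing tests point at the HTML to fix.
    """

    def __init__(self, name, href):
        super(ExternalLink, self).__init__(
            name, 'a', attrs={'href': href, 'class': 'external'})


# In a test:
#     self.assertThat(some_html,
#                     HTMLContains(ExternalLink('testtools link',
#                                               'https://launchpad.net/testtools')))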

Please go ahead and try the library and let me know how it could be improved.

Read more

[ Apologies to those that saw this half-finished when I published rather than saving a draft ]

This is the part that it took me a long time to understand: how the different parts of the default couchapp collaborate to present data to the user.

In this post I'm just going to deal with client-side couchapps using the default technologies. As explained in the previous post you can use any combination of HTML and javascript in a couchapp, and you can also do some of the work server-side in couchdb. However, I'm going to explain what the couchapp tool gives you when you create a new project, as that is where you are likely to be starting, and once you understand that you can choose where to deviate from that model.

jQuery events

Our first detour is into a little bit of background on jQuery, the excellent javascript library that is heavily used in couchapps.

jQuery allows for events on elements in the DOM. There are standard events, such as "click" and "submit", but you are free to define your own.

These events are given a name, and you can then "trigger" them, and bind handlers to act when they are triggered.

By building up events from low-level ones such as "click" to more complex and app-specific ones such as "item purchased", you can break your code down into smaller chunks, and have different parts of the page react to the same event, such as having the "buy" link disappear from the item that the user just bought, as well as having the total of the shopping cart update.

Events can also have data, or arguments, that travel with them. For instance the "item purchased" event could have the item that was purchased as its data, so that handlers can make use of it when they run.

evently

Now that we know something about jQuery events, we can look at something built on top of them, the "evently" library. This is a layer on top of jQuery that allows you to build up your app from pieces that have a specific function, and communicate through events.

An evently "widget" can be bound to an element (or several elements if you want). The widget is a bunch of event handlers which can do anything you like, but have some conveniences built in for fetching data and updating the page based on the result.

When an event is triggered the handler you defined is run. If it is a simple javascript function then that function is run, and can do anything you like.

{click: function() {
        alert("You clicked!");
    }
}

Often though you want to update the element based on the action. evently has built in support for the "mustache" templating language, and if you specify a template in that syntax it will replace the current HTML of the element that it is attached to with the result of rendering that template.

{click:
    {
        mustache: "<div>You clicked!</div>"
    }
}

Which will put "You clicked!" in to the page instead of in an alert. What if you don't want to replace the current content, and just want to append another line? For that use the "render" option.

{click:
    {
        mustache: "<div>You clicked!</div>",
        render: "append"
    }
}

Which would put another "You clicked!" on the page every time you click. As well as "append" there is also "prepend", or really any jQuery method that you want to call.

Simply rendering a static template isn't going to be very useful though; usually you want something dynamic. For that, use the "data" option. It can just be an object if you want, but that's still not very dynamic either, so it can also be a function that returns an object.

{click:
    {
        data: function(e) {
            return {name: "Bob"};
        },
        mustache: "<div>Hi {{name}}!</div>"
    }
}

The data function gets passed the event object from jQuery (so you can e.g. get the target of the event), and any data for the event too (so it could see what item you just bought).

That's all well and good, but it doesn't help us get data from couchdb in to the page. For that we need the opportunity to make a request to couchdb. We could just fall back to using one function to handle the event, but then we lose the integration with mustache. Therefore there is an "async" key that allows us to make an AJAX request and then use mustache on the result.

{click:
    {
        async: function(callback) {
            /* some code that does an async request, and then calls callback with the result */
        },
        data: function(resp) {
            /* Some code that processes the data from the async function to ready it for the template */
        },
        mustache: "A tempate that will be rendered with the result of the data function"
    }
}

Now, writing an async method to query a couchdb view is so common in couchapps that evently has special support for it. The query key can either be a JSON structure that specifies a view and the arguments to it, or a function that returns such a structure based on things such as the query string in the URL.

There are two further functions that you will find helpful from time to time. The first is "before", which allows you to run some code before the rest of the process starts, and may do something such as trigger another event. The other is its partner "after", which can do much the same things as "before", but can also do things such as modify the HTML that is output.

Lastly there is another thing that can be done with the HTML that is output, specified with the "selectors" key. This allows you to perform an action on particular parts of the HTML. The keys of this structure are jQuery selectors that specify which elements the function will be applied to. For instance you can do something with all the divs in the output, or all the spans with a certain class, or the form with a particular id.

What you can do to those elements is basically unlimited, as you can run arbitrary javascript. However, there is built in support for specifying an evently widget, which will automatically be bound to each element that matches the selector. This nesting is one of the most powerful and useful features of evently, and one you should generally be using often. I will probably talk more about what nested widgets are useful for later.

Special evently events

evently has two special events. The first of these is _init. This event is triggered when the widget is created. This means you can dynamically pre-populate the element, or at least keep the initial state of the element with the rest of your code, rather than putting some in the HTML file and the rest in evently code.

The other special event, _changes, is tied to couchdb, and is triggered whenever the database that the couchapp is in changes. This means that you can have elements on the page that dynamically update whenever the database changes, whether that is through user action, another user doing something, external scripts, or couchdb replication. This makes it very easy to write "live" pages that show updates without refreshes, and is very useful for some applications.

Currently _changes doesn't receive the modified documents, so it is normally just used to make another request to get the updated information, whether that be through async or view. If you wish to get the modified documents in order to update the page directly and reduce requests then you can write some custom code to do this.

Conclusion

As you have seen, evently is just a thin layer on top of jQuery concepts such as events and asynchronous events, with some conveniences for templating and interacting with couchdb.

This combination is well suited to the needs of at least simple and moderately complex couchapps, while still being very powerful, and allowing you to fall back to custom javascript at any point.

You can find more about evently at the couchapp site.

See part one and part two.

Read more

Today I would like to talk about the couchapp tool. This is something that you can use when working on couchapps, and provides a way to quickly iterate your design.

However, rather confusingly, the couchapp tool isn't actually required for couchapps. If you get a design document with HTML attachments in to your database then you have a couchapp.

Why would you want to use such a tool then? Firstly because it will generate a skeleton couchapp for you, so that you don't have to remember how to organise it, and if it is your first couchapp it's good to start from something working.

More importantly though, the couchapp tool is useful as you develop because it allows you to edit the parts of your app in their native format. This means that if you are writing an HTML snippet you can just put some HTML in a file, rather than having to write it as part of a deeply nested JSON structure and deal with the errors that you would make if you did that. It also means that you can use things like syntax highlighting in your preferred text editor without any special magic.

How it works

At its core couchapp is a rather simple tool. It walks a filesystem tree and assembles the things that it finds there into a JSON document.

For instance, if it finds a directory at the root of the tree called _attachments it puts the content of each file in there into a structure that becomes the "_attachments" key of the document, which is one of the ways that couchdb accepts document attachments. Therefore if you have an _attachments/index.html file in your tree it will be attached to your design document when the JSON structure is sent to couchdb.

This continues across the tree, so the contents of the "views" directory will become the "views" key of the document, which is how you do map/reduce queries on the database.

couchapp has various conventions for dealing with files. For instance if it finds a ".js" file it treats it as a javascript snippet which will be encoded into a string in the resulting document. ".html" files outside of the "_attachments" directory will also be encoded as strings. If it finds a ".json" file then it treats it as literal JSON that will be embedded.

This way it builds up the JSON structure that a couchapp expects, and will send it to the couchdb of your choice when it is done.
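
As a rough Python sketch of that assembly step (this is not couchapp's real code, and it ignores content types, nested directories and the other conventions described above):

import json
import os


def build_design_doc(root):
    # Walk a (simplified) couchapp tree and assemble a design document dict.
    # Real couchapp also records content types, handles nesting, .json files
    # and more; this only shows the shape of the idea.
    doc = {"_attachments": {}, "views": {}}
    attachments_dir = os.path.join(root, "_attachments")
    for name in os.listdir(attachments_dir):
        with open(os.path.join(attachments_dir, name)) as f:
            doc["_attachments"][name] = f.read()
    views_dir = os.path.join(root, "views")
    for view in os.listdir(views_dir):
        doc["views"][view] = {}
        for filename in os.listdir(os.path.join(views_dir, view)):
            key = os.path.splitext(filename)[0]  # e.g. map.js -> "map"
            with open(os.path.join(views_dir, view, filename)) as f:
                doc["views"][view][key] = f.read()
    return json.dumps(doc)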

In addition to this functionality, the tool can generate a skeleton app for you, and add new pieces to your app, such as new views.

Getting It

couchapp is a python tool, so you can install it using pip or similar. However, Ubuntu users can install it from a PPA (yay for daily builds with recipes!).

Using It

To use it run

couchapp generate myapp

which will create you a new skeleton in myapp.

cd myapp
ls

You will see for instance the _attachments and views directories, and an _attachments/index.html.

To get your app in to couchdb you can run

couchapp push http://localhost:5984/mydb

and it will tell you the URL to visit to see your new app.

If you want to use desktopcouch you can run

couchapp push desktopcouch://

though I think it has a bug where it prints the wrong URLs when pushing to desktopcouch.

Once you have looked at the HTML generated by your app you should look at the design document that couchapp created. Go to

http://localhost:5984/_utils

or

xdg-open ~/.local/share/desktop-couch/couchdb.html

if you are using desktopcouch.

Click through to the mydb database and you will see a document called _design/myapp. Click on that and you will see the content of the design document; you are looking at a couchapp in its raw form.

If you compare what is in that design document with what is in the myapp directory that the tool created you should start to see how it generates it from the filesystem.

Now try making a change on the filesystem, for instance edit _attachments/index.html and put your name somewhere in the body. Then push again, running

couchapp push http://localhost:5984/mydb

and refresh the page in your browser and you should see the change. (Just click on index.html in the design document to get back to viewing your app from there).

I will go in to more detail about the content of the couchapp that was generated for you in another post.

See the previous installment.

Read more

Couchapps are a particular way of using couchdb that allow you to serve web applications directly from the database. These applications generate HTML and javascript to present data from couchdb to the user, and then update the database and the UI based on their actions.

Of course there are plenty of frameworks out there that do this sort of thing, and more and more of them are adding couchdb support. What makes couchapps particularly interesting are two things. Firstly, the ease with which they can be developed and deployed. As they are served directly from couchdb they require little infrastructure, and the couchapp tool allows for a rapid iteration. In addition, the conveniences that are provided mean that simple things can be done very quickly with little code.

The other thing that makes couchapps attractive is that they live inside the database. The code lives alongside the data, and will travel with it as it is replicated. This means that you can easily have an app that you have fast, local access to on your desktop, while at the same time replicating to a server so that you can access the same data from your phone while you are out. Again, this doesn't require couchapps, and they won't be suitable for all needs, but they are certainly an interesting idea.

You can read more about couchapps at http://couchapp.org.

Intrigued by couchapps I set out to play with them over a weekend. Unfortunately the documentation is rather lacking currently, so I wouldn't recommend experimenting yourself if you are not happy digging around for answers, and sometimes not finding them outside the code. In order to go a little way to rectifying this, I intend to write a few posts about the things I wish I had known when I started out. I found everything to be a little strange at first, and it wasn't even clear where the entry point of a couchapp was for instance. Hopefully these posts will be found using google by others who are struggling in a similar way.

Architecture

Firstly, something about the pieces that make up a couchapp (or at least those that the tool and documentation recommend), and the way that they all fit together.

At the core is the couchdb database itself. It is a collection of "documents", each of which can have attachments. Some of these documents are known as "design documents," and they start with a prefix of "_design." Design documents can have "view" functions, and various other special fields that can be used to query or manipulate other documents.

A couchapp is a design document with an attachment, usually called index.html. These attachments are served directly by couchdb and can be accessed at a known URL. You can put anything you like in that html file, and you could just have a static page if you wanted. Usually, however, it is an HTML page that uses javascript to display the results of queries on the database. The user will then access the attachment on the design document, and will interact with the resulting page.

In theory you can do anything you like in that page, but it is usual to make use of standard tools in order to query the database and provide information and opportunity for interaction to the user.

The first standard tool is jQuery, with a couple of plugins for working with couchdb and couchapps specifically. These allow for querying views in the database and acting on the results, retrieving and updating documents, and plenty more.

In addition the couchapp tool sets you up with another jQuery plugin called "evently", which is a way to structure interactions with jQuery, and change the page based on various events. I will go in to more detail about how evently works in a later post.

In addition to all the client-side tools for interacting with the database, it is also possible to make use of couchdb features such as shows, lists, update handlers and validation functions in order to move some of the processing server-side. This is useful for various reasons, including being more accessible, allowing search engines to index the content, and not having to trust the client not to take malicious actions.

The two approaches can be combined, and you can prototype with the client-side tools, and then move some of the work to the server-side facilities later.

Stay tuned for more on how a simple couchapp generates content based on what is in the db.

Read more

The examples for Django testing point you towards hardcoding a username and password for a user to impersonate in tests, and the API of the test client encourages this too.

However, Django has a nice pluggable authentication system that means you can easily use something such as OpenID instead of passwords.

Putting the passwords in your tests ties you to having the password support enabled, and while you could do this for just the tests, it's completely out of the scope of most tests (I'm not talking about any tests for the actual login process here.)

When I saw this while reviewing code recently I worked with Zygmunt to write a Client subclass that doesn't have this restriction. With this subclass you can just choose a User object and have the client log in as that user, without them having to have a password at all. Doing this decouples your tests from the implementation of the authentication system, and makes them target the code you want to test more precisely.

Here's the code:

from importlib import import_module  # older Django versions: from django.utils.importlib import import_module
from django.conf import settings
from django.contrib.auth import login
from django.http import HttpRequest
from django.test.client import Client


class TestClient(Client):

    def login_user(self, user):
        """
        Login as specified user, does not depend on auth backend (hopefully)

        This is based on Client.login() with a small hack that does not
        require the call to authenticate()
        """
        if 'django.contrib.sessions' not in settings.INSTALLED_APPS:
            raise AssertionError("Unable to login without django.contrib.sessions in INSTALLED_APPS")
        user.backend = "%s.%s" % ("django.contrib.auth.backends",
                                  "ModelBackend")
        engine = import_module(settings.SESSION_ENGINE)

        # Create a fake request to store login details.
        request = HttpRequest()
        if self.session:
            request.session = self.session
        else:
            request.session = engine.SessionStore()
        login(request, user)

        # Set the cookie to represent the session.
        session_cookie = settings.SESSION_COOKIE_NAME
        self.cookies[session_cookie] = request.session.session_key
        cookie_data = {
            'max-age': None,
            'path': '/',
            'domain': settings.SESSION_COOKIE_DOMAIN,
            'secure': settings.SESSION_COOKIE_SECURE or None,
            'expires': None,
        }
        self.cookies[session_cookie].update(cookie_data)

        # Save the session values.
        request.session.save()

Then you can use it in your tests like this:

from django.contrib.auth.models import User


client = TestClient()
user = User(username="eve")
user.save()
client.login_user(user)

Then any requests you make with that client will be authenticated as the user that was created.

I have submitted a ticket to Django to have this available for everyone in future.

Read more

What I work on

I'm keen to try and write more about the things that I work on as part of my job at Canonical. In order to get started I wanted to write a summary of some of the things that I have done, as well as a little about what I am working on now.

Ubuntu Distributed Development

This isn't the catchiest name for a project ever, and has an unfortunate collision with a Debian project that is also shortened to "UDD." However, the aim is for this title to become a thing of the past, and for this just to be the way things are done.

This effort is firstly about getting Ubuntu to use Bazaar, and a suite of associated tools, to get the packaging work done. There are multiple reasons for this.

First, and most simply, is to give developers the power of version control as they are working on Ubuntu packages. This is useful for both the large things and the small. For instance I sometimes appreciate being able to walk through the history of a package, comparing diffs here and files there when debugging a complex problem. Sometimes though it's just being able to "bzr revert" a file, rather than having to unpack the source again somewhere else, extracting the file and copying it over the top.

There are higher purposes with the work too. The goal is to link the packaging with the upstream code at the version control level, so that one flows in to the other. This has practical uses, such as being able to follow changes as they flow upstream and back down again, or better merging of new upstream versions. I believe it has some other benefits too, such as being able to see the packages more clearly as what they are, a branch of upstream. We won't just talk about them being that, but they truly will be.

Some of you will be thinking "that's all well and good, but <project> uses git," and you are absolutely right. Throughout this work we have had two principles in mind, to work with multiple systems outside of Ubuntu, and to provide a consistent interface within Ubuntu.

Due to the way that Ubuntu works an Ubuntu developer could be working on any package next. I would really like it if the basics of working with that package were the same regardless of what it was. We have a lot of work to do on the packaging level to get there, but this project gets this consistency on the version control level.

We can't get everyone outside of Ubuntu to follow us in this though. We have to work with the system that upstream uses, and also to work with Debian in the middle. This means that we have to design systems that can interface between the two, so we rely a lot on Launchpad's bzr code imports. We also want to interface at the other end as well, at "push" time. This means that if an Ubuntu developer produces a patch that they want to send upstream they can do that without having to reach for a possibly different VCS.

Thanks mainly to the work of Jelmer Vernooij we are doing fairly well at being able to produce patches in the format appropriate for the upstream VCS, but we still have a way to go to close the loop. The difficulty here is more around the hundreds of ways that projects like to have patches submitted, whether it is a mailing list or a bug tracker, or some other form. At this stage I'd like to provide the building blocks that developers can put together as appropriate for that project.

Daily package builds

Relatedly, but with slightly different aims, I have been working on a project in conjunction with the Launchpad developers to allow people to have daily builds of their projects as packages.

Currently there is too often a gap between using packaged versions of a project, and running the tip of that project daily. I believe that there are lots of people that would like to follow the development of their favourite projects closely, but either don't feel comfortable building from the VCS, or don't want to go through the hassle.

Packages are of course a great way to distribute pre-compiled software, so it was natural to want to provide builds in this format, but I'm not aware of many projects doing that, aside from those which fta provides builds for. Now that Launchpad provides PPAs and code imports, and the previous project provides imports of the packaging of all Debian and Ubuntu packages in to bzr, all the pieces are there in order to allow you to produce packages of a project automatically every day.

This is currently available in beta in Launchpad, so you can go and try it out, though there are a few known problems that we are working through before it will be as pleasant as we want.

This has the potential to do great things for projects if used correctly. It can increase the number of people testing fresh code and giving feedback by orders of magnitude. Just building the packages acts as a kind of continuous integration, and can provide early warning of problems that will affect the packaging of the project. The builds also provide an easy way for people to test the latest code when a bug is believed to be fixed.

Obviously there are some dangers associated with automatic builds, but if they are used by people who know what they are doing then it can help to close the loop between users and developers.

There are also many more things that can be done with this feature by people with imagination, so I'm excited to see what directions people will take it in.

Fixing things

Aside from these projects, I was also given some time to work on Ubuntu itself, but without long-term projects to ship. That meant that I was able to fix things that were standing in my way, either in the way of the above projects, or just hampering my use of Ubuntu, or fix important bugs in the release.

In addition I took on smaller projects, such as getting kerneloops enabled by default in Ubuntu. While doing this I realised that the user experience of that tool could be improved a lot for Ubuntu users, as well as allowing us to report the problems caught by the tool as bugs in Launchpad if we wished.

I really enjoyed having this flexibility, as it allowed me to learn about many areas of the Ubuntu system, and beyond, and also played to my strengths of being able to quickly dive in to a new codebase and diagnose problems.

I think that in my own small way, each of these helped to improve Ubuntu releases, and in turn the projects that Ubuntu is built from.

Sponsoring

While I'm sorry to say that other demands have pulled my code review time into other projects, I used to spend a lot of time reviewing and sponsoring changes into Ubuntu.

I highlight this mainly as another chance to emphasise how important I think code review is, especially when it is review of code from people new to the project. It improves code quality, but it is also a great opportunity for mentoring, encouraging good habits, and helping new developers join the project. I hope that my efforts in this area had a few of these characteristics and helped increase the number of free software developers. Oh how I wish there were more time to continue doing this.

Linaro

I've now started working on the Linaro project, specifically in the Infrastructure team, working on tools and infrastructure for Linaro developers and beyond. I'm not one to be all talk and no action, so I won't talk too much about what I am working on, but I would like to talk about why it is important.

Firstly I think that Linaro is an important project for Free Software, as it has the opportunity to lead to more devices being sold that are built on or entirely free software, some in areas that have historically been home to players that have not been good open source citizens.

Also, I think tools are an important area to work on, and not just in Linaro. They pervade the development experience, and can be a huge pain to work with. It's important that we have great tools for developing free software so as not to put people off. Developers, volunteer and paid, aren't going to carry on too long with tools that cause them more problems than they are worth, and not all are going to persist because they value Free Software over their own enjoyment of what they do.

Read more

Normally when you write some code using launchpadlib you end up with Launchpad showing your users something like this:

/images/lplib-before.png

This isn't great: how is the user supposed to know which option to click? And what do you do if they don't choose the option you want?

Instead it's possible to limit the choices that the user has to make to only those that your application can use, plus the option to deny all access, by changing the way you create your Launchpad object.

from launchpadlib.launchpad import Launchpad

lp = Launchpad.get_token_and_login("testing", allow_access_levels=["WRITE_PUBLIC"])

This will present your users with something like this:

/images/lplib-after.png

which is easier to understand. There could be further improvements, but they would happen on the Launchpad side.

This approach works for both Launchpad.get_token_and_login and Launchpad.login_with.

The values that you can pass here aren't documented, and should probably be constants in launchpadlib, rather than hardcoded in every application, but for now you can use:

  • READ_PUBLIC
  • READ_PRIVATE
  • WRITE_PUBLIC
  • WRITE_PRIVATE

Read more

Dear Mr Neary, thanks for your thought-provoking post; I think it describes a problem we need to be aware of as Free Software matures.

Firstly though I would like to say that the apparent ageism present in your argument isn't helpful to your point. Your comments appear to diminish the contributions of a whole generation of people. In addition, we shouldn't just be concerned with attracting young people to contribute, the same changes will have likely reduced the chances that people of all ages will get involved.

Aside from that though there is much to discuss. You talk about the changes in Free Software since you got involved, and it mirrors my observations. While these changes may have forced fewer people to learn all the details of how the system works, they have certainly allowed more people to use the software, bringing many different skills to the party with them.

I would contend that the experience today for those looking to do the compilation that you rate as important often parallels the experience of just using the software a few years ago, which you describe. If we can change that experience as much as we have changed the installation and first-use experience then we will empower more people to take part in those activities.

It is instructive then to look at how the changes came about to see if there are any pointers for us. I think there are two causes of the change that are of interest to this discussion.

Firstly, one change has been an increased focus on user experience. Designing and building software that serves the users' needs has made it much more palatable for people, and reduced the investment that people have to make before using it. In the same way I think we should focus on developer experience, making it more pleasant to perform some of the tasks needed to be a hobbyist. Yes, this means hiding some of the complexity to start with, but that doesn't mean that it can't be delved into later. Progressive exposure will help people to learn by not requiring them to master the art before being able to do anything.

Secondly, there has been a push to make informed decisions on behalf of the user when providing them with the initial experience. You no longer get a base system after installation, upon which you are expected to select from the thousands of packages to build your perfect environment. Neither are you led to download multiple CDs that contain the entire contents of a distribution, much of which is installed by default. Instead you are given an environment that is already equipped to do common tasks, where each task is covered by an application that has been selected by experts on your behalf.

We should do something similar with developer tools, making opinionated decisions for the new developer, and allowing them to change things as they learn, similar to the way in which you are still free to choose from the thousands of packages in the distribution repositories. Doing this makes documentation easier to write, allows for knowledge sharing, and reduces the chances of paralysis of choice.

There are obviously difficulties with this, given that the choice of tool that one person makes on a project often dictates or heavily influences the choice other people have to make. If you choose autotools for your project then I can't build it with CMake. Our development tools are important to us as they shape the environment in which we work, so there are strong opinions, but perhaps consistency could become more of a priority. There are also things we can do with libraries, format specifications and wrappers to allow choice while still providing a good experience for the fledgling developer.

Obviously as we are talking about free software the code will always be available, but that isn't enough in my mind. It needs to be easier to go from code to something you can install and remove, allowing you to dig deeper once you have achieved that.

I believe that our effort around things like https://dev.launchpad.net/BuildBranchToArchive will go some way to helping with this.

Read more

The deadline for students to submit their applications to Google for Summer of Code is imminent.

If you were waiting for the last minute to submit, that is now!

If you are a mentor and have the perfect student you have been working with, check with them that they have submitted the application to Google, otherwise you will be stuck.

Next week we'll start to process the huge number of applications that we have for Ubuntu.

Read more

If you don't want to read this article, then just steer clear of python-multiprocessing, threads and glib in the same application. Let me explain why.

There's a rather famous bug in Gwibber in Ubuntu Lucid, where a gwibber-service process will start taking 100% of the CPU time of one of your cores if it can. While looking in to why this bug happened I learnt a lot about how multiprocessing and GLib work, and wanted to record some of this so that others may avoid the bear traps.

Python's multiprocessing module is a nice way to easily run some code in a subprocess, to get around the restriction of the GIL for example. It makes it really easy to run a particular function in a subprocess, which is a step up from what you had to do before it existed. However, when using it you should be aware of how the way it works can interact with the rest of your app, because there are some possible nasties lurking there.

GLib is a set of building blocks for apps, most notably used by GTK+. It provides an object system, a mainloop and lots more besides. What we are most interested here is the mainloop, signals, and thread integration that it provides.

Let's start the explanation by looking at how multiprocessing does its thing. When you start a subprocess using multiprocessing.Process, or something that uses it, it causes a fork(2), which starts a new process with a copy of the program's current memory, with some exceptions. This is really nice for multiprocessing, as you can just run any code from that program in the subprocess and pass the result back without too much difficulty.

The problems occur because there isn't an exec(3) to accompany the fork(2). This is what makes multiprocessing so easy to use, but doesn't insert a clean process boundary between the processes. Most notably for this example, it means the child inherits the file descriptors of the parent (critically even those marked FD_CLOEXEC).
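
To make that concrete, here is a small stand-alone sketch (written for Linux, where multiprocessing uses fork() by default) showing that a child started with multiprocessing can still use a descriptor that the parent marked FD_CLOEXEC, because no exec() ever happens:

import fcntl
import multiprocessing
import os


def child_writes(fd):
    # The fork()ed child still holds this descriptor, despite FD_CLOEXEC in
    # the parent: close-on-exec only takes effect across an exec(), and
    # multiprocessing never execs.
    os.write(fd, b"hello from the child\n")


if __name__ == "__main__":
    read_end, write_end = os.pipe()
    flags = fcntl.fcntl(write_end, fcntl.F_GETFD)
    fcntl.fcntl(write_end, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
    proc = multiprocessing.Process(target=child_writes, args=(write_end,))
    proc.start()
    proc.join()
    print(os.read(read_end, 100))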

The other piece to this puzzle is how the GLib mainloop communicates between threads. It requires some mechanism by which one thread can alert another that something of interest happened. When you tell GLib that you will be using threads in your app by calling g_thread_init (gobject.threads_init() in Python), it creates a pipe that glib uses to alert other threads. It also creates a watcher thread that polls one end of this pipe so that it can act when a thread wishes to pass something on to the mainloop.

The final part of the puzzle is what your app does in a subprocess with multiprocessing. If you purely do something such as number crunching then you won't have any issues. If however you use some glib functions that cause the child to communicate with the mainloop then you will see problems.

As the child inherits the file descriptors of the parent it will use the same pipe for communication. Therefore if a function in the child writes to this pipe it can put the parent into a confused state. What happens in gwibber is that it uses some gnome-keyring functions, and that puts the parent into a state where the watcher thread created by g_thread_init busy-polls on the pipe, taking up as much CPU time as it can get from one core.

In summary, you will see issues if you use python-multiprocessing from a thread and use some glib functions in the children.

There are some ways to fix this, but no silver bullet:

  • Don't use threads, just use multiprocessing. However, you can't communicate with glib signals between subprocesses, and there's no equivalent built in to multiprocessing.
  • Don't use glib functions from the children.
  • Don't use multiprocessing to run the children; instead exec(3) a script that does what you want, though this isn't as flexible or as convenient. (A sketch of this approach follows below.)
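
As a sketch of that last option (the script name and the JSON-over-stdout protocol here are assumptions for illustration, not gwibber's actual interface):

import json
import subprocess

# Instead of multiprocessing.Process(target=do_work, ...), fork and exec a
# separate script. The child then starts from a clean slate and does not
# share glib's wakeup pipe (or any other descriptor) with the parent.
output = subprocess.check_output(["python", "do_work.py", "--account", "identica"])
result = json.loads(output.decode("utf-8"))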

It may be possible to use the support for different GMainContexts for different threads to work around this, but:

  • You can't access this from Python, and
  • I'm not sure that every library you use will correctly implement it, and so you may still get issues.

Note that none of the parties here is doing anything particularly wrong; it's a bad interaction caused by some decisions that are known to cause issues with concurrency. I also think there are issues when using DBus from multiprocessing children, but I haven't thoroughly investigated that. I'm not entirely sure why the multiprocessing child seems to have to be run from a non-main thread in the parent to trigger this, so any insight would be welcome. You can find a small script to reproduce the problem here.

Or, to put it another way, global state bad for concurrency.

Read more

As you've probably heard by now, Ubuntu has been accepted to Google Summer of Code this year. We're currently at the point where we are looking for students to take part and the mentors to pair with them to make the proposal. We have some ideas on the wiki, but there's nothing to stop you coming up with your own if you have a great idea. The only requirement is that you find a mentor to help you with it.

The best way to do this is to write up a proposal on your wiki page on the Ubuntu wiki, and then to email the Ubuntu Summer of Code mailing list about it. You can also ask for possible mentors on IRC and on other Ubuntu mailing lists related to your idea.

I have a couple of ideas on the wiki page, but I am happy to consider ideas from students that fall in my area of expertise.

I spend most of my time working on developer tools and infrastructure. These are things that users of Ubuntu won't see, but are used every day by developers of Ubuntu. Improvements we can make in this area can in turn improve Ubuntu by giving us happier, more productive, developers. It's also an interesting area to work in, as there are usually different constraints to developing user software, as developers have different demands.

If you think that sounds interesting and you have a great idea that falls in to that area, or you like one of my ideas on the wiki page, then get in touch with me. I will be happy to discuss your ideas and help you flesh them out in to a possible proposal, but I won't be able to mentor everyone.

I would consider mentoring any idea that either improved existing tools used by Ubuntu developers (bzr, pbuilder, devscripts, ubuntu-dev-tools, etc.) or created a new one that would make things easier. In the same spirit, anything that makes it easier for someone to get started with Ubuntu development, such as Harvest, helpers for creating packages, etc. could be a possible project. The last category would be infrastructure-type projects such as the idea to automate test-merging-and-building of new upstreams, or similar ideas.

I've also posted previously on my blog about some ideas that I would like to see, which might be a source of inspiration.

If this interests you then you can find out how to contact me on my Launchpad profile.

Read more

As my contribution to Ada Lovelace Day 2010 I would like to mention Emma Jane Hogbin.

Emma is an Ubuntu Member, published author, documentation evangelist, conference organiser, Drupal theming expert, tireless conference presenter, and many more things as well.

Her enthusiasm is infectious, and her passion for solving problems for people is admirable. She is a constant source of inspiration to me, and that continues even as she branches out in to new things.

(Hat tip for the title to the ever excellent Sharrow)

Read more

The Bazaar package importer is a service that we run to allow people to use Bazaar for Ubuntu development by importing any source package uploads in to bzr. It's not something that most Ubuntu developers will interact with directly, but is of increasing importance.

I've spent a lot of time working in the background on this project, and while the details have never been secret, and in fact the code has been available for a while, I'm sure most people don't know what goes on. I wanted to rectify that, and so started with some wiki documentation on the internals. This post is more abstract, talking about the architecture.

While it has a common pattern of requirements, and so those familiar with the architecture of job systems will recognise the solution, the devil is in the details. I therefore present this as a case study of one such system, which can be contrasted with other similar systems as an aid to learning how differing requirements affect the finished product.

The Problem

For the Ubuntu Distributed Development initiative we have a need for a process that imports packages into bzr on an ongoing basis as they are uploaded to Ubuntu. This is so that we can have a smooth transition rather than a flag day where everyone switches. For those that are familiar with them, think of Launchpad's code imports, but with Debian/Ubuntu packages as the source rather than a foreign VCS.

This process is required to watch for uploads to Debian and Ubuntu and trigger a run to import each upload to the bzr branches, pushing the result to LP. It should be fast, though we currently have a publication delay in Ubuntu that means we are used to latencies of an hour, so it doesn't have to be greased lightning to gain acceptance. It is more important that it be reliable, so that the bzr branches can be assumed to be up to date; that is crucial for acceptance.

It should also keep an audit trail of what it thinks is in the branches. As we open up write access to the resulting branches to Ubuntu developers we cannot rely on the content of the branches not being tampered with. I don't expect this will ever be a problem, but by keeping private copies of everything I wanted to ensure that we could at least detect tampering, even if we couldn't know exactly what had happened.

The Building Blocks

The first building block of the solution is the import script for a single package. You can run this at any time and it will figure out what is unimported and import the rest, so you can trigger it as many times as you like without worrying that it will cause problems. The requirement is therefore only to trigger it at least once when there has been an upload since the last time it was run, which is a nicer requirement than "exactly once per upload" or similar.
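
To make that idempotency concrete, here is a minimal sketch of the idea, not the importer's real code; the collaborating functions are hypothetical stand-ins:

def run_import(package, imported_versions, published_versions, import_one):
    # imported_versions and published_versions are callables returning
    # version strings; import_one imports a single version and records it
    # in the audit data.
    done = set(imported_versions(package))
    todo = [v for v in published_versions(package) if v not in done]
    # Import oldest first, so repeated runs converge on the same state and
    # a run with nothing to do is a harmless no-op. Real code would compare
    # Debian versions properly; plain sorted() is just for illustration.
    for version in sorted(todo):
        import_one(package, version)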

However, as it may import to a number of branches (both lucid and karmic-security in the case of a security upload, say), and these must be consistent on Launchpad, only one instance can run at once. There is no way to do atomic operations on sets of branches on Launchpad, so we use locks to ensure that only one process is running per package at any one time. I would like to explore ways to remove this requirement, such as avoiding race conditions by operating on the Launchpad branches in a consistent manner, as this would give more freedom to scale out.
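
As an illustration only, per-package locking could look something like this file-lock sketch; the real importer does not necessarily do it this way:

import fcntl
import os

class PackageLock:
    """Advisory lock so that only one import of a package runs at a time."""

    def __init__(self, lock_dir, package):
        self.path = os.path.join(lock_dir, package + ".lock")
        self._file = None

    def __enter__(self):
        self._file = open(self.path, "w")
        # Raises an error immediately if another process holds the lock.
        fcntl.flock(self._file, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return self

    def __exit__(self, *exc_info):
        fcntl.flock(self._file, fcntl.LOCK_UN)
        self._file.close()

An import run for a package would then be wrapped in a "with PackageLock(...)" block, so a second run for the same package fails fast rather than racing.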

The other part of the system is a driver process. We use separate processes so that any faults in the import script can be caught in the supervisor process, with the errors being logged. The driver process picks a package to import and triggers a run of the script for it. It uses something like the following to do that:

# Record a provisional failure first, so that if the driver itself dies
# mid-import the package is still marked as needing attention.
write_failure(package, "died")
try:
    import_package(package)
except Exception as e:
    # The import failed: overwrite the record with the real reason.
    write_failure(package, str(e))
else:
    # The import succeeded, so clear the failure record again.
    remove_failure(package)

write_failure creates a record, with a reason, that the package failed to import. This provides a list of problems to work through, and also means that we can avoid trying to import a package if we know it has failed. This ensures that previous failures are dealt with properly without giving them a chance to corrupt things later.

Queuing

I said that the driver picks a package and imports it. To do this it simply queries the database for the highest priority job waiting, dispatching the result, or sleeping if there are no waiting jobs. It can actually dispatch multiple jobs in parallel as it uses processes to do the work.
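
In outline, and with hypothetical names standing in for the database query and the import entry point, the driver loop looks something like this sketch:

import multiprocessing
import time

def drive(next_job, import_package, max_workers=4, poll_interval=60):
    workers = []
    while True:
        # Drop references to imports that have finished.
        workers = [w for w in workers if w.is_alive()]
        job = next_job() if len(workers) < max_workers else None
        if job is None:
            # No waiting jobs, or all workers are busy: sleep and poll again.
            time.sleep(poll_interval)
            continue
        # Run each import in its own process so that a fault in the import
        # script cannot take the driver down with it.
        worker = multiprocessing.Process(target=import_package, args=(job,))
        worker.start()
        workers.append(worker)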

The queue is filled by a couple of other processes triggered by cron. This is useful as it means that further threads are not required, and there is less code running in the monitor process, and so less chance that bugs will bring it down.

The first process is one that checks for new uploads since the last check and adds a job for them, see below for the details. The second is one that looks at the current list of failures and retries some of them automatically, if the failure looks like it was likely to be transient, such as a timeout error trying to reach Launchpad. It only retries after a timeout of a couple of hours has elapsed, and also if that package hasn't failed in that same way several times in a row (to protect against e.g. the data that job is sending to LP causing it to crash and so give timeout errors.)
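
The retry check might look roughly like the following; the record fields, markers and thresholds here are assumptions for illustration, not the real schema:

from datetime import datetime, timedelta

RETRY_DELAY = timedelta(hours=2)
MAX_REPEATS = 3
TRANSIENT_MARKERS = ("timed out", "timeout", "502")

def should_retry(reason, last_failed, repeat_count, now=None):
    now = now or datetime.utcnow()
    # Only retry failures that look transient, e.g. a timeout reaching LP.
    if not any(marker in reason for marker in TRANSIENT_MARKERS):
        return False
    # Wait a couple of hours before retrying.
    if now - last_failed < RETRY_DELAY:
        return False
    # Give up after several identical failures in a row, in case the data
    # we send to LP is itself what triggers the error.
    return repeat_count < MAX_REPEATS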

It may be better to use an AMQP broker or a job server such as Gearman for this task, rather than just using the database. However, we don't really need any of the more advanced features that these provide, and we already have some degree of loose coupling, so using fewer moving parts seems sensible.

Reacting to new uploads

I find this to be a rather neat solution, thanks to the Launchpad team. We use the API for this, notably a method on IArchive called getPublishedSources(). The key here is the parameter "created_since_date". We keep track of this and pass it to the API calls to get the uploads since the last time we ran, and then act on those. Once we have processed them all we update the stored date and go around again.

This has some nice properties: it is a poll interface, but it has some things in common with an event-based one. Key in my eyes is that we don't have to have perfect uptime in order to ensure we never miss events.

However, I am not convinced that we will never get a publication that appears later than one that we have dealt with but that reports an earlier time. If this happened we would never see it. The times we use always come from LP, so they don't require synchronised clocks between the machine where this runs and the LP machines, but skew could still happen inside LP. To avoid this I subtract a delta when I send the request, so assuming the skew is never greater than that delta we won't get hit. This does mean that we repeatedly try to import the same things, but that is just a mild inefficiency.
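
Putting those pieces together, the polling step looks roughly like the sketch below. getPublishedSources and created_since_date are the real API; the login details, the delta value, and the load_last_date/save_last_date/queue_job helpers are assumptions made up for the example.

from datetime import timedelta

from launchpadlib.launchpad import Launchpad

SKEW_DELTA = timedelta(minutes=30)  # assumed value, not the real one

def poll_new_uploads():
    lp = Launchpad.login_anonymously("package-import-poller", "production")
    archive = lp.distributions["ubuntu"].main_archive

    last_seen = load_last_date()
    latest = last_seen
    # Ask for everything published since the last run, minus a delta to
    # allow for publications that are reported slightly out of order.
    for pub in archive.getPublishedSources(
            created_since_date=last_seen - SKEW_DELTA):
        queue_job(pub.source_package_name)
        # Track the newest publication date LP reports, so the clock we
        # trust is always Launchpad's rather than the local machine's.
        latest = max(latest, pub.date_created)
    save_last_date(latest)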

Synchronisation

There is a synchronisation point when we push to Launchpad. Before and after this critical period we can blow away what we are doing with no issues. During it, though, doing so would leave the world in an inconsistent state. Therefore I used a protocol to ensure that we guard this section.

As we know, locking ensures that only one process runs at a time per package, meaning that the only way to race is with "yourself." As I said, all the code is written to assume that things can go down at any time; the supervisor catches this, marks the failures, and even guards against itself dying. Therefore, when it picks back up and restarts the jobs that it was processing before dying, it needs to ensure that it wasn't in the critical section.

To do this we use a three-phase commit on the audit data to accompany the push. When we are doing the import we track the additions to the audit data separately from the committed data. Then, if we die before we reach the critical section, we can just drop the new data again, returning to the initial state.

The next phase marks in the database that the critical section has begun. We then start the push back. If we die here we know we were in the critical section and can restart the push. Only once the push has fully completed do we move the new audit data into place.

The next step cleans up the local branches, dying here means we can just carry on with the cleanup. Finally the mark that we are in the critical section is removed, and we are back to the start state, indicating that the last run was clean, and any subsequent run can proceed.
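
In pseudocode the guarded sequence is something like the following; the function names are made up, and only the ordering of the steps reflects the description above:

def publish(package, db):
    stage_new_audit_data(package)          # dying here: just drop it again

    db.mark_in_critical_section(package)   # record that the push has begun
    push_branches_to_launchpad(package)    # dying here: restart the push
    commit_new_audit_data(package)         # only after the push completes

    clean_up_local_branches(package)       # dying here: carry on cleaning up
    db.clear_critical_section(package)     # back to the clean start state

def recover(package, db):
    # Run on restart: anything still marked as being in the critical
    # section has its push redone before normal processing resumes.
    if db.in_critical_section(package):
        push_branches_to_launchpad(package)
        commit_new_audit_data(package)
        clean_up_local_branches(package)
        db.clear_critical_section(package)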

All of this means that if the processes go down for any reason, they will clean up or continue as they restart as normal.

Dealing with Launchpad API issues

The biggest area of operational headaches I have tends to come from using the Launchpad API. Overall the API is great to have, and generally a pleasure to use, but I find that it isn't as robust as I would like. I have spent quite some time trying to deal with that, and I would like to share some tips from my experience. I'm also keen to help diagnose the issues further, if any Launchpad developers would like, so that it can be more robust off the bat.

The first tip is: partition the data. Large datasets combined with fluctuating load may mean that you suddenly hit a timeout error. Some calls allow you to partition the data that you request. For instance, getPublishedSources that I spoke about above allows you to specify a distro_series parameter. Doing

distro.main_archive.getPublishedSources()

is far, far more likely to time out than

for s in distro.series:
    distro.main_archive.getPublishedSources(distro_series=s)

In fact, for Ubuntu, the former is guaranteed to time out; it is a lot of data.

This is more coding, and not the natural way to do it, so it would be great if launchpadlib automatically partitioned and recombined the data.

The second tip is: expect failure. This one should be obvious, but the API doesn't make it clear, unlike something like python-couchdb. It is a webservice, so you will sometimes get HTTP exceptions, such as when LP goes offline for a rollout. I've implemented randomized exponential backoff to help with this, as I tend to get frequent errors that don't apparently correspond to service issues. I very frequently see 502 return codes, on both edge and production, which I believe means that apache can't reach the appservers in time.
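
A generic version of that retry wrapper might look like this; it isn't the importer's actual code, and the retry count and delays are made up, but HTTPError from lazr.restfulclient is what launchpadlib raises for responses such as those 502s:

import random
import time

from lazr.restfulclient.errors import HTTPError

def with_backoff(call, retries=5, base_delay=2.0):
    for attempt in range(retries):
        try:
            return call()
        except HTTPError:
            if attempt == retries - 1:
                raise
            # Sleep for a randomised, exponentially growing interval
            # before trying again.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

Individual calls are then wrapped, for example with_backoff(lambda: archive.getPublishedSources(distro_series=s)).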

Summary

Overall, I think this architecture is good given the synchronisation requirements we have for pushing to LP; without those it could be more loosely coupled.

The amount of day-to-day hand-holding required has reduced as I have learnt about the types of issues that are encountered and changed the code to recognise and act on them.

Read more

Dry Rub Barbeque Trout

Made this up after buying a nice piece of locally caught freshwater trout. I think that it would be even better if you were to hot-smoke it. Apply the rub between two and twelve hours before cooking.

Mix up the following then rub on to the flesh of the fish (enough for four servings):

  • 1 tbsp sea/rock salt.
  • 1 tbsp black peppercorns crushed.
  • 1 tbsp ground cumin.
  • 1 tbsp ground coriander.
  • 2 tsp caraway seed.
  • 2 tsp dried tarragon.
  • 2 tsp dried thyme.
  • 2 tsp chilli powder.
  • Zest of one lemon.

To drizzle on top when cooked, melt some butter in a pan, add the juice of the lemon you used above, a pinch of salt, one crushed clove of garlic, and a handful of chopped coriander. Simmer for a couple of minutes.

Enjoy!

Read more