Canonical Voices

What James Westby talks about

The examples for Django testing point you towards hardcoding a username and password for a user to impersonate in tests, and the API of the test client encourages this too.

However, Django has a nice pluggable authentication system that means you can easily use something such as OpenID instead of passwords.

Putting passwords in your tests ties you to having password support enabled, and while you could enable it just for the tests, it's completely outside the scope of most tests (I'm not talking about tests of the actual login process here).

When I saw this while reviewing code recently I worked with Zygmunt to write a Client subclass that didn't have this restriction. With this subclass you can just choose a User object and have the client log in as that user, without the user needing a password at all. Doing this decouples your tests from the implementation of the authentication system, and makes them target the code you want to test more precisely.

Here's the code:

from importlib import import_module  # django.utils.importlib on older Django/Python versions

from django.conf import settings
from django.contrib.auth import login
from django.http import HttpRequest
from django.test.client import Client


class TestClient(Client):

    def login_user(self, user):
        """
        Login as specified user, does not depend on auth backend (hopefully)

        This is based on Client.login() with a small hack that does not
        require the call to authenticate()
        """
        if not 'django.contrib.sessions' in settings.INSTALLED_APPS:
            raise AssertionError("Unable to login without django.contrib.sessions in INSTALLED_APPS")
        user.backend = "%s.%s" % ("django.contrib.auth.backends",
                                  "ModelBackend")
        engine = import_module(settings.SESSION_ENGINE)

        # Create a fake request to store login details.
        request = HttpRequest()
        if self.session:
            request.session = self.session
        else:
            request.session = engine.SessionStore()
        login(request, user)

        # Set the cookie to represent the session.
        session_cookie = settings.SESSION_COOKIE_NAME
        self.cookies[session_cookie] = request.session.session_key
        cookie_data = {
            'max-age': None,
            'path': '/',
            'domain': settings.SESSION_COOKIE_DOMAIN,
            'secure': settings.SESSION_COOKIE_SECURE or None,
            'expires': None,
        }
        self.cookies[session_cookie].update(cookie_data)

        # Save the session values.
        request.session.save()

Then you can use it in your tests like this:

from django.contrib.auth.models import User


client = TestClient()
user = User(username="eve")
user.save()
client.login_user(user)

Then any requests you make with that client will be authenticated as the user that was created.
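
For completeness, here is a minimal sketch of how this might look inside a test case. The import path for TestClient and the "/private/" URL are hypothetical; substitute whatever your project uses.

from django.contrib.auth.models import User
from django.test import TestCase

from myproject.tests.client import TestClient  # hypothetical location of the subclass above


class PrivatePageTest(TestCase):

    def test_authenticated_request(self):
        user = User(username="eve")
        user.save()
        client = TestClient()
        client.login_user(user)
        # The request is made as "eve" without ever setting a password.
        response = client.get("/private/")
        self.assertEqual(response.status_code, 200)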

A ticket has been submitted to Django to have this available for everyone in the future.

Read more

What I work on

I'm keen to try and write more about the things that I work on as part of my job at Canonical. In order to get started I wanted to write a summary of some of the things that I have done, as well as a little about what I am working on now.

Ubuntu Distributed Development

This isn't the catchiest name for a project ever, and it has an unfortunate collision with a Debian project also shortened to "UDD." However, the aim is for this title to become a thing of the past, and for this just to be the way things are done.

This effort is firstly about getting Ubuntu to use Bazaar, and a suite of associated tools, to get the packaging work done. There are multiple reasons for this.

The first, and most simple, is to give developers the power of version control as they work on Ubuntu packages. This is useful for both the large things and the small. For instance, I sometimes appreciate being able to walk through the history of a package, comparing diffs here and files there when debugging a complex problem. Sometimes, though, it's just being able to "bzr revert" a file, rather than having to unpack the source again somewhere else, extract the file and copy it over the top.

There are higher purposes to the work too. The goal is to link the packaging with the upstream code at the version control level, so that one flows into the other. This has practical uses, such as being able to follow changes as they flow upstream and back down again, or better merging of new upstream versions. I believe it has some other benefits too, such as being able to see the packages more clearly as what they are: branches of upstream. We won't just talk about them being that, but they truly will be.

Some of you will be thinking "that's all well and good, but <project> uses git," and you are absolutely right. Throughout this work we have had two principles in mind, to work with multiple systems outside of Ubuntu, and to provide a consistent interface within Ubuntu.

Due to the way that Ubuntu works, an Ubuntu developer could be working on any package next. I would really like it if the basics of working with that package were the same regardless of what it was. We have a lot of work to do at the packaging level to get there, but this project gives us that consistency at the version control level.

We can't get everyone outside of Ubuntu to follow us in this though. We have to work with the system that upstream uses, and also to work with Debian in the middle. This means that we have to design systems that can interface between the two, so we rely a lot on Launchpad's bzr code imports. We also want to interface at the other end as well, at "push" time. This means that if an Ubuntu developer produces a patch that they want to send upstream they can do that without having to reach for a possibly different VCS.

Thanks mainly to the work of Jelmer Vernooij we are doing fairly well at being able to produce patches in the format appropriate for the upstream VCS, but we still have a way to go to close the loop. The difficulty here is more around the hundreds of ways that projects like to have patches submitted, whether it is a mailing list or a bug tracker, or in some other form. At this stage I'd like to provide the building blocks that developers can put together as appropriate for that project.

Daily package builds

Relatedly, but with slightly different aims, I have been working on a project in conjunction with the Launchpad developers to allow people to have daily builds of their projects as packages.

Currently there is too often a gap between using packaged versions of a project, and running the tip of that project daily. I believe that there are lots of people that would like to follow the development of their favourite projects closely, but either don't feel comfortable building from the VCS, or don't want to go through the hassle.

Packages are of course a great way to distribute pre-compiled software, so it was natural to want to provide builds in this format, but I'm not aware of many projects doing that, aside from those which fta provides builds for. Now that Launchpad provides PPAs and code imports, and the previous project provides imports of the packaging of all Debian and Ubuntu packages into bzr, all the pieces are there to allow you to produce packages of a project automatically every day.

This is currently available in beta in Launchpad, so you can go and try it out, though there are a few known problems that we are working through before it will be as pleasant as we want.

This has the potential to do great things for projects if used correctly. It can increase the number of people testing fresh code and giving feedback by orders of magnitude. Building the packages also acts as a kind of continuous integration, and can provide early warning of problems that will affect the packaging of the project. Finally, the builds provide an easy way for people to test the latest code when a bug is believed to be fixed.

Obviously there are some dangers associated with automatic builds, but if they are used by people who know what they are doing then it can help to close the loop between users and developers.

There are also many more things that can be done with this feature by people with imagination, so I'm excited to see what directions people will take it in.

Fixing things

Aside from these projects, I was also given some time to work on Ubuntu itself, but without long-term projects to ship. That meant that I was able to fix things that were standing in my way, either in the way of the above projects or just hampering my use of Ubuntu, or to fix important bugs in the release.

In addition I took on smaller projects, such as getting kerneloops enabled by default in Ubuntu. While doing this I realised that the user experience of that tool could be improved a lot for Ubuntu users, as well as allowing us to report the problems caught by the tool as bugs in Launchpad if we wished.

I really enjoyed having this flexibility, as it allowed me to learn about many areas of the Ubuntu system, and beyond, and also played to my strengths of being able to quickly dive into a new codebase and diagnose problems.

I think that in my own small way, each of these helped to improve Ubuntu releases, and in turn the projects that Ubuntu is built from.

Sponsoring

While I'm sorry to say that other demands have pulled my code review time into other projects, I used to spend a lot of time reviewing and sponsoring changes into Ubuntu.

I highlight this mainly as another chance to emphasise how important I think code review is, especially when it is review of code from people new to the project. It improves code quality, but is also a great opportunity for mentoring, encouraging good habits, and helping new developers join the project. I hope that my efforts in this area had a few of these characteristics and helped increase the number of free software developers. Oh how I wish there were more time to continue doing this.

Linaro

I've now started working on the Linaro project, specifically in the Infrastructure team, working on tools and infrastructure for Linaro developers and beyond. I'm not one to be all talk and no action, so I won't talk too much about what I am working on, but I would like to talk about why it is important.

Firstly, I think that Linaro is an important project for Free Software, as it has the opportunity to lead to more devices being sold that are built on, or entirely from, free software, some in areas that have historically been home to players that have not been good open source citizens.

Also, I think tools are an important area to work on, not just in Linaro. They pervade the development experience, and can be a huge pain to work with. It's important that we have great tools for developing free software so as not to put people off. Developers, volunteer and paid, aren't going to carry on too long with tools that cause them more problems than they are worth, and not all are going to persist because they value Free Software over their own enjoyment of what they do.

Read more

Normally when you write some code using launchpadlib you end up with Launchpad showing your users something like this:

/images/lplib-before.png

This isn't great: how is the user supposed to know which option to click? What do you do if they don't choose the option you want?

Instead it's possible to limit the choices that the user has to make to only those that your application can use, plus the option to deny all access, by changing the way you create your Launchpad object.

from launchpadlib.launchpad import Launchpad

lp = Launchpad.get_token_and_login("testing", allow_access_levels=["WRITE_PUBLIC"])

This will present your users with something like this:

/images/lplib-after.png

which is easier to understand. There could be further improvements, but they would happen on the Launchpad side.

This approach works for both Launchpad.get_token_and_login and Launchpad.login_with.
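
For example, with Launchpad.login_with the call might look like the following sketch; the application name and the "production" service root here are just placeholders.

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with("testing", "production",
                          allow_access_levels=["WRITE_PUBLIC"])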

The values that you can pass here aren't documented, and should probably be constants in launchpadlib, rather than hardcoded in every application, but for now you can use:

  • READ_PUBLIC
  • READ_PRIVATE
  • WRITE_PUBLIC
  • WRITE_PRIVATE

Read more

Dear Mr Neary, thanks for your thought-provoking post; I think it is a problem we need to be aware of as Free Software matures.

Firstly though, I would like to say that the apparent ageism present in your argument isn't helpful to your point. Your comments appear to diminish the contributions of a whole generation of people. In addition, we shouldn't just be concerned with attracting young people to contribute; the same changes will likely have reduced the chances that people of all ages will get involved.

Aside from that though there is much to discuss. You talk about the changes in Free Software since you got involved, and it mirrors my observations. While these changes may have forced fewer people to learn all the details of how the system works, they have certainly allowed more people to use the software, bringing many different skills to the party with them.

I would contend that often the experience for those looking to do the compilation that you rate as important has parallels to the experience of just using the software you present from a few years ago. If we can change that experience as much as we have the installation and first use experience then we will empower more people to take part in those activities.

It is instructive then to look at how the changes came about to see if there are any pointers for us. I think there are two causes of the change that are of interest to this discussion.

Firstly, one change has been an increased focus on user experience. Designing and building software that serves the users' needs has made it much more palatable for people, and reduced the investment that people have to make before using it. In the same way I think we should focus on developer experience, making it more pleasant to perform some of the tasks needed to be a hobbyist. Yes, this means hiding some of the complexity to start with, but that doesn't mean that it can't be delved into later. Progressive exposure will help people to learn by not requiring them to master the art before being able to do anything.

Secondly, there has been a push to make informed decisions on behalf of the user when providing them with the initial experience. You no longer get a base system after installation, upon which you are expected to select from the thousands of packages to build your perfect environment. Neither are you led to download multiple CDs that contain the entire contents of a distribution, much of which is installed by default. Instead you are given an environment that is already equipped to do common tasks, where each task is covered by an application that has been selected by experts on your behalf.

We should do something similar with developer tools, making opinionated decisions for the new developer, and allowing them to change things as they learn, similar to the way in which you are still free to choose from the thousands of packages in the distribution repositories. Doing this makes documentation easier to write, allows for knowledge sharing, and reduces the chances of paralysis of choice.

There are obviously difficulties with this, given that the choice of tool that one person makes on a project often dictates or heavily influences the choice other people have to make. If you choose autotools for your project then I can't build it with CMake. Our development tools are important to us as they shape the environment in which we work, so there are strong opinions, but perhaps consistency could become more of a priority. There are also things we can do with libraries, format specifications and wrappers to allow choice while still providing a good experience for the fledgling developer.

Obviously as we are talking about free software the code will always be available, but that isn't enough in my mind. It needs to be easier to go from code to something you can install and remove, allowing you to dig deeper once you have achieved that.

I believe that our effort around things like https://dev.launchpad.net/BuildBranchToArchive will go some way to helping with this.

Read more

The deadline for students to submit their applications to Google for Summer of Code is imminent.

If you were waiting for the last minute to submit, that is now!

If you are a mentor and have the perfect student you have been working with, check with them that they have submitted their application to Google; otherwise you will be stuck.

Next week we'll start to process the huge number of applications that we have for Ubuntu.

Read more

If you don't want to read this article, then just steer clear of python-multiprocessing, threads and glib in the same application. Let me explain why.

There's a rather famous bug in Gwibber in Ubuntu Lucid, where a gwibber-service process will start taking 100% of the CPU time of one of your cores if it can. While looking into why this bug happened I learnt a lot about how multiprocessing and GLib work, and wanted to record some of this so that others may avoid the bear traps.

Python's multiprocessing module is a nice module that allows you to easily run some code in a subprocess, to get around the restriction of the GIL for example. It makes it really easy to run a particular function in a subprocess, which is a step up from what you had to do before it existed. However, when using it you should be aware of how the way it works can interact with the rest of your app, because there are some possible nasties lurking there.

GLib is a set of building blocks for apps, most notably used by GTK+. It provides an object system, a mainloop and lots more besides. What we are most interested in here is the mainloop, signals, and thread integration that it provides.

Let's start the explanation by looking at how multiprocessing does its thing. When you start a subprocess using multiprocessing.Process, or something that uses it, it causes a fork(2), which starts a new process with a copy of the program's current memory, with some exceptions. This is really nice for multiprocessing, as you can just run any code from that program in the subprocess and pass the result back without too much difficulty.

The problems occur because there isn't an exec(3) to accompany the fork(2). This is what makes multiprocessing so easy to use, but doesn't insert a clean process boundary between the processes. Most notably for this example, it means the child inherits the file descriptors of the parent (critically even those marked FD_CLOEXEC).

The other piece to this puzzle is how the GLib mainloop communicates between threads. It requires some mechanism by which one thread can alert another that something of interest happened. When you tell GLib that you will be using threads in your app by calling g_thread_init (gobject.threads_init() in Python), it creates a pipe that GLib uses to alert other threads. It also creates a watcher thread that polls one end of this pipe so that it can act when a thread wishes to pass something on to the mainloop.

The final part of the puzzle is what your app does in a subprocess with multiprocessing. If you purely do something such as number crunching then you won't have any issues. If, however, you use some glib functions that cause the child to communicate with the mainloop, then you will see problems.

As the child inherits the file descriptors of the parent it will use the same pipe for communication. Therefore if a function in the child writes to this pipe then it can put the parent into a confused state. What happens in gwibber is that it uses some gnome-keyring functions, and that puts the parent into a state where the watcher thread created by g_thread_init busy-polls on the pipe, taking up as much CPU time as it can get from one core.

In summary, you will see issues if you use python-multiprocessing from a thread and use some glib functions in the children.
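
To make the failure mode concrete, here is a minimal sketch (not the actual reproduction script) of the shape of code that can trigger the busy-poll, using the PyGTK-era gobject bindings discussed above:

import multiprocessing
import threading

import gobject

# Tell GLib we will use threads: this creates the wake-up pipe and the
# watcher thread that polls it.
gobject.threads_init()


def child():
    # Anything here that ends up talking to the GLib mainloop (gnome-keyring
    # calls in Gwibber's case) can write to the inherited wake-up pipe and
    # confuse the parent.
    pass


def worker():
    # Fork from a non-main thread; the child inherits the parent's file
    # descriptors, including GLib's wake-up pipe.
    p = multiprocessing.Process(target=child)
    p.start()
    p.join()


threading.Thread(target=worker).start()
gobject.MainLoop().run()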

There are some ways to fix this, but no silver bullet:

  • Don't use threads, just use multiprocessing. However, you can't communicate with glib signals between subprocesses, and there's no equivalent built in to multiprocessing.
  • Don't use glib functions from the children.
  • Don't use multiprocessing to run the children; instead exec(3) a script that does what you want, though this isn't as flexible or as convenient.

It may be possible to use the support for different GMainContexts for different threads to work around this, but:

  • You can't access this from Python, and
  • I'm not sure that every library you use will correctly implement it, and so you may still get issues.

Note that none of the parties here are doing anything particularly wrong, it's a bad interaction caused by some decisions that are known to cause issues with concurrency. I also think there are issues when using DBus from multiprocessing children, but I haven't thoroughly investigated that. I'm not entirely sure why the multiprocessing child seems to have to be run from a non-main thread in the parent to trigger this, any insight would be welcome. You can find a small script to reproduce the problem here.

Or, to put it another way, global state bad for concurrency.

Read more

As you've probably heard by now, Ubuntu has been accepted to Google Summer of Code this year. We're currently at the point where we are looking for students to take part and the mentors to pair with them to make the proposal. We have some ideas on the wiki, but there's nothing to stop you coming up with your own if you have a great idea. The only requirement is that you find a mentor to help you with it.

The best way to do this is to write up a proposal on your wiki page on the Ubuntu wiki, and then to email the Ubuntu Summer of Code mailing list about it. You can also ask for possible mentors on IRC and on other Ubuntu mailing lists related to your idea.

I have a couple of ideas on the wiki page, but I am happy to consider ideas from students that fall in my area of expertise.

I spend most of my time working on developer tools and infrastructure. These are things that users of Ubuntu won't see, but are used every day by developers of Ubuntu. Improvements we can make in this area can in turn improve Ubuntu by giving us happier, more productive, developers. It's also an interesting area to work in, as there are usually different constraints to developing user software, as developers have different demands.

If you think that sounds interesting and you have a great idea that falls into that area, or you like one of my ideas on the wiki page, then get in touch with me. I will be happy to discuss your ideas and help you flesh them out into a possible proposal, but I won't be able to mentor everyone.

I would consider mentoring any idea that either improved existing tools used by Ubuntu developers (bzr, pbuilder, devscripts, ubuntu-dev-tools, etc.) or created a new one that would make things easier. In the same spirit, anything that makes it easier for someone to get started with Ubuntu development, such as Harvest, helpers for creating packages, etc. could be a possible project. The last category would be infrastructure-type projects such as the idea to automate test-merging-and-building of new upstreams, or similar ideas.

I've also previously posted on my blog about some ideas that I would like to see, which might be a source of inspiration.

If this interests you then you can find out how to contact me on my Launchpad profile.

Read more

As my contribution to Ada Lovelace Day 2010 I would like to mention Emma Jane Hogbin.

Emma is an Ubuntu Member, published author, documentation evangelist, conference organiser, Drupal theming expert, tireless conference presenter, and many more things as well.

Her enthusiasm is infectious, and her passion for solving problems for people is admirable. She is a constant source of inspiration to me, and that continues even as she branches out into new things.

(Hat tip for the title to the ever excellent Sharrow)

Read more

The Bazaar package importer is a service that we run to allow people to use Bazaar for Ubuntu development by importing any source package uploads into bzr. It's not something that most Ubuntu developers will interact with directly, but it is of increasing importance.

I've spent a lot of time working in the background on this project, and while the details have never been secret, and in fact the code has been available for a while, I'm sure most people don't know what goes on. I wanted to rectify that, and so started with some wiki documentation on the internals. This post is more abstract, talking about the architecture.

While it has a common pattern of requirements, and so those familiar with the architecture of job systems will recognise the solution, the devil is in the details. I therefore present this as a case study of one such system that can be used to contrast with other similar systems, as an aid to learning how differing requirements affect the finished product.

The Problem

For the Ubuntu Distributed Development initiative we have a need for a process that imports packages into bzr on an ongoing basis as they are uploaded to Ubuntu. This is so that we can have a smooth transition rather than a flag day where everyone switches. For those that are familiar with them, think Launchpad's code imports, but with Debian/Ubuntu packages as the source rather than a foreign VCS.

This process is required to watch for uploads to Debian and Ubuntu and trigger a run to import that upload to the bzr branches, pushing the result to LP. It should be fast, though we currently have a publication delay in Ubuntu that means we are used to latencies of an hour, so it doesn't have to be greased lightning to gain acceptance. It is more important to be reliable, so that the bzr branches can be assumed to be up to date; that is crucial for acceptance.

It should also keep an audit trail of what it thinks is in the branches. As we open up write access to the resulting branches to Ubuntu developers we cannot rely on the content of the branches not being tampered with. I don't expect this will ever be a problem, but by keeping private copies of everything I wanted to ensure that we could at least detect tampering, even if we couldn't know exactly what had happened.

The Building Blocks

The first building block of the solution is the import script for a single package. You can run this at any time and it will figure out what is unimported, and do the import of the rest, so you can trigger it as many times as you like without worrying that it will cause problems. Therefore the requirement is only to trigger it at least once when there has been an upload since the last time it was run, which is a nicer requirement than "exactly once per upload" or similar.

However, as it may import to a number of branches (both lucid and karmic-security in the case of a security upload, say), and these must be consistent on Launchpad, only one instance can run at once. There is no way to do atomic operations on sets of branches on Launchpad, therefore we use locks to ensure that only one process is running per-package at any one time. I would like to explore ways to remove this requirement, such as avoiding race conditions by operating on the Launchpad branches in a consistent manner, as this would give more freedom to scale out.

The other part of the system is a driver process. We use separate processes so that any faults in the import script can be caught in the supervisor process, with the errors being logged. The driver process picks a package to import and triggers a run of the script for it. It uses something like the following to do that:

write_failure(package, "died")
try:
    import_package(package)      # run the single-package import script
except Exception as e:
    write_failure(package, str(e))
else:
    remove_failure(package)

write_failure creates a record that the package failed to import with a reason. This provides a list of problems to work through, and also means that we can avoid trying to import a package if we know it has failed. This ensures that previous failures are dealt with properly without giving them a chance to corrupt things later.

Queuing

I said that the driver picks a package and imports it. To do this it simply queries the database for the highest priority job waiting, dispatching the result, or sleeping if there are no waiting jobs. It can actually dispatch multiple jobs in parallel as it uses processes to do the work.
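
As a sketch, with hypothetical helper names standing in for the real database queries and process management, the driver loop looks something like this:

import time

MAX_PARALLEL = 4     # assumed limit on concurrent imports
POLL_INTERVAL = 60   # seconds to sleep when there is nothing to dispatch


def driver_loop(db):
    running = []
    while True:
        # Drop workers that have finished.
        running = [proc for proc in running if proc.is_alive()]
        while len(running) < MAX_PARALLEL:
            package = db.pop_highest_priority_job()   # hypothetical queue query
            if package is None:
                break
            # Each import runs in its own process so the driver survives crashes.
            running.append(spawn_import_process(package))  # hypothetical
        time.sleep(POLL_INTERVAL)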

The queue is filled by a couple of other processes triggered by cron. This is useful as it means that further threads are not required, and there is less code running in the monitor process, and so less chance that bugs will bring it down.

The first process is one that checks for new uploads since the last check and adds a job for them, see below for the details. The second is one that looks at the current list of failures and retries some of them automatically, if the failure looks like it was likely to be transient, such as a timeout error trying to reach Launchpad. It only retries after a timeout of a couple of hours has elapsed, and also if that package hasn't failed in that same way several times in a row (to protect against e.g. the data that job is sending to LP causing it to crash and so give timeout errors.)

It may be better to use an AMQP broker or a job server such as Gearman for this task, rather than just using the database. However, we don't really need any of the more advanced features that these provide, and we already have some degree of loose coupling, so using fewer moving parts seems sensible.

Reacting to new uploads

I find this to be a rather neat solution, thanks to the Launchpad team. We use the API for this, notably a method on IArchive called getPublishedSources(). The key here is the parameter "created_since_date". We keep track of this and pass it to the API calls to get the uploads since the last time we ran, and then act on those. Once we have processed them all we update the stored date and go around again.

This has some nice properties, it is a poll interface, but has some things in common with an event-based one. Key in my eyes is that we don't have to have perfect uptime in order to ensure we never miss events.

However, I am not convinced that we will never get a publication that appears later than one that we have dealt with, but that reports an earlier time. If this happens we would never see it. The times we use always come from LP, so don't require synchronised clocks between the machine where this runs and the LP machines, but it could still happen inside LP. To avoid this I subtract a delta when I send the request, so assuming the skew would not be greater than that delta we won't get hit. This does mean that you repeatedly try and import the same things, but that is just a mild inefficiency.
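
Roughly, the polling step looks like the sketch below; the bookkeeping helpers and the size of the delta are hypothetical, while the launchpadlib calls are the ones discussed above.

from datetime import timedelta

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with("package-importer", "production")
ubuntu = lp.distributions["ubuntu"]

# Subtract a delta to guard against publications that report an earlier
# timestamp than ones we have already seen.
since = load_last_checked_date() - timedelta(minutes=30)

for spph in ubuntu.main_archive.getPublishedSources(created_since_date=since):
    queue_import(spph.source_package_name)

save_last_checked_date()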

Synchronisation

There is a synchronisation point when we push to Launchpad. Before and after this critical period we can blow away what we are doing with no issues. During it though we will have an inconsistent state of the world if we did that. Therefore I used a protocol to ensure that we guard this section.

As we know, locking ensures that only one process runs at a time, meaning that the only way to race is with "yourself." All the code is written to assume that things can go down at any time; as I said, the supervisor catches this and marks the failures, and even guards against itself dying. Therefore when it picks back up and restarts the jobs that it was processing before dying, it needs to ensure that it wasn't in the critical section.

To do this we use a three-phase commit on the audit data to accompany the push. When we are doing the import we track the additions to the audit data separately from the committed data. Then if we die before we reach the critical section we can just drop it again, returning to the initial state.

The next phase marks in the database that the critical section has begun. We then start the push back. If we die here we know we were in the critical section and can restart the push. Only once the push has fully completed do we move the new audit data in to place.

The next step cleans up the local branches, dying here means we can just carry on with the cleanup. Finally the mark that we are in the critical section is removed, and we are back to the start state, indicating that the last run was clean, and any subsequent run can proceed.

All of this means that if the processes go down for any reason, they will clean up or continue as they restart as normal.
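
As a sketch, with every helper name hypothetical, the protocol is roughly:

def push_with_audit(package, pending_audit_entries):
    # Phase 1: stage the new audit data separately from the committed data.
    # Dying here just drops the staged data, returning us to the initial state.
    stage_audit_data(package, pending_audit_entries)

    # Phase 2: mark the critical section, then push. If we die after this
    # mark, the restart logic knows it must redo the push rather than start over.
    mark_critical_section(package)
    push_branches_to_launchpad(package)

    # Only once the push has fully completed does the audit data move into place.
    commit_audit_data(package)

    # Phase 3: clean up; dying here is harmless, the cleanup just continues
    # on restart. Finally clear the mark to return to the start state.
    clean_local_branches(package)
    clear_critical_section(package)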

Dealing with Launchpad API issues

The biggest area of operational headaches I have tends to come from using the Launchpad API. Overall the API is great to have, and generally a pleasure to use, but I find that it isn't as robust as I would like. I have spent quite some time trying to deal with that, and I would like to share some tips from my experience. I'm also keen to help diagnose the issues further if any Launchpad developers would like so that it can be more robust off the bat.

The first tip is: partition the data. Large datasets combined with fluctuating load may mean that you suddenly hit a timeout error. Some calls allow you to partition the data that you request. For instance, getPublishedSources that I spoke about above allows you to specify a distro_series parameter. Doing

distro.main_archive.getPublishedSources()

is far far more likely to timeout than

for s in distro.series:
    distro.main_archive.getPublishedSources(distro_series=s)

In fact, for Ubuntu, the former is guaranteed to time out; it is a lot of data.

This requires more coding, and is not the natural way to do it, so it would be great if launchpadlib automatically partitioned and recombined the data.

The second tip is: expect failure. This one should be obvious, but the API doesn't make it clear, unlike something like python-couchdb. It is a webservice, so you will sometimes get HTTP exceptions, such as when LP goes offline for a rollout. I've implemented randomized exponential backoff to help with this, as I tend to get frequent errors that don't apparently correspond to service issues. I very frequently see 502 return codes, on both edge and production, which I believe means that apache can't reach the appservers in time.
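
As an illustration, a retry helper along these lines (the exception type, attempt count and delays are my assumptions, not the importer's actual code) can wrap the calls that tend to return 502s:

import random
import time

from lazr.restfulclient.errors import HTTPError  # what launchpadlib raises on HTTP failures


def call_with_backoff(fn, attempts=5, base_delay=2.0):
    """Call fn(), retrying failures with randomized exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except HTTPError:
            if attempt == attempts - 1:
                raise
            # Sleep 2s, 4s, 8s, ... with jitter so concurrent clients don't sync up.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# e.g. call_with_backoff(lambda: distro.main_archive.getPublishedSources(distro_series=s))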

Summary

Overall, I think this architecture is good given the synchronisation requirements we have for pushing to LP; without those it could be more loosely coupled.

The amount of day-to-day hand-holding required has reduced as I have learnt about the types of issues that are encountered and changed the code to recognise and act on them.

Read more

Dry Rub Barbeque Trout

Made this up after buying a nice piece of locally caught freshwater trout. I think that it would be even better if you were to hot-smoke it. Apply the rub between two and twelve hours before cooking.

Mix up the following then rub on to the flesh of the fish (enough for four servings):

  • 1 tbsp sea/rock salt.
  • 1 tbsp black peppercorns crushed.
  • 1 tbsp ground cumin.
  • 1 tbsp ground coriander.
  • 2 tsp caraway seed.
  • 2 tsp dried tarragon.
  • 2 tsp dried thyme.
  • 2 tsp chilli powder.
  • Zest of one lemon.

To drizzle on top when cooked melt some butter in a pan, add the juice of the lemon you used above, a pinch of salt, one crushed clove of garlic, and a handful of chopped coriander. Simmer for a couple of minutes.

Enjoy!

Read more

Project Cambria

David, it's interesting that you posted about that, as it's something I've been toying with for the last couple of years. For the last few months I've been (very) slowly experimenting in my free time with an approach that I think works well, and I think it's time to tell more people about it and to ask for contributions.

Opportunistic programmers are a useful group to cater for here: Debian/Ubuntu development isn't trivial, so we are simplifying something that already exists, which means it will still be powerful, and that is also important. I'm not only interested in improving the experience for the opportunistic programmer though; why should they get all the cool stuff? I'm interested in producing something that I can use for doing Ubuntu development too (though not every last detail).

The project I am talking about has been christened "cambria" and is now available on Launchpad. It's a library that aims to provide great APIs for working with packages throughout the lifecycle, including things like Bazaar, PPAs, local builds, testing, lintian, etc. It should be pleasurable to use and also allow you to build tools on top that are also pleasurable. It should also allow for easy extension in to different GUI toolkits and for command-line tools, though I've only been working with GTK so far.

In addition, there is a gedit plugin that allows you to perform common tasks from within gedit. I chose gedit as it has a pleasant Python API for plugins, isn't so complicated that it takes much learning, and will already be installed on most Ubuntu desktop systems. As I said though, the library allows you to implement this in anything you like (that can use a Python library).

I've put together some mockups that suggest some of the things that I would like to do:

A mockup of an interface for building packages within gedit. There is a button to build the active package, and a box that shows the output of the build.

Build

A mockup of an interface for jumping to work on packages already downloaded in gedit. There is a list of packages that have previously been worked on, and the user can choose any of them to open a dialog showing the contents of that package and pick a file to edit.

Package list

A mockup of an interface for downloading the source of packages within gedit. The main point conveyed is that the user should be asked what they intend to work on (bug fix, merge, etc.) so that the tools can do some of the work for them, and wizards and the like can be used to do the rest.

Download

The RATIONALE file includes some more reasons for the project:

Project cambria is about wrapping the existing tools for Debian/Ubuntu development to allow a more task-based workflow. Depending on the task the developer is doing there may be several things that must be done, but they must currently work each one out individually. We have documentation to help with this, but it's much simpler if your tools can take care of it for you.

Project cambria aims to make Ubuntu development easier to get started with. There are several ways that it will help. Providing a task-based workflow, where you are prompted for the information needed to complete the task and everything else is done automatically or given sensible defaults, helps because you can concentrate on completing the task rather than learning about every possible change you could make and deciding which applies.

Project cambria aims to make Ubuntu development easier for everyone by automating common tasks, and alleviating some of the tool tax that we pay. It won't just be a beginner tool, but will provide tools and APIs that experienced developers can use, or can build upon to build tools that suit them.

Project cambria will help to take people from novice to experienced developer by providing documentation that allows you to learn about the issues related to your current task. This provides an easier way into the documentation than a single large document (but it can still be read that way if you like).

Project cambria will make Ubuntu development more pleasurable by focusing on the user experience. It will aim to pull together disparate interfaces into a single pleasing one. Where it needs to defer to a different interface it should provide the user with an explanation of what they will be seeing, to lessen the jarring effect.

I'm keen for others to contribute; there is some information about this in the project's CONTRIBUTING file. I'm looking for all sorts of contributions from all kinds of people, and I'm keen to help you get started if you aren't confident about the type of contribution you would like to make.

There's a mailing list, as part of the ~cambria team on Launchpad, and an IRC channel if you are interested in discussing it more.

Read more

You may well have heard about it (on this blog especially), but though I spend lots of my time involved with the Ubuntu Distributed Development initiative and talking to people about it, there may be some people who aren't entirely sure what we are doing, or what we are trying to achieve. To try to help with this I have written up an overview of what we are doing.

If this project interests you and you would like to help, or just observe, then you can subscribe to the mailing list. There are lots of fun projects that you could take on: far more is possible, and would be hugely useful to Ubuntu developers, than we can currently work on. If you want to work on something then feel free to talk to me about it and we can see if there is something that would suit you.

Without further ado...

The aim

The TL;DR version:

  1. Version Control rocks.
  2. Distributed version control rocks even more.
  3. Bazaar rocks particularly well.
  4. Let's use Bazaar for Ubuntu.

Or, if you prefer a more verbose version...

Ubuntu is a global project with many people contributing to its development in many ways. In particular, development/packaging involves many people working on packages, and much of this requires more than one person to work on the change that is being made, for example:

  1. Working on the problem together
  2. Sponsoring
  3. Other review

etc.

These things usually require the code to be passed backwards and forwards, and in particular, merged. In addition, we sometimes have to do things like merge the patch attached to a bug with a later version of the Ubuntu package. In fact, Ubuntu is a derivative of Debian, and we expend a huge effort every cycle merging the two.

Distributed version control systems have to be good at merging; it's a fundamental property. We currently do without, but we have tools such as MoM that use version control techniques to help us with some of the merging. We could carry on in this fashion, or we could move to a distributed version control system, make use of its features, and gain a lot of other things in the process.

Tasks such as viewing history, and annotating to find who made a particular change and why, also become much easier than when you have to download and unpack lots of tarballs.
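For example, once a package's history is in bzr, those tasks are a command away; something like the following (the file names are chosen just for illustration), where the first command shows the history of a file and the second shows who last touched each line and in which revision:

bzr log debian/changelog
bzr annotate debian/rules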

This isn't to say that there are no costs to the transition; some tools and processes we currently use don't yet have an obvious analogue in the bzr world. That just means we have to identify those things and put in the work to provide an alternative, or to port them, where it makes sense.

The aim is therefore to help make Ubuntu developers more productive, and to enable us to increase the number of developers, by making use of modern technologies, in particular Bazaar, though several other things are being used to do this as well.

What it isn't

This isn't a project to overhaul all the Ubuntu development tools. While there are many things I would like to fix about some of our tools (see some of the things that Barry had to get his head around in the "First Impressions" thread), that can go ahead without having to be tied to this project. I hope that when we make some common tasks easier, it will focus attention on others that are still overly complex, and encourage people to work on those too.

We are not replacing the entire stack. We are building upon the lower layers, and replacing some of the higher ones. We aim for compatibility where possible, and to avoid breaking existing workflows until it makes sense.

The plan

You can read the original overall specification for this work at

https://wiki.ubuntu.com/DistributedDevelopment/Specification

It is rather dry and lacking in commentary, and also a little out of date as we drill down into each of the phases. Therefore I'll say a little more about the plan here.

The plan is to work from the end of the Ubuntu developers, converting the things that we work most directly with first. This should give the biggest impact. We will then work to pull in other things that improve the system.

This means that we start by making all packages available in bzr, and make it possible to use bzr to do packaging tasks. In addition to this we are working with the LP developers to make it possible for Soyuz to build a source package from the branch, so that you don't have to leave bzr to make a change to a package. This work is underway.
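To give a flavour of what that looks like, a simple change might go something like this (the package name, bug number, and commit message are placeholders, and the last step assumes the bzr-builddeb plugin is installed):

bzr branch lp:ubuntu/hello
cd hello
# edit the packaging or the code, then record the change
bzr commit -m "Fix the frobnication bug (LP: #123456)"
# build a package from the branch using bzr-builddeb
bzr builddeb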

After that we make all of Debian available in bzr in the same way. This allows us to merge from Debian directly in bzr. At a first cut, this just allows us to replace MoM, but in fact allows for more than that. Have a conflict? You have much more information available as to why the changes were made, which should help when deciding what to do.

The next step after that is to also bring the Vcs-* branches into the history. These are the branches used by the Debian maintainer, and so allow you to work directly with the Debian maintainer without switching out of the system that you have learnt.

In a similar way we then want to pull in the upstream branches themselves. Again, this will allow you to work closely with upstream, without having to step out of the normal workflow you know.

The last point deserves some more explanation. The idea is that you will be able to grab a package as you normally do, work on a patch, and then when you are happy run a command or three that does something like the following:

  • Merge your change into the tip of upstream, allowing you to resolve any conflicts.
  • Provide a cover letter for the change (seeded with the changelog entry and/or commit messages).
  • Send the change off to upstream in their preferred format and location (LP merge proposal, patch in the bug tracker, mailing list, etc.).

As you can imagine, there are a fair number of prerequisites that we need to complete before we can get to that stage, but I think of that as the goal. This will smooth some of the difficulties that arise in packaging from having to deal with a variety of upstreams: finding the upstream VCS, working out their preferred form and location for submission, rebasing your change on their tip, and so on. I hope this will make Ubuntu developers more efficient, make forwarding changes easier to do and do well, and save new contributors from having to learn too many things at once.

Where we are now

We currently have all of Ubuntu imported (give or take), you can

bzr branch lp:ubuntu/<source package name>

which is great in itself for many people.

We also have all of Debian imported, and similarly available with

bzr branch lp:debian/<source package name>

which naturally allows

bzr merge lp:debian/<source package name>

so you can make use of that right now.
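To make that concrete, a merge from Debian might look something like this (again, the package name is just an example), starting from an existing branch of the Ubuntu package:

cd hello
bzr merge lp:debian/hello
# resolve any conflicts and update debian/changelog, then
bzr commit -m "Merge from Debian unstable."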

We are also currently looking at the sponsorship process around bzr branches, and once we have that cracked it will be much easier for upstream developers who know bzr to submit a bugfix, and that's a large constituency.

In addition, this means that a new contributor can start without having to learn debdiff etc., and we can pass code around without having to merge two diffs and the like.

This is great in itself, but we are still some way from the final goal.

We are currently working on the Vcs-* branches, to make them mergeable, but there are a number of prerequisites.

In addition the Launchpad team are also working on making it possible to build from a branch.

Where we can go

As I said, building on top of bzr makes a number of things easier.

For instance, once LP can build from branches, we could have a MoM-a-like that very cheaply tries to merge from Debian every time there is an upload there and, if it succeeds, builds the package. This could then tell you not only whether there were any conflicts in the merge, but whether there were any build failures, even before you download the code.

In addition, we are currently talking a lot about Daily Builds: building the latest code every day (or every commit, week, whatever). There are a number of things this brings. It doesn't strictly require version control, but as it's basically a merging problem, having everything in Bazaar makes it much easier to do. We now have a system built on "recipes" that we are working to add to LP.
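For the curious, a recipe is just a short text file that names a base branch and says how to combine other branches with it. A hypothetical one might look like the following; the branch names are invented, and the format version and version string are only illustrative, so check the bzr-builder documentation for what your installation expects:

# bzr-builder format 0.2 deb-version 1.0+{revno}
lp:myproject
merge packaging lp:~myteam/myproject/packaging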

Parts of the work

There are a number of parts to the work, and you will see these and others being discussed on the list:

  • bzr (obviously), which we sometimes need to change to make this work possible, whether with bug fixes or new features.
  • bzr-builddeb, which is a bzr plugin that knows how to go from branch to package and vice-versa.
  • bzr-builder, the bzr plugin that implements "recipes."
  • Launchpad, which hosts the branches, provides the merge proposals, and will allow building from branches and daily builds.
  • The bzr importer, the process that mirrors the Ubuntu and Debian archives into bzr and pushes the branches to LP.

and probably others that I have forgotten right now.

Read more

Commit access is no more

Many projects that I work on, or follow the development of (and granted, there may be a large selection bias here), are showing some of the same tendencies. Combined, these indicate to me that we need to change the way we look at becoming a trusted member of a project.

The obvious change here is the move to distributed version control. I'm obviously a fan of this change, and for many reasons. One of those is the democratisation of the tools. There is no longer a special set of people that gets to use the best tools, with everyone else having to make do. Now you get to use the same tools whether you were the founder of the project, or someone working on your first change. That's extremely beneficial as it means that we don't partition our efforts to improve the tools we use. It also means that new contributors have an easier time getting started, as they get to use better tools. These two influences combine as well: a long time contributor can describe how they achieve something, and the new contributor can directly apply it, as they use the same tools.

This change does mean that getting "commit access" isn't about getting the ability to commit anymore; everyone can commit at any time to their own branch. Some projects, e.g. Bazaar, don't even hand out "commit access" in the literal sense: the project's blessed code is handled by a robot, and you just get the ability to have the robot merge a branch.

While it is true that getting "commit access" was never really about the tools (it was, and is, about being trusted to shepherd the shared code), a lot of projects still don't treat it that way. Once a developer gets "commit access" they just start committing every half-cooked patch they have to trunk. The full use of distributed version control, with many branches, just emphasises the shared-code aspect. Anyone is free to create a branch with their half-baked idea and see if anyone else likes it. The "blessed" branch is just that: one that the project as a whole decides they will collaborate on.

This leads to my second change, code review. This is something that I also deeply believe in; it is vital to quality, and a point at which open source software becomes a massive advantage, so something we should exploit to the full. I see it used increasingly in many projects, and many moving up jml's code review "ladder" towards pre-merge review of every change. There seems to be increasing acceptance that code review is valuable, or at least that it is something a good project does.

Depending on the project the relationship of code review and "commit access" can vary, but at the least, someone with "commit access" can make their code review count. Some projects will not allow even those with "commit access" to act unilaterally, requiring multiple reviews, and some may even relegate the concept, working off votes from whoever is interested in the change.

At the very least, most projects will have code review when a new contributor wishes to make a change. This typically means that when you are granted "commit access" you are able or expected to review other people's code, even though you may never have done so before. Some projects also require every contribution to be reviewed, meaning that "commit access" doesn't grant you the ability to do as you wish, it instead just puts the onus on you to review the code of others as well as write your own.

As code review becomes more prevalent we need to re-examine what we see as "commit access," and how people show that they are ready for it. It may be that the concept becomes "trusted reviewer" or similar, but at the least code review will be a large part of it. Therefore I feel that we shouldn't just be looking at a person's code contributions, but also at their code review contributions. Code review is a skill: some people are very good at it, and some are very, very bad at it. You can improve with practice and teaching, and you can set bad examples for others if you are not careful. We will have to make sure that review runs through the blood of a project: everyone reviews the code of everyone else, and the reviews are reviewed.

The final change that I see as related is that of empowering non-code contributors. More and more projects are valuing these contributors, and one important part of doing that is trusting them with responsibility. It may be that sometimes trusting them means giving them "commit access", if they are working on improving the inline help for instance. Yes, it may be that distributed version control and code review mean that they do not have to do this, but those arguments could be made for code contributors too.

This leads me to another, and perhaps the most important, aspect of the "commit access" idea: trust. The fundamental, though sometimes unspoken, measure we use to gauge if someone should get "commit access" is whether we believe them to be trustworthy. Do we trust them to introduce code without review? Do we trust them to review other people's changes? Do we trust them to change only those areas they are experts in, or to speak to someone else if they are not? This is the rule we should be applying when making this decision, and we should be sure to be aware that this is what we are doing. There will often be other considerations as well, but this decision will always factor.

These ideas are not new, and the influences described here did not create them. However the confluence of them, and the changes that will likely happen in our projects over the next few years, mean that we must be sure to confront them. We must discard the "commit access" idea as many projects have seen it, and come up with new responsibilities that better reflect the tasks people are doing, the new ways projects operate, and that reward the interactions that make our projects stronger.

Read more

One of the new things that is going to be in karmic is that the kerneloops daemon will be installed and running by default. This tool, created by Arjan van de Ven, watches the kernel logs for problems. It has a companion service, kerneloops.org, which aggregates reports of these problems and can sort them by kernel version and the like. This allows kernel developers to spot the most commonly encountered problems, areas of the code which are prone to bugs, etc. When the kerneloops daemon catches a problem it allows you to send the report to kerneloops.org.

We, however, are not using the applet that comes with kerneloops to do this; we are making use of the brilliant Apport. There are a couple of reasons for this: we also want to make it easy for you to report these issues as bugs to Launchpad, and we don't want to prompt you with two different interfaces for doing that.

The changes mean that if your machine has a kernel issue you will get an apport prompt as usual. As well as asking if you would like to report the problem to Launchpad, as it does for other crashes, it will ask if you would also like to report it to kerneloops.org. Passing the information through apport means that it can also be used on servers, without running X.

Hopefully you will never see this improvement, but it's now going to be there for when those bugs do creep in.

Read more

I've just implemented the most requested feature in bzr-builder (hi Ted): command support.

Sometimes you need to run a particular command to prepare a branch of your project for packaging (e.g. autoreconf). I think this should generally go in your build target, but not everyone agrees, and sometimes there is just no other way.

Therefore I added a new instruction to bzr-builder recipes, "run". If you put

run some command here

in your recipe then it will run "some command here" at that point when assembling.
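For example, to have autoreconf run while the recipe is assembled, a hypothetical recipe might look like this (the branch names, format version, and version string are all placeholders):

# bzr-builder format 0.2 deb-version 1.0+{revno}
lp:myproject
merge packaging lp:~myteam/myproject/packaging
run autoreconf -fi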

Note that running commands that require arbitrary network access is still to be discouraged, as you don't know in what environment someone may assemble the recipe. I'd also advise against using commands unless you really need them, but that's obviously your call.

Read more