Canonical Voices

Posts tagged with 'general'

Graham Binns

(Image: a juju bottle)

Benji’s blog post earlier this week gave you all some insight into what the Launchpad Yellow Squad has been doing recently in its attempt to parallelise the Launchpad test suite. One of the side effects of this is that we’ve been making quite a lot of use of Juju, and we thought it’d be nice to actually spell out what we’ve been doing.

The problem

We’re working to parallelise Launchpad’s test suite so that it doesn’t take approximately one epoch to get a branch from being approved for merging to actually landing. A lofty goal, sure, and one that presents some interesting problems from the perspective of building an environment to test our work in. You see, Launchpad’s build infrastructure is a pretty complicated beast. It has come a long way since the days when submitting a branch for merging meant sending an email to our PQM bot, which would then run the test suite and kick the branch out if it failed; nowadays it’s something of a behemoth.

Time for some S&M

We use Buildbot as our continuous integration system. There are two parts to Buildbot: the master and the slave. Broadly put, the slave is the part of Buildbot that is responsible for doing the actual work of compilation and running tests, and the master is responsible for telling the slave when to do things. Each master can be responsible for several slaves. When it became obvious that we were going to need to essentially replicate our existing setup in order to test our parallelisation work, we considered asking Canonical’s system administrators, in our sweetest tones, to give us a box upon which to do our testing work, but we spotted two reasons why this would be problematic:

  1. We didn’t actually know at the outset what the best architecture was for our project.
  2. Asking for a machine without knowing what you actually need is likely to earn you a look so old it could have come from an ammonite, at least if you have sensible sysadmins.

So instead, the obvious solution: use Amazon EC2. After all, that would allow us to play with different architectures without there being any huge cost in terms of physical resources. Moreover, we’d be able to have root access on the instances on which we were testing, which makes debugging such a complicated process so much easier.


There was still a problem: how to actually set up the test instances, given that there are five of us spread across three timezones, that it takes a significant amount of time to set up a machine for Launchpad development, and that we don’t really want to leave EC2 instances running overnight if we don’t have to (because it’s expensive).

The sequence of steps we’d have to take to bring up an instance tends to look something like this:

  1. Launch a new EC2 instance (this happens pretty quickly, thanks, Amazon) – see the sketch after this list
  2. Make sure that everyone’s public SSH keys are usable on that instance
  3. Run our Launchpad setup script(s) (this takes about an hour, usually).
  4. Install buildbot.
  5. Configure buildbot correctly as a master or a slave.
  6. Run buildbot (or buildslave, if this is a slave) and make sure it’s hooked up correctly to the other type of buildbot.
  7. Get some code into buildbot and make it run the test suite.
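
Just to give a flavour, even step 1 on its own takes a little scripting – here’s a minimal sketch using the boto library (boto is just one possible tool, and the AMI id, key pair and security group names are placeholders, not our real setup):

    # Illustrative only: launch a single EC2 instance with boto.
    # The AMI id, key pair and security group names are placeholders.
    import boto.ec2

    conn = boto.ec2.connect_to_region("us-east-1")
    reservation = conn.run_instances(
        "ami-00000000",                      # placeholder Ubuntu AMI
        instance_type="m1.large",
        key_name="yellow-squad",             # placeholder key pair
        security_groups=["launchpad-test"],
    )
    print("Launched instance: %s" % reservation.instances[0].id)

The remaining steps (pushing SSH keys, running the Launchpad setup scripts, installing and configuring buildbot) would each need similar scripting, which is rather the point.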
As you can see, this is pretty long-winded and rather fragile; it’s very easy for us to miss out a step or misconfigure something, get confused and then be left with a broken instance and a bit of a headache. Now, you’d be quite right to argue that we could just write a checklist – or better yet, a shell script – to do a lot of the setup work for us. A good idea, true. But there’s a better way…
To be continued (or some other phrase that doesn’t sound so hammy that it almost goes “oink”)…
(Image used under a Creative Commons license)

Read more
Dan Harrop-Griffiths

Launchpad has a lot of tests, almost 20,000. There are tests that make sure the internals work as expected, that verify the Javascript works in web browsers, and everything in between. In a perfect world those tests would only take seconds to run. In this world they take hours; six hours on our current continuous integration machines, for instance.

These long-running tests severely impact the time it takes to develop and deploy changes to Launchpad. We would like to improve the situation.

Given that the test cases are theoretically independent of one another, the obvious thing to do is to run the tests in parallel on a multi-core machine. Unfortunately many of the tests interact with the environment (databases, memcached, temporary directories, etc.) and conflict if run simultaneously.

Enter LXC

What we need is a way to isolate the test processes from each other. Virtual machines would allow us to do that, but the overhead and heavy-weight setup makes them unappealing. That’s where LXC (Linux Containers) comes in handy. LXC allows the easy creation of “containers” that are isolated from the “host” machine without the performance overhead of VMs.

For example, to create a new container use lxc-create:

lxc-create -n test -t ubuntu

The container can then be started:

lxc-start -n test -d

And we can connect to it via SSH (using the default username and password shown during creation, if applicable):

ssh ubuntu@test

There are many options for customising the containers, including mounting a portion of the host’s file system in the container so sharing files between the two is easy.

Getting Ephemeral

All this is very nice for running isolated, parallel test runs, but setting up and managing eight or more containers (one per core) is off-putting, so we have used (and improved) a new LXC feature: “ephemeral” containers (created with lxc-start-ephemeral).

Ephemeral containers are “clones” of a base container and can have a temporary file system that reflects the contents of the base container but any writes are stored in-memory and are not written to disk. This allows us to install Launchpad on a single base container and then spawn many ephemeral containers, each with their own list of tests to run.

The ephemeral containers can then write to their local file systems without interfering with the others running simultaneously. The containers may also benefit from faster IO because of the file system changes being stored in memory.
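
To make that concrete, here is a minimal sketch (emphatically not our real harness) of how such a fan-out could be driven from Python: one ephemeral clone per core, each handed its own slice of the test suite. The lxc-start-ephemeral flags and the test-runner invocation are assumptions for illustration.

    # Illustrative only: fan out one ephemeral container per core, each
    # running its own shard of the test suite; writes stay in memory and
    # never touch the base container's disk.
    import subprocess

    BASE = "lp-base"   # hypothetical base container with Launchpad installed
    CORES = 8          # one ephemeral clone per core

    procs = []
    for shard in range(CORES):
        cmd = [
            "sudo", "lxc-start-ephemeral", "-o", BASE, "--",
            "./bin/test", "--shard", str(shard), "--shard-count", str(CORES),
        ]
        procs.append(subprocess.Popen(cmd))

    # Wait for every shard; the run passes only if they all pass.
    results = [p.wait() for p in procs]
    print("PASS" if all(code == 0 for code in results) else "FAIL")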


We are still working out the kinks in our approach and wrestling with the occasional LXC bug as well as bugs in the Launchpad test suite itself. Even so, we have already shortened a full test run on an eight-core EC2 instance down to 45 minutes – a substantial improvement over the current six hours.


(Image by Tolka Rova, Creative Commons license)

Read more
Dan Harrop-Griffiths

Why there is always time

Prague Astronomical Clock

One of the main obstacles I come across when putting forward ideas for user testing on a project is time:

“We’re on a very tight deadline, we can’t fit in testing,” “Can we leave it until the next release? There isn’t enough time at the moment.” “We just haven’t added in the time for all that user testing stuff.”

But the good news is – there is always time.

User testing, or usability testing (which is what we should really call it, as we’re testing whether things are usable, not testing the users themselves), can be extremely flexible. At one end of the scale it can be a detailed study of hundreds of painstakingly selected users, conducted in specially constructed labs with hidden screens, video recording devices and microphones, costing thousands of credits, with months to analyse and report the results. At the other end of the scale, it can simply be asking someone you pass in the corridor to look at a quick sketch of a wireframe you’ve made on the back of a napkin.

User testing can be both of these things, and everything in between, and yes, this can depend on time, and of course the other buzzword that sits so closely next to it – money. The thing is, it’s always better to do something, rather than nothing, however tight a deadline is – even if that is just asking a few users to try out a particular feature or function that you’re developing – whether this be with a flat mock-up or a working prototype.

Setting up some basic tests with a handful of users, running them and then writing up the results doesn’t need to take more than a day or two. The results will be pretty simple and, depending on the tests, will more likely be useful as a sense-check than as a source of detailed information on user behaviour or working patterns, but this is still valuable stuff that can make or break a new feature. The results will broadly have one of three outcomes: users just didn’t ‘get it’ and there are big problems to be fixed; there are smaller problems that have slipped everyone’s mind but that the users found fairly quickly; or (rarely, almost never) everything was perfect and the users had a seamless, faultless experience.

After I’ve reached this point in the discussion, I sometimes come across another potential user research blocker…

“But there’s no point in finding this out, we don’t have enough time to change things before our deadline.”

It may be true that there’s no time to redesign a feature based on recommendations from user testing in your current cycle – but it’s better to go into the next phase of a project already knowing at least a bit about what users think. If you’re in the final stage of a project, these kinds of problems can be treated as bugs and ticked off one at a time.

It’s easy to become blinkered with a project, working with the same concepts, terminology and use patterns day after day – it can become hard to think – “if I was looking at all this stuff for the first time, would it make sense?” User testing in its quickest and simplest form aims to answer this question. And that’s something there’s always time for.

Read more
Dan Harrop-Griffiths

Meet Laura Czajkowski

Dan: What’s your role on the Launchpad team?

Laura: I’m the Launchpad Support Specialist, so my job is pretty varied each day. Launchpad is rather larger than I’d ever realised or had experience of using, but it’s great to see so many people use it on a daily basis.

My role is to help people via email or IRC with their queries, point them in the right direction for more information, or help them submit a bug or achieve something. I also look after Launchpad bugs and questions each day, and it’s fascinating to see the variety of questions we get there – it’s a great way to learn, and also to see what interesting projects are on Launchpad and the communities that use it.

Dan: You’ve been working on Launchpad as a community member for a while though yeah?

Laura: I’ve been using it in the Ubuntu community in the past for blueprints, reporting bugs and tracking issues, and the odd time helping out with translations.

Dan: What’s been the biggest challenge in your new role so far?

Laura: Bazaar and PPAs, both of which are bizarre to me at present, but the folks in the Launchpad and Bazaar teams have been really helpful and have made working with them easy.

Dan: Where do you work, and what can you see from your window?

Laura: I live in London and work from home four days a week, so when I look out of the window I see the reflection of the London Eye. The other day of the week I head into Canonical HQ.

Dan: If time/money was not an issue, what would you change about Launchpad?

Laura: Oh, I’d love to make Launchpad translatable, as I know many people who would love to get more involved, and having it translated would help here. I’d also love to get more of the developer community involved in Launchpad and, where Launchpad isn’t doing what they’d like, get them to submit patches and become more involved in the process. It’s open source after all :)

Dan: How did you first start to get involved in the open source community?

Laura: I got involved when I was in college, where I was roped into joining our computer society, Skynet. Soon I became treasurer and event organiser, then eventually president of the society, and got involved in running open source conferences. I’ve never looked back since!

Read more
Curtis Hovey

We are reimagining the nature of privacy in Launchpad. The goal of the disclosure feature is to introduce true private projects, and we are reconciling the contradictory implementations of privacy in bugs and branches.

We are adding a new kind of privacy called “Proprietary” which will work differently than the current forms of privacy.

Proprietary information is not shared between projects. Conversations and client, customer, partner, company, and organisation data are held in confidence. Proprietary information is unlikely to ever be made public.

Many projects currently have private bugs and branches because they contain proprietary information. We expect to change these bugs from generic private to proprietary. We know that private bugs and branches that belong to projects that have only a proprietary license are intended to be proprietary. We will not change bugs that are public, described as security, or are shared with another project.

This point is a subtle change from what I have said and written previously. We are not changing the current forms of privacy. We do not assume that all private things are proprietary. We are adding a new kind of privacy that cannot be shared with other projects, to ensure the information is not disclosed.

Launchpad currently permits projects to have default private bugs and branches. These features exist for proprietary projects. We will change the APIs to clarify this, e.g.:

    project.private_bugs = True  => project.default_proprietary_bugs = True
    project.setBranchVisibilityTeamPolicy(FORBIDDEN) => project.default_proprietary_branches = True

Projects with commercial subscriptions will get the “proprietary” classification. Project contributors will be able to classify their bugs and branches as proprietary. The maintainers will be able to enable default proprietary bugs and branches.
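
For maintainers who script against Launchpad, the end result might eventually look something like this minimal launchpadlib sketch – note that default_proprietary_bugs is the proposed name described above and is not (yet) part of the API:

    # Sketch only: default_proprietary_bugs is the proposed attribute name
    # from this post, not an existing part of the Launchpad API.
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with("proprietary-defaults-example", "production")
    project = lp.projects["my-commercial-project"]   # placeholder project name

    project.default_proprietary_bugs = True   # proposed successor to private_bugs
    project.lp_save()                         # push the change back to Launchpad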

Next part: Launchpad will use policies instead of roles to govern who has access to a kind of privacy.

Read more
Curtis Hovey

We are reimagining the nature of privacy in Launchpad. The goal of the disclosure feature is to introduce true private projects, and we are reconciling the contradictory implementations of privacy in bugs and branches.

We must change the UI to accommodate the new kind of privacy, and we must change some existing terms to avoid confusion.

We currently have two checkboxes, Private and Security, that create four combined states:

  • Public
  • Public Security
  • Private Security
  • Private something else

Most private bugs in Launchpad are private because they contain user data. You might think at first that something that is just private is proprietary. This is not so. Ubuntu took advantage of defects in Launchpad’s conflation of subscription and access to address a kind of privacy we did not plan for. Most private bugs in Launchpad are owned by Ubuntu. They were created by the apport bug reporting process and may contain personal user data. These bugs cannot be made public until they are redacted or purged of user data. We reviewed a sample of private bugs that belong to public projects and discovered that more than 90% were made private because they contained user data. Since project contributors cannot hide or edit bug comments, they chose to make the bug private to protect the user. Well done. Launchpad needs to clarify when something contains user data so that everyone knows it cannot be made public without removing the personal information.

Public and private security bugs represent two states in a workflow. The goal of every security bug is to be resolved, then made public so that users are informed. People who work on these issues do not say “public” and “private”; they say “unembargoed” and “embargoed”.

Also, when I view something that is private, Launchpad needs to tell me why. The red privacy banner shown on Launchpad pages must tell me why something is private. Is it because the page contains user data, proprietary information, or an embargoed security issue? This informs me if the thing could become public.

When I want to change something’s visibility, I expect Launchpad to show me a choice that clearly states my options. Launchpad’s pickers currently show me a term without an explanation, yet Launchpad’s code does contain the term’s definition. Instead of making me search (in vain), the picker must inform me. Given the risks of disclosing personal user data or proprietary information, I think an informative picker is essential. I expect to see something like this when I open the visibility picker for a bug:

Branches require a similar, if not identical, way of describing their kind of information. I am not certain branches contain user data, but if one did, it would be clear that the branch should not be visible to everyone and should not be merged until the user data is removed.

Next post: We are adding a new kind of privacy called “Proprietary” which will work differently than the current forms of privacy.

Read more
Francis J. Lacoste

Faster deployments

Back in September, we announced our first fastdowntime deployment. That was a new way of doing deployments involving DB changes. This meant less downtime for you, the user, but we were also hoping that it would speed up our development by allowing us to deliver changes more often.

How can we evaluate whether we sped up development with this change? The most important metric we look at when making this evaluation is cycle time: the time it takes to go from starting to make a change to having that change live in production. Before fastdowntime, our cycle time was about 10 days, and it is now about 9 days. So alongside the introduction of this new deployment process, we cut one day off the average – a 10% improvement. That’s not bad.

But comparing the cumulative frequency distribution of the cycle time with the old process and the new will give us a better idea of the improvement.

Cycle time chart

On this chart, the gap between the orange (fastdowntime deployment) and blue (original process) lines shows us the improvement. We can see that more changes were completed sooner. For example, under the old process about 60% of the changes were completed in less than 9 days, whereas about 70% were completed in the same time under the new process. It’s interesting to note that for changes that took less than 4 days, or more than 3 weeks, to complete, there is no practical difference between the two distributions. We can explain that by the fact that things that were fast before are still fast, and things that take more than 3 weeks would usually have encountered a deployment point under the old process anyway.

That’s looking at the big picture. Looking at the overall cycle time is what gives us confidence that the process as a whole was improved. For example, the gain in deployment could have been lost by increased development time. But the closer picture is more telling.

Deployment cycle time chart

The cycle time charted in this case is from the time a change is ready to be deployed until it’s actually live. It basically excludes the time to code, review, merge and test the changes. In this case, we can see that 95% of the changes had to wait less than 9 days to go live under the new process, whereas it previously took 19 days to reach the same ratio. That’s an improvement of 10 days – much nicer!

Our next step in improving our cycle time is to parallelise our test suite. This is another major bottleneck in our process. In the best case, it usually takes about half a day from the time a developer submits their branch for merging until it is ready for QA on qastaging. The time in between is spent waiting for, and running, the test suite: it takes our buildbot about 6 hours to validate a set of revisions. We have a project underway to run the tests in parallel, and we hope to reduce the test suite time to under an hour. That would make it possible for a developer to merge and QA a change on the same day! With this we expect to shave another day, maybe two, off the global cycle time.

Unfortunately, there is no easy silver bullet to make a dent in the time it takes to code a change. The only way to be faster there would be to make the Launchpad code base simpler. That’s also under way with the service-oriented architecture project, but that will take some time to complete.

Photo by Martin Heigan. Licence: CC BY NC ND 2.0.

Read more
Diogo Matsubara

How to do Juju – Charming oops-tools

Recently the Launchpad Red Squad and Product Team started working on a new cloud project. As part of that project we’ll be using Juju, a tool that helps you easily deploy services on the cloud.

As an opportunity to learn more about how Juju works, I wrote a charm to deploy oops-tools – an open source Django application that helps visualize and aggregate error reports from Launchpad – on Amazon’s EC2 and Canonical’s private OpenStack cloud.

You might be asking: what’s a charm? Charms are basically instructions on how to deploy a service, and they can be shared and re-used.

Assuming you already have a bootstrapped juju environment, deploying oops-tools using this charm is as easy as:

$ juju deploy --repository=. local:oops-tools
$ juju deploy --repository=. local:postgresql
$ juju add-relation postgresql:db oops-tools
$ juju expose oops-tools

That’s it! With just a few commands, I have an instance of oops-tools up and running in a few minutes.

Under the hood, the oops-tools charm:

  • starts two Ubuntu instances in the chosen cloud provider, one for the webserver and another for the database server
  • downloads the latest trunk version of oops-tools and its dependencies from Launchpad
  • configures oops-tools to run under Apache’s mod_wsgi
  • configures oops-tools to use the database server

There’s still work to do, like adding support for RabbitMQ (oops-tools uses rabbit to provide real-time error reports), but this initial iteration proved useful for learning about Juju and how to write a charm. As it is, the charm can be used by developers who want to hack on oops-tools, and it can easily be changed to deploy oops-tools in a production environment.
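
For the curious, a charm is essentially a metadata file plus a handful of executable hook scripts. Here’s a rough, hypothetical sketch (not the actual oops-tools charm) of what a db-relation-changed hook could look like in Python, reading the PostgreSQL connection details with the relation-get hook tool and writing them to a made-up settings file:

    #!/usr/bin/env python
    # Hypothetical db-relation-changed hook; not the real oops-tools charm.
    # relation-get and open-port are juju hook tools; the settings path,
    # relation keys and file layout are assumptions for illustration.
    import subprocess

    def relation_get(key):
        return subprocess.check_output(["relation-get", key]).strip()

    settings = {
        "HOST": relation_get("host"),
        "NAME": relation_get("database"),
        "USER": relation_get("user"),
        "PASSWORD": relation_get("password"),
    }

    # Write a minimal Django database configuration for oops-tools to pick up.
    with open("/srv/oops-tools/local_settings.py", "w") as config:
        config.write("DATABASES = {'default': {\n")
        config.write("    'ENGINE': 'django.db.backends.postgresql_psycopg2',\n")
        for key, value in sorted(settings.items()):
            config.write("    %r: %r,\n" % (key, value))
        config.write("}}\n")

    # Expose the web server once the database is configured.
    subprocess.check_call(["open-port", "80"])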

If you’d like to give it a try, you can get the charm here:




(“Harry Potter’s The Standard Book of Spells” by Craig Grobler, licensed under a CC BY-NC-ND license)

Read more
Dan Harrop-Griffiths

Fighting fire with fire – Changes to bug heat

(Image: storm trooper candle)

We’re making changes to the way that bug heat is calculated and displayed in Launchpad.  From 6th February, bug heat will no longer age/degrade, and the active flame counter symbol will be replaced by a number, next to a static flame.  Here’s why.

Bug heat ageing is the cause of a wide range of timeouts in Launchpad. Every bug’s heat has to be recalculated and updated every day using a series of complex calculations, and when there are around 1 million bug reports to track, that’s a lot of pressure on the system, consuming a significant chunk of resources. Turning off bug heat ageing is the simplest way to solve this issue.


(Image: the new bug heat display)


The flame counter symbol, although adding some visual flair (and flare), also needs to update every time the bug heat recalculations are made. The continual stream of updates to the bug rows also results in poor search index performance.

We’ll still have a flame symbol, but it’ll be static, with the bug heat number next to it. Although not as visually dynamic, this will make it easier to read off exact bug heat scores at a glance.

Although I’m sure some of us will miss this little Launchpad feature, fewer timeouts is good news for everyone.



(“Happy and safe birthday” by Stéfan, licensed under a CC:BY-NC-SA license)

Read more
Dan Harrop-Griffiths

New feature – Customise your bug listings

Custom Bug Listings

Over the past few months the Launchpad Orange Squad has been working to make it easier to get the information that matters to you from bug listings.

A lot of you have said in the past that you’d like to be able to filter bugs in a way that works best for you. Hopefully this new feature, with its customisable functionality, should help with that goal, filling your screen with more of what you want to see.

(Image: custom bug listings)


You can now sort bugs by criteria such as name, importance, status and age. You can switch on the criteria that you use most and turn off criteria that you don’t use. So if you always like to see bug age, but aren’t interested in heat, you can switch on age and switch off heat, and so on.

(Screenshot: bug listing columns)


We’ve also redesigned how bug listings are displayed – fitting more information into each bug listing, and adding sort options such as bug age, assignee, reporter, and tags.

You can put your results into ascending or descending order without having to reload the page, and you can save your preferred layout so that your settings are there the next time you need to look over your bugs.

User research

This was my first main project since joining the Launchpad team back in November as the new Usability & Communications Specialist. User research has played an important part in how we’ve defined the feature and the decisions the team has made to improve the display, wording and functionality.

A number of you took part in one-to-one interviews, in group sessions at UDS-P and in an online survey. Thanks to everyone involved – what you told us has really helped to make this feature a more user-friendly experience. Some of our user research results (link) are already available online, with more being added soon. We’ll be carrying out some further tests in the weeks ahead, so please get in touch if you’d like to get involved.


Every new feature has teething problems, and custom bug listings is no different. We still have a number of issues that need tweaking, so please bear with us, and file a bug if you spot anything that’s still amiss.

Read more
Aaron Bentley

New approaches to new bug listings

The new bug listings were the first time my squad, the Orange Squad, had a chance to work on some really nice in-depth client-side UI since the squad was formed. Not only were we implementing the feature, we also wanted to lay the groundwork for future features. Here are some of the new things we’ve done.

Synchronized client-side and server-side rendering

Early on, we decided to try out the Mustache template language, because it has both client and server implementations. Although we wanted to make a really responsive client-side UI, we also wanted server-side rendering, so that we’re not broken for web crawlers and those with JavaScript disabled. Being able to use the same template on the server and the client seemed ideal, since it would ensure identical rendering regardless of which environment the rendering was done in.

It’s been a mixed bag. We did accomplish the goal of using a single template across client and server, but there are significant bugs on both sides.

The JavaScript implementation, mustache.js, is slow on Firefox. Rendering 75 rows of data takes a noticeable length of time. If you’re a member of our beta team, you can see what I mean. Go to the bugs page for Launchpad itself in Firefox. Click Next. Now click Previous. This will load the data from cache, but it still takes a visible length of time before the listings are updated (and the Previous link goes grey).

mustache.js also has bugs that cause it to eat newlines and whitespace, but those can be worked around by replacing the affected characters with the appropriate entity references (e.g. replacing “\n” with its numeric character reference).

The Python implementation, Pystache, does not implement scoping correctly. It is supposed to be possible to access outer variables from within a loop. On the client, we use this to control the visibility of fields while looping over rows. On the server, we have to inject these control variables into every row in order for them to be in scope.
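
As a toy illustration of that workaround (the template and field names are invented; this is not Launchpad’s actual code), the control variable has to be copied into each row before Pystache renders the shared template:

    # Toy example of the server-side workaround: Pystache won't see the outer
    # "show_importance" flag from inside the {{#rows}} section, so it is
    # injected into every row before rendering.
    import pystache

    TEMPLATE = ("{{#rows}}"
                "{{#show_importance}}[{{importance}}] {{/show_importance}}"
                "{{title}}\n"
                "{{/rows}}")

    rows = [
        {"title": "Bug one", "importance": "Critical"},
        {"title": "Bug two", "importance": "Low"},
    ]
    field_visibility = {"show_importance": True}

    for row in rows:
        row.update(field_visibility)   # the workaround described above

    print(pystache.render(TEMPLATE, {"rows": rows}))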

We needed a way to load new batches of data. Mustache can use JSON data as its input, and Launchpad’s web pages have long had the ability to provide JSON data to JavaScript, but Brad Crittenden and I recently added support for retrieving the same data without the page, via a special ++model++ URL. This seemed like the perfect fit to me, and it’s turned out pretty well. Using the ++model++ URL rather than the Launchpad web service means the server-side rendering can tightly parallel the client-side rendering: each uses the same data to render the same template. It also means we don’t have to develop a new API, which would probably be too page-specific.

Client-side Feature Flags

While in development, the feature was hidden behind a feature flag. At one point we found we wanted access to feature flags on the client side too, so we’ve now implemented that.

History as Model

We wanted users to be able to use their browser’s Next and Back buttons in a sensible way, especially if they wanted to return to previous settings. We also wanted all our widgets to have a consistent understanding of the page state.

We were able to address both of these desires by using YUI’s History object as a common model object for all widgets.  History provides a key/value mapping, so it can be used as a model.  That mapping gets updated when the user clicks their browser next and back buttons.  And when we update History programmatically, we can update the URL to reflect the page state, so that the user can bookmark the page (or reload it) and get the same results.  Any update to History, whether from the user or from code, causes an event to be fired.

We’re able to boil the page state down to a batch identifier and a list of which fields are currently visible. The actual batches are stored elsewhere, because History isn’t a great place to store large amounts of data.  For one thing, there are limits on the amount of data that can be stored.  For another, the implementation that works with old browsers, HistoryHash, can’t store anything more complex than a string as a value.

All our widgets then listen for events indicating History has changed, and update themselves according to the new values in the model.

Summing up

It’s been an interesting feature to work on, both because of the new techniques we’ve been able to try out, and because we’ve been closely involved with the Product team, trying to bring their designs to life.  We haven’t quite finalized it yet, but I’m going on leave today, so I wanted to let you know what we’ve all been up to.  Happy holidays!

Read more
Francis J. Lacoste


The Launchpad maintenance teams have been working since the beginning of the year on reducing our Critical bug count to zero – without much success so far. The long-term trend keeps the backlog at around 300. And it’s not because we haven’t been fixing these: since the beginning of the year, more than 800 Critical bugs were fixed, but more than 900 were reported :-(

So I investigated the source of all these new critical bugs. I analysed a random sample of 50 newly filed critical bugs to see where and why they were introduced. The full analysis is available as a published Google document.

Here are the most interesting findings from the report:

  • Most of the new bugs (68%) are actually legacy issues lurking in our code base.
  • Performance and spotty test coverage together represent more than 50% of the causes of our new bugs. We should refocus maintenance on tackling performance problems; that’s what is going to give us the most bang for the buck (even though it’s not cheap).
  • As a team, we should increase our awareness of testing techniques and test coverage. Always do TDD, and maybe investigate ATDD to increase the coverage and documentation of the business rules we should be supporting.
  • We also need to pay more attention to how code is deployed: it’s now quite usual for scripts to be interrupted, and for new and old versions of the code to operate in parallel.

Another way of looking at this is that Launchpad represents a very deep mine of technical debt. We don’t know exactly how deep the mine is, but we are going to keep finding existing Critical issues until we hit the bottom. (Or until we finish refactoring enough of Launchpad for better testing and performance – that’s what the SOA project is all about.)

In the meantime, we should pay attention to regressions and fallout (those are the genuinely new criticals) to make sure that we aren’t extending the mine!

Photo by Brian W. Tobin. Licence: CC BY-NC-ND 2.0.

Read more
Dan Harrop-Griffiths

Custom bug listings – have your say

Our custom bug listings beta has been up and running for just over a week now – thanks to everyone in the Launchpad beta testers group who has tried it out, and thank you for all your valuable feedback and comments. If you haven’t tried it yet, you can get access by joining our beta team here:

We want to improve how the default information is displayed to make this tool work better, so we’ve put together a super-quick survey to find out:

- What information about a bug you most want to see in bug listings

- What the default ‘order by’ options should be

- If you’d like to see any other ‘order by’ options.

These three questions should only take a few minutes to complete, but they’ll add real value to our work redesigning how bug listings appear and function. Here’s the link if you’d like to take part

Read more
Raphaël Badin

Edit 2011-11-15 08:18 UTC: The problem is now fixed and we’ve re-enabled the new menu.

Edit 2011-11-11 13:42 UTC: We’ve temporarily disabled the new menu while we fix some unfortunate side effect.

We’ve just deployed a new, simplified version of the branch menu displayed on the right hand side of personal code pages (e.g. personal page for the Launchpad team). It looks like this:

Old menu

New menu

Calculating the number of branches took far too much time for people and teams with a huge number of branches, to the point that they were getting timeouts.

The new design, along with optimisations we’ve made to the database queries, should improve performance for everyone.

Read more
Martin Pool

We’ve just upgraded Launchpad’s builder machines to Bazaar 2.4. Most importantly, this means that recipe builds of very large trees will work reliably, such as the daily builds of the Linaro ARM-optimized gcc. (This was bug 746822 in Launchpad).

We are going to do some further rollouts over the next week to improve the supportability of recipe builds, support building non-native packages, handle multiarch package dependencies, improve the buildd deployment story, and so on.

Read more
Matthew Revell

Welcome to BerliOS projects

It’s sad to read that BerliOS will close in December, after nearly twelve years of serving open source projects. One fewer project hosting site means that the open source world is that bit poorer.

If you’ve been hosting your project on the BerliOS Developer platform and you’re looking for a new home, you’ve got plenty of choice.

We’d love to welcome you to Launchpad and here are a few reasons why you should consider Launchpad:

If you have questions, you’re very welcome to join us in #launchpad on FreeNode and the launchpad-users mailing list.

Read more
Martin Pool

We’ve recently deployed two features that make it easier to find bugs that you’ve previously said affect you:

1: On your personal bugs page, there’s now an “Affecting bugs” listing that shows all these bugs.

2: On a project, distribution or source package bug listing page, there’s now a “Bugs affecting me” filter on the right (for example, bugs affecting you in the Launchpad product).

Counts of the number of affected users already help developers know which bugs are most urgent to fix, both directly and by feeding into Launchpad’s bug heat heuristic. With these changes, the “affects me” feature will also make it easier for you to keep an eye on these bugs, without having to subscribe to all mail from them.


Read more
Martin Pool

If you use gmail, you should now be able to send commands to Launchpad without gpg-signing.

gmail puts a DKIM cryptographic signature on outgoing mail, which proves that the mail was sent via gmail and by the purported user. We verify the signature on Launchpad and treat that mail as trusted, which means, for example, that you can triage bugs or vote on merge proposals over mail. Previously you needed to GPG-sign the mail, which is a bit of a hassle for gmail users.

(DKIM is signed by the sending domain, not by the user, so it doesn’t inherently prove that the purported sender is the actual one. People could intentionally or unintentionally set up a server that allows intra-domain impersonation, and it’s reported to be easy to misconfigure DKIM signers so that this happens. (Consider a simple SMTP server that accepts, signs and forwards everything from 192.168/16 with no authentication.) However, in cases like gmail we can reasonably assume Google don’t allow one user to impersonate another. We can add other trusted domains on request.)

If you have gmail configured to use some other address as your From address it will still work, as long as you verify both your gmail address and your other address.

You can use email commands to interact with both bugs and code merge proposals. For instance, when Launchpad sends you mail about a new bug, you can just reply:

  status confirmed
  importance medium

Thanks for letting us know!

We do this using the pydkim library.
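
As a rough sketch of the kind of check pydkim makes possible (this is not Launchpad’s actual implementation, and the whitelist here is invented), you verify the signature and then only trust mail whose From domain is one you’ve decided to trust:

    # Illustrative only: verify the DKIM signature, then check the sender's
    # domain against a (made-up) whitelist of domains we trust not to allow
    # intra-domain impersonation.
    import dkim
    from email import message_from_string
    from email.utils import parseaddr

    TRUSTED_DOMAINS = {"gmail.com"}   # hypothetical whitelist; more on request

    def sender_is_trusted(raw_message):
        """Return True only for DKIM-verified mail from a trusted domain."""
        if not dkim.verify(raw_message):
            return False
        address = parseaddr(message_from_string(raw_message)["From"])[1]
        domain = address.rsplit("@", 1)[-1].lower()
        return domain in TRUSTED_DOMAINS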

Note that you do need at least one leading space before the commands.

If you hit any bugs, let us know.

Read more
Matthew Revell

Deployment reports are now public

Steve Kowalik writes:

For a while now Launchpad has been using deployment reports that tell us what state our QA is in: which revisions are safe to deploy to production, and which revisions are not safe to deploy because they failed QA.

Some time ago, I started the process to make these reports public, and I’m proud to announce that today, they are!

If you’re waiting to see when your code in Launchpad is likely to go live, take a look at the new public deployment report.

Read more
Francis J. Lacoste

Speeding up development

Today we reached a significant milestone: we completed our first fast down-time deployment. Two obvious reasons for doing this were already mentioned in the announcement and in our technical architect’s post describing the change:

  1. We’ll have less downtime per month (at the cost of more frequent but shorter interruptions).
  2. We’ll be able to deploy fixes and changes involving DB schema more frequently.

But from my perspective, the most important benefit I think we’ll get from this is a speed-up in our rate of development, particularly in terms of completing feature projects. It’s not a secret: our feature squads spend a lot of time completing their projects. There are multiple reasons for this, but in the end they usually fall under two broad categories: the time it takes to actually make the change, and the delays in getting feedback on the change itself.

To help with the first category, you want better and more powerful libraries, better architecture, developer training, and so on. Think of the time difference between developing a database web application in Django versus as a CGI application written in C using only the standard library. Launchpad isn’t using the most modern libraries and toolkits, and we could still make a lot of improvements there. But the costs of making changes in this space are compounded by the problems of the other category.

Once you’ve written the change, you are far from done. There are lots of hoops you still have to jump through before saying “done-done”: you need to make sure the tests pass, and to get your changes reviewed, merged, QAed and then deployed. Finally, you’ll probably want to make sure the change matches users’ expectations, but until it’s in production this is hard to assess reliably. All of these steps take time and introduce delays, some bigger than others. The Launchpad team is always on the lookout for ways to cut these delays, and the new fast down-time deployments cut one of the biggest ones we had.

Cycle time distribution

Since a picture is worth at least a thousand words, have a look at the chart above to get a better idea of what I’m talking about. It shows the distribution of the time it takes to complete a “change”. (What this plots is the cycle time, from coding to deployment, of our Kanban cards, which roughly map to one logical change each.) You’ll see that 50% of our changes are deployed to production in about a week, and the next 45% take between 1 and 5 weeks. Now, our feature projects are composed of many, many of these smaller changes. If those are all relatively small changes, why do they take so long?

One of the big bottlenecks was the batch size of our DB deployments. If a change required a DB schema change, it waited until the next downtime deployment, which happened once a month. In theory, that means that on average a change involving the database would wait two weeks in the queue before deployment. In practice, it was more complex than that, because squad leads would often plan around these windows. A database change might be held off because it was deemed that it couldn’t safely be completed in time for the next downtime deployment, so it might be put on hold in favour of other work and delayed to the deployment after that. It was also frequent for other changes building on the first to be queued up waiting for the next deployment window. Add on top of that that it’s common for the completion of a feature to require several iterations of DB changes based on feedback, and you quickly understand how you can end up working for months on a feature project!

But this major bottleneck is now gone! We’ll be able to land and deploy DB changes reliably within days, giving us much more rapid feedback. I’m looking forward to the change in the cycle time distribution in the coming months. The whole distribution should move toward the left. I’ll write a follow-up in two months to see if this prediction comes true.
Photo by Nathan E Photography. Licence: CC BY 2.0.



Read more