Canonical Voices

What Björn Tillenius talks about

The short version is that if you want to enable middle-click scrolling for Lenovo clickpads in Ubuntu, do this in a terminal:

sudo add-apt-repository ppa:bjornt/evdev
sudo apt-get update
sudo apt-get dist-upgrade

The commands above should upgrade the xserver-xorg-input-evdev package, as well as remove the xserver-xorg-input-synaptics and xserver-xorg-input-all packages.

Next you need to create a file at /usr/share/X11/xorg.conf.d/90-clickpad.conf with the following contents:

Section "InputClass"
    Identifier "Clickpad"
    MatchIsTouchpad "on"
    MatchDevicePath "/dev/input/event*"
    Driver "evdev"
    # Synaptics options come here.
    Option "Clickpad" "true"
    option "EmulatedMidButtonTime" "0"
    Option "SoftButtonAreas" "65% 0 0 40% 42% 65% 0 40%"
    Option "AreaBottomEdge" "0%"
EndSection

Section "InputClass"
    Identifier   "TrackPoint"
    MatchProduct "TrackPoint"
    MatchDriver  "evdev"
    Option       "EmulateWheel"       "1"
    Option       "EmulateWheelButton" "2"
    Option       "XAxisMapping"       "6 7"
EndSection

The interesting options are SoftButtonAreas and AreaBottomEdge. SoftButtonAreas specifies where the buttons should be. If you want the buttons at the top, it should generally be in the form "R 0 0 H L R 0 H", where R is the border between the middle and right buttons, H is the height of the buttons, and L is the border between the left and middle buttons. In the example above, R is 65%, H is 40%, and L is 42%.

AreaBottomEdge turns off the touchpad, except for clicking. If you want to keep using the touchpad, you can instead specify AreaTopEdge, with the same value you use for H. That would enable the touchpad below the buttons.
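For example, with the values used above, keeping the touchpad active below the buttons would mean swapping the AreaBottomEdge line for an AreaTopEdge line with the same 40% value as H (a minimal variation of the config above):

    Option "SoftButtonAreas" "65% 0 0 40% 42% 65% 0 40%"
    Option "AreaTopEdge" "40%"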

Unfortunately, you can't specify where the left button should be; instead it occupies everything that isn't the middle or right button. This is a bit annoying, since I at least tend to touch the touchpad with my palm when reaching for the middle button, which results in a left click being registered instead of a middle click.

I created this package because Ubuntu doesn't quite support the clickpads that come in the newer Lenovo laptops. Ubuntu does support clickpads, and with the SoftButtonAreas config settings it's possible to have three soft buttons on the clickpad where the real buttons used to be. However, what's not supported out of the box is middle-click scrolling, where you hold the middle button and scroll with the trackpoint.

The main problem is that the clickpad is driven by synaptics and the trackpoint by evdev, and they can't communicate to generate the scroll events. Bae Taegil patched the evdev driver to basically include the synaptics driver. I've taken that patch and generated a package for Ubuntu 14.04. I've only added a package for Trusty, but I could add packages for other releases if needed. I will most likely add one for Utopic, when it becomes more stable.


In short, if you want to use web.go in Google App Engine's Go runtime environment, check out my google-app-engine branch of web.go. With that branch you can start using web.go like this:

package webgoexample

import (
    "http"
    "log"
    "os"
    "web"
)

var server *web.Server

func init() {
    server = &web.Server{
        Config: web.Config,
        Logger: log.New(os.Stdout, "", log.Ldate|log.Ltime)}
    server.Get("/", func(ctx *web.Context) {
        ctx.Write([]uint8("Hello from web.go!"))
    })

    // Send all requests to web.go.
    http.HandleFunc("/", handler)
}

func handler(writer http.ResponseWriter, request *http.Request) {
    server.ServeHTTP(writer, request)
}

BTW, if you simply put the web.go branch in your root dir, you have to remove the examples/ directory, otherwise App Engine won't be able to compile your project.

The branch in question has quite minimal changes to web.go, but I haven't proposed merging it to trunk yet, since it removes some functionality. First of all, I changed it not to set up /debug/ paths, so that I could remove the use of http/pprof, which isn't available on App Engine. After that I also had to remove the use of net.ResolveTCPAddr, which isn't available on App Engine either. I basically replaced it with net.SplitHostPort, which I suspect is good enough. It doesn't resolve host names and port names, but I'd be surprised if the request's RemoteAddr weren't an IP address and a port number.
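To illustrate that substitution (a toy example, not web.go's actual code; the address literal is just a stand-in for a RemoteAddr value):

package main

import (
    "fmt"
    "net"
)

func main() {
    // RemoteAddr is normally of the form "ip:port", so splitting
    // the string is enough; no name resolution is needed.
    host, port, err := net.SplitHostPort("127.0.0.1:8080")
    if err != nil {
        fmt.Println("malformed address:", err)
        return
    }
    fmt.Println(host, port)
}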


formataddr() and unicode

I often see code like this:

message["To"] = formataddr((name, email))

This looks like it should work, especially since the docstring of formataddr() says that it will return a string value suitable for a To or Cc header. However, while it works most of the time, it fails if name is a unicode string containing non-ASCII characters. It may look OK if you simply read message["To"], but as soon as you convert the message or header to a byte string, you will see the problem.

>>> from email.Message import Message
>>> from email.Utils import formataddr
>>> msg = Message()
>>> msg["To"] = formataddr((u"Björn", "bjorn@tillenius.me"))
>>> msg["To"]
u'Bj\xf6rn <bjorn@tillenius.me>'
>>> msg.as_string()
'To: =?utf-8?b?QmrDtnJuIDxiam9ybkB0aWxsZW5pdXMubWU+?=\n\n'

Most code that will use the To address in the example will fail, since there's no visible e-mail address in there. The header should look like this, i.e. only the name itself should be encoded:

To: =?utf-8?b?QmrDtnJu?= <bjorn@tillenius.me>

I wish Python would handle this better. I usually end up writing a helper function like this for projects I work on:

from email.Header import Header
from email.Utils import formataddr

def format_address(name, email):
    email = str(email)
    if not name:
        return email
    # Encode only the display name; the address itself must stay
    # plain ASCII so that it remains visible in the header.
    name = str(Header(name))
    return formataddr((name, email))
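Used in the session above, it produces a header where only the name is encoded, leaving the address visible:

>>> msg = Message()
>>> msg["To"] = format_address(u"Björn", "bjorn@tillenius.me")
>>> msg.as_string()
'To: =?utf-8?b?QmrDtnJu?= <bjorn@tillenius.me>\n\n'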


To start with, I think drive-by fixes are great. If you see that something is wrong when fixing something else, it can be a good idea to fix it right away, since otherwise you probably won't do it.

However, even when doing drive-by fixes, I still think that each landed branch should focus on one thing only. As soon as you start to group unrelated things together, you make more work for others. It might be easier for you, but think about all the people that are going to look at your changes. Please, don't be lazy! It doesn't take much work to extract the drive-by fix into a separate branch, and most importantly to land it separately. If you do find that it's too time-consuming to do this, let's talk, and see what is taking time. There should be something we can do to make it easier.

There's no such thing as a risk-free drive-by fix. There's always the potential of something going wrong (even if the application is well-tested). When something goes wrong, someone needs to go back and look at what was done. Now, if you land the drive-by fix together with unrelated (or even related) changes, you basically hide it. By reducing your workload slightly, you create much more work for someone else.

For example, on Friday I saw that we had some problems with scripts in Launchpad. They were trying to write to a mail directory, to which they didn't have access. That was odd, since scripts have always talked to the SMTP servers directly, and didn't use the queued mailer that needed write access to that directory. Looking through the recent commit logs didn't reveal anything. Luckily enough, William Grant pointed out that r9205 of db-devel contained a change to sendmail.py, which probably was the cause of the problems. This turned out to be correct, but it was still pretty much impossible to see why that change was made. I decided that the best thing to do was to revert the change, but I wasn't sure exactly what to revert. The diff of that revision is more than 4000 lines, and more than 70 files were changed. So how could I know which other files were changed to accommodate the change in sendmail.py? I tried looking at the commit logs, but that didn't reveal much. The only thing I could do was to revert the change in sendmail.py and send it off to ec2, waiting three hours to see if anything broke. So, I plead again: if you do drive-by fixes (and you should), please spend a few extra minutes to extract the fix into a separate branch, and land it separately! Is there maybe anything we can do to make this easier?


I've been working on a new release/merge workflow for Launchpad. I've written it from the developers' point of view, but I'd love some comments from users of launchpad.net, so let me try to explain how you, as users, would be affected by this change.

The proposal is that we would decouple our feature development cycles from our release cycles. We would more or less get rid of our releases, and push features to our users when they are ready to be used. Every feature would first go to edge.launchpad.net, and when it's considered good enough, it would get pushed to launchpad.net for everyone to use. Bug fixes would also go to edge.launchpad.net first, and be pushed to launchpad.net when they are confirmed to work. Sadly, Launchpad will still go down once a month for DB updates and other regular maintenance; the amount and frequency of downtime would stay the same as before.

There are users who are in our beta team and use edge.launchpad.net all the time, and users who want a more stable Launchpad and use launchpad.net.

Users of launchpad.net

Those who aren't in the beta team would get bug fixes sooner than with the current workflow. Instead of having to wait for the end of the release cycle, they would get a fix as soon as it has been confirmed to work on edge.launchpad.net. The same is true for features, kind of. These users would have to wait a bit longer than today, since today we push even unfinished features to launchpad.net users at the end of the release cycle. With the new workflow, these users would have to wait for the feature to be considered complete, but in return they should get a better experience when seeing the feature for the first time.

One potential source of problems is that even though fixes and features get tested on edge.launchpad.net before going to launchpad.net, each update has the potential to introduce some other issue. For example, fixing a layout bug on one page might make another page look different. With the current workflow this can happen only once a month, while with the new workflow it could happen a few times every month. That said, even today we update launchpad.net multiple times every month to fix more serious issues.

Users of edge.launchpad.net

If you are in the beta team, and use edge.launchpad.net on a regular basis, it won't be that different from how it works today. Just like today, you would be exposed to features that are under development. What would change is that we will try to do a better job at telling you which features are on edge.launchpad.net only. This way you will have a better chance at actually helping us test and use the features, and telling us about any problems, so that we can fix them right away. This should make you more aware of new features that are being added to Launchpad, and provide a better opportunity for you to make them better.

One potential source of problems here is that developers will know that their work won't end up on launchpad.net before they say it's ready, so they might push rougher features to edge.launchpad.net. Thus it could be a rockier ride than today. But of course, our developers care a lot about their users, so they won't land their work unless it's really good! :-)

Conclusion

My hope is that this will provide a better and more stable experience for users of launchpad.net, and give users of edge.launchpad.net a better opportunity to help us make new features rock! But I'm interested to hear what you, the actual users, think about this.


I made the transition from Bugs team lead to Launchpad Technical Architect quite a while ago. While my time so far has been spent mainly on satisfying my coding desires, it's time to define what I'm going to spend my time on as Technical Architect! My road map, which shows the high-level things that I'll be focusing on, is available here:

I'll also be writing blog posts (and sending mails to the launchpad-dev mailing list of course) regularly to keep people updated with my progress and findings. My blog is at http://tillenius.me/ and I tag all posts related to Launchpad with launchpad.

I'm currently working on decoupling our feature development cycles from our release cycles, which I do mainly because I think it's important, not because it's part of the Technical Architect's responsibilities. But in parallel with that, my next task is to set up a team that can help me do a good job. I'll expand more on the team in another post, but in short it will consist of members from each sub-team in Launchpad. It will act as a forum to discuss what needs my attention, and it will also help me figure out solutions to problems, and help me implement those solutions.

One of the first major tasks will be to come up with a testing strategy. Currently, when we write tests, we don't think that much about it. Everyone has their own preferences, and we have a wide variety of testing styles, making it hard to find out which code paths are exercised by which tests, and how good our test coverage is. This leads to us sometimes having bad test coverage, and sometimes having too much test coverage, i.e. redundant tests that make the test suite slower. Coming up with guidelines on how to write tests, which kinds of tests to write, where to place them, etc., is the first step. But we also need to figure out how to make our test suite faster, what kind of documentation to provide, and so on.

In addition to the tasks on the roadmap, I also have a number of things I do on a regular basis. This includes reviewing database patches for API consistency, helping teams design features from a technical point of view, and keeping my eyes open for areas in the code that need refactoring and clean-up.


We've used Windmill in our Launchpad buildbots for a while now, and it has actually worked out quite well. I was afraid that we would have a lot of fallout, since in the beginning Windmill was fragile and caused a lot of intermittent test failures. However, so far I'd say that we've had very few problems. There was one intermittent failure, but it was known from the beginning that it would fail eventually. Apart from that we've had only one major issue: something uses 100% CPU when our combined Javascript file is bigger than 512,000 bytes. This stopped people from landing Javascript additions for a while, and we still haven't resolved the issue, apart from making the file smaller.

There are some things that would be nice to improve with regards to Windmill. The most important thing is to make sure that launchpad.js can be bigger than 512,000 bytes.

It would also be nice to make the test output nicer. At the moment Windmill logs quite a lot to stderr, making it look like a test failed, even though it didn't. We don't really want Windmill to log anything, unless it's a critical failure.

I was going to say that we also have some problems related to logging in (because we look at a possibly stale page to decide whether the user is logged in), but it seems like Salgado already fixed it!

It would also be nice to investigate the problem where asserting a node directly after waiting for it sometimes fails. We had problems like that before; code was waiting for an element, and when using assertNode directly after the wait, the node still didn't exist. I haven't seen any test fail like that lately, so it might have been fixed somehow.

There are some other things I could think of that would be nice to have. I haven't found any bugs filed for them, but I'll list them here.

  • Don't run the whole test suite under xvfb-run. It'd be better to start xvfb only for the Windmill tests.
  • Use xvfb by default for Windmill tests. When running the Windmill tests it's quite annoying to have Firefox pop up now and then. It'd be better to run them headless by default (see the sketch after this list).
  • Switches for making debugging easier. Currently we shut down Firefox after running the Windmill tests. It should be possible to have Firefox remain running after the test has finished running, so that you can manually poke around if you want to. If we use xvfb by default, we also need a switch for not using it.
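For the first two points, I imagine an invocation roughly like this (a sketch; "-t windmill" is a hypothetical filter for selecting the Windmill tests with our bin/test runner):

xvfb-run --auto-servernum bin/test -t windmill

That would start a private X server just for the Windmill tests and keep Firefox off the visible display, while the rest of the test suite runs without xvfb.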


Block on test failures

I can't stress enough how important it is to automatically block, to stop the line, when a regression occurs, forcing someone to take action. Don't think it's enough to have tests to catch regressions. They won't help you much unless you run them automatically and, most importantly, block on test failures, forcing someone to fix them.

In Launchpad we have been quite good at this in the past. From the very beginning we ran the whole test suite before every commit. If a test failed, the commit wasn't made, and the committer had to make the test pass before being able to commit those changes to the mainline. Now we have something similar. For performance reasons, instead of running the tests before the commit, we run them after. If a test fails, we enter testfix mode, blocking all commits to mainline until the test passes again.

But when we decided to bring AJAX into the equation, we failed to do the same for the new test infrastructure we added. We use Windmill to test our AJAX functionality, and since it was a bit flaky, and it wasn't trivial to integrate it into our existing test suite, we thought it was enough to be able to run the tests manually to avoid regressions. This was a big mistake. Not many people are going to run the tests manually, so regressions are bound to sneak in without anyone noticing. Believe me, I know. I integrated the Windmill tests into our normal zope.testing test runner, and when I did, I found out that a lot of our Windmill tests were actually failing. We set up a buildbot builder to run the Windmill tests automatically, hoping that it would make regressions less likely to be introduced without us knowing about it. It helped a bit; we actually did catch a few regressions, but it was hard manual work. It required someone (me!) to keep an eye on the buildbot runs, look through the test log, and chase people to fix failures. This led to not all tests passing most of the time, which made it even harder to notice new regressions. So while simply having the tests run automatically helps a bit, it still requires a lot of discipline and manual work to prevent regressions from going unnoticed.

That's why I'm pleased to announce that Windmill tests are now included in the regular Launchpad test suite, which means that when a Windmill test fails, we will enter testfix mode and be forced to take action. It will be a bit painful in the beginning; I'm sure that we will see some spurious test failures. However, I'm also sure it will be less painful than it has been to keep the current Windmill test suite under control.

The next time you add new testing infrastructure, let's include it in the regular test suite from the beginning, OK?


When doing work on something that is supposed to be used by others, don't forget to think about how it's actually going to be used. Not only to think about it, but to actually try it out, to confirm that it works nicely when integrated, and that it's easy to integrate it. And let's not forget to document how to integrate it, and ideally to test it as well.

As an example, in Launchpad we use lazr-js for our Javascript infrastructure. We recently changed the way it's integrated into Launchpad, giving it a proper setup.py file, so that we can generate an egg and depend on it through Buildout. The integration issue was of course taken into account there, making sure it was easy to build lazr-js both standalone and when used in another project, like Launchpad. There was one command to build everything, which is simple enough. However, one thing wasn't done: it wasn't documented how you should use lazr-js in another project. Therefore, when people continued to develop lazr-js, adding more features and making the build system more complicated, there wasn't much thought about keeping it easy to build lazr-js in other projects. The build process became more complicated; multiple commands had to be executed. This is fine when building lazr-js by itself, since all you have to do is make build. However, when using lazr-js as an egg, you don't have access to the Makefile, which means that you have to duplicate the build steps. Therefore, having the build be more than one command makes it harder to use elsewhere. In fact, the build process of lazr-js changed so much that we didn't know anymore how to properly use the latest version of lazr-js in Launchpad.

This is just one example of integrating external libraries, but the same is true for code internal to the project. When developing code that is to be used by multiple call sites, it's important to think about how it's actually going to be used. It's easy to get carried away, developing what you think is a really nice API. But then when people start to use it, it turns out that it's not so nice.

What can be done to avoid integration issues? Ideally, you should document and test how the integration is supposed to work. By doing this, you get a feel for how to use the API. Doctests are actually quite nice for this purpose. If you manage to produce a readable doctest, it's quite probable that your API is easy enough to use.

Sometimes adding tests for integration isn't feasible; in the case of lazr-js, for example, it's not that easy. What I do when I develop on lazr-js is to have a throw-away Launchpad branch, where I use my lazr-js branch and manually make sure that it works nicely when integrated. Take a look at how it looks when integrated. What steps do you have to go through to use it? Is that something that you will want to do for every call site? Are people likely to copy and paste an existing example to use your code? If the answer to the last question is yes, your code is not easy enough to use.


I will probably implement this myself at some point, but if anyone wants to use their bzr and launchpadlib skills to make the world a slightly better place, I'd be grateful. You don't even have to have any bzr or launchpadlib skills; both are quite easy to get started with. This could be a great opportunity to learn more about them!

I want a bzr plugin that queries Launchpad and lists all my branches that aren't Merged or Abandoned. I want it to work in the context of a branch, so that it automatically knows which project I'm interested in, although having it work outside a branch and list branches for all my projects would be useful as well.

To remove branches from the list of active branches, I'd also like to be able to mark a branch as Abandoned using the plugin.

Bonus points if any attached merge proposals and their status also are listed.
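As a starting point, the querying part could look roughly like this with launchpadlib (a sketch only; the application name is arbitrary, and a real plugin would wrap this in a bzr command):

from launchpadlib.launchpad import Launchpad

# Log in against the production Launchpad instance.
launchpad = Launchpad.login_with('list-active-branches', 'production')

for branch in launchpad.me.getBranches():
    # Skip branches whose lifecycle status says they're done with.
    if branch.lifecycle_status in ('Merged', 'Abandoned'):
        continue
    print('%s (%s)' % (branch.bzr_identity, branch.lifecycle_status))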


In PyGarmin, I used PyUSB to implement USB support, and I was struck by one odd error. Sometimes a USBError was raised with the error message "No error". I couldn't find any documentation for this, and I still don't understand why an error was raised saying that there was no error.

The error happened when trying to read from the bus for the first time, after sending two packets without any errors. After some testing, I found out that simply trying to read again from the bus didn't work, but if I sent the two packets again, everything worked without any errors.
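In code, the workaround boils down to something like this (a rough sketch; send_packets and read_packet are hypothetical stand-ins for PyGarmin's actual USB-layer functions):

import usb  # PyUSB 0.x

def read_with_retry(dev, packets):
    # Send the packets and try to read the response.
    send_packets(dev, packets)
    try:
        return read_packet(dev)
    except usb.USBError as e:
        if 'No error' not in str(e):
            raise
        # Reading again isn't enough; resending the packets and
        # then reading works without any errors.
        send_packets(dev, packets)
        return read_packet(dev)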

For those interested, I fixed this in revision 91 of PyGarmin.
