Smooth scrolling available in Chromium

What currently happens when you drag two fingers on a touchpad is that the X server intercepts those touches and sends mouse wheel events to applications. The semantics of a mouse wheel event are roughly “move down/up three lines”. This is jerky and not very pleasant. There has been no way of doing pixel-perfect scrolling.

With the recent work on X multitouch and the uTouch gesture stack, smoothness has now become possible. Witness pixel-accurate scrolling in Chromium in this YouTube video.

The remaining jerkiness in the video is mainly caused by Chromium redrawing its window contents from scratch whenever the viewport is moved.

The code is available on Chromium’s code review site.

Speed bumps hide in places where you least expect them

The most common step in creating software is building it. Usually this means running make or an equivalent tool and waiting. This step is so universal that most people don’t even think about it actively. If one were to look at what the computer is doing during a build, one would see compiler processes taking 100% of the machine’s CPUs. Thus the system is working as fast as it possibly can.

Right?

Some people working on Chromium doubted this and built their own replacement for Make called Ninja. It is basically the same as Make: you specify a list of dependencies and then tell it to build something. Since Make is one of the most used applications in the world and has been under development since the 70s, surely it is as fast as it can possibly be.

Right?

Well, let’s find out. Chromium uses a build system called Gyp that generates makefiles. Chromium devs have created a Ninja backend for Gyp. This makes comparing the two extremely easy.

Compiling Chromium from scratch on a dual-core desktop machine with makefiles takes around 90 minutes; Ninja builds it in less than an hour. A quad-core machine builds it with makefiles in ~70 minutes; Ninja takes ~40 minutes. Running make on a tree with no changes at all takes 3 minutes; Ninja takes 3 seconds.

So not only is Ninja faster than Make, it is faster by a huge margin, especially on the use case that matters most to the average developer: small incremental changes.

What can we learn from this?

There is an old (and very wise) saying that you should never optimize before you measure. In this case the measurement seemed to indicate that nothing was to be done: CPU load was already maximized by the compiler processes. But sometimes your tools give you misleading data. Sometimes they lie to you. Sometimes the “common knowledge” of the entire development community is wrong. Sometimes you just have to do the stupid, irrational, waste-of-time thing.

This is called progress.

PS I made quick-n-dirty packages of a Ninja git checkout from a few days ago and put them in my PPA. Feel free to try them out. There is also an experimental CMake backend for Ninja, so anyone with a CMake project can easily see what kind of speedup they would get.

Solution to all API and ABI mismatch issues

One of the most annoying things about creating shared libraries for other people to use is API and ABI stability. You start going somewhere, make a release and then realize that you have to totally change the internals of the library. But you can’t remove functions, because that would break existing apps. Nor can you change structs, the meanings of fields, or do any other maintenance work that would make your job easier. The only bright spot on the horizon is that eventually you can do a major release and break compatibility.

We’ve all been there and it sucks. If you choose to ignore stability because, say, you have only a few users who can just recompile their stuff, you get into endless rebuild cycles and so on. But what if there was a way to eliminate all this in one swift, elegant stroke?

Well, there is.

Essentially every single library can be reduced to one simple function call that looks kind of like this.

library_result library_do(const char *command, library_object *obj, ...)

The command argument tells the library what to do. The remaining arguments tell it what to do it to, and the result tells you what happened. Easy as pie!

So, to use a car analogy, here’s an example of how you would start a car.

library_object *car;
library_result result = library_do("initialize car", NULL);
car = RESULT_TO_POINTER(result);
library_do("start engine", car);
library_do("push accelerometer", car);

Now you have a moving car and you have also completely isolated the app from the library using an API that will never need to be changed. It is perfectly forwards, backwards and sideways compatible.

And it gets better. You can query capabilities on the fly and act accordingly.

if(RESULT_TO_BOOLEAN(library_do("has automatic transmission", car)))
  do_something();

Dynamic detection of features and changing behavior based on them makes apps work with every version of the library ever. The car could even be changed into a moped, tractor, or a space shuttle and it would still work.

For added convenience the basic commands could be given as constant strings in the library’s header file.
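
To make this concrete, here is a minimal sketch of what such a header might look like. Every name in it is hypothetical and invented purely for illustration, but it captures the spirit of the design described above.

/* library.h -- hypothetical sketch of the "universal" API */
typedef struct library_object library_object;   /* opaque, naturally */

typedef struct {
    void *ptr;
    int boolean;
} library_result;

#define RESULT_TO_POINTER(r) ((r).ptr)
#define RESULT_TO_BOOLEAN(r) ((r).boolean)

/* The "basic commands", published as constants for convenience. */
#define LIBRARY_CMD_INITIALIZE_CAR "initialize car"
#define LIBRARY_CMD_START_ENGINE   "start engine"

library_result library_do(const char *command, library_object *obj, ...);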

Deeper analysis

If you, dear reader, after reading the above text thought, even for one microsecond, that the described system sounds like a good idea, then you need to stop programming immediately.

Seriously!

Take your hands away from the keyboard and just walk away. As an alternative I suggest taking up sheep farming in New Zealand. There’s lots of fresh air and a sense of accomplishment.

The API discussed above is among the worst design abominations imaginable. It is the epitome of Making My Problem Your Problem. Yet variants of it keep appearing all the time.

The antipatterns and problems in this one single function call would be enough to fill a book. Here are just some of them.

Loss of type safety

This is the big one. The arguments in the function call can be anything and the result can be anything. So which one of the following should you use:

library_do("set x", o, int_variable);
library_do("set x", o, &int_variable);
library_do("set x", o, double_variable);
library_do("set x", o, &double_variable);
library_do("set x", o, value_as_string)

You can’t really know without reading the documentation. Which you have to do every single time you use any function. If you are lucky, the calling convention is the same on every function. It probably is not. Since the compiler does not and cannot verify correctness, what you essentially have is code that works either by luck or by faith.

The only way to know for sure what to do is to read the source code of the implementation.
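
For example, a hypothetical implementation of the “set x” command might look like the sketch below. Nothing in the header or the documentation reveals that a pointer to double is the one and only accepted argument; only the source does.

/* library.c -- hypothetical implementation, the only place the truth lives */
#include <stdarg.h>
#include <string.h>

struct library_object { double x; };
typedef struct library_object library_object;
typedef struct { void *ptr; int boolean; } library_result;

library_result library_do(const char *command, library_object *obj, ...)
{
    library_result result = { 0 };
    va_list ap;
    va_start(ap, obj);
    if (strcmp(command, "set x") == 0) {
        /* Surprise: it expects a pointer to double, nothing else works. */
        double *value = va_arg(ap, double *);
        obj->x = *value;
        result.boolean = 1;
    }
    /* ...dozens of other commands elided... */
    va_end(ap);
    return result;
}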

Loss of tools

There are a lot of nice tools to help you: IDE code autocompletion, API inspectors, Doxygen, even the compiler itself, as discussed above.

If you go the generic route you throw away all of these tools. They represent dozens upon dozens of man-years of work spent just to make your job easier. All of that is gone. Poof!

Loss of debuggability

One symptom of this disease is putting data in dictionaries and other high-level containers rather than in plain variables, to “allow easy expansion in the future”. This is workable in languages such as Java or Python, but not in C/C++. Here is a screengrab from a gdb session demonstrating why this is a terrible idea:

(gdb) print map
$1 = {_M_t = {
    _M_impl = {<std::allocator<std::_Rb_tree_node<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> > const, int> > >> = {<__gnu_cxx::new_allocator<std::_Rb_tree_node<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> > const, int> > >> = {<No data fields>}, <No data fields>},
      _M_key_compare = {<std::binary_function<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool>> = {<No data fields>}, <No data fields>}, _M_header = {_M_color = std::_S_red, _M_parent = 0x607040,
        _M_left = 0x607040, _M_right = 0x607040}, _M_node_count = 1}}}

Your objects have now become undebuggable. Or at the very least extremely cumbersome, because you have to dig out the information you need one tedious step at a time. If the error is non-obvious, it’s source code diving time again.

Loss of performance

Functions are nice. They are type-safe, easy to understand and fast. The compiler might even inline them for you. Generic action operators are not.

Every single call to the library needs to first go through a long if/else tree to inspect which command was given, or do a hash table lookup or something similar. This means that every single function call turns into a massive blob of code that destroys branch prediction and pipelining and all those other wonderful things HW engineers have spent decades optimizing for you.
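
To see why, compare a hypothetical direct accessor with its generic counterpart. The names are invented for illustration, but the shape of the code is what any implementation of this pattern ends up with.

#include <string.h>

typedef struct { double speed; double fuel; } car;

/* Direct API: type-safe, and the compiler can trivially inline it. */
static inline double car_get_speed(const car *c) { return c->speed; }

/* Generic API: every call walks a chain of string comparisons first. */
static double car_do_get(const car *c, const char *command)
{
    if (strcmp(command, "get speed") == 0)
        return c->speed;
    else if (strcmp(command, "get fuel") == 0)
        return c->fuel;
    /* ...and so on for every command the library understands... */
    return -1.0; /* unknown command; the caller must check for this, too */
}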

Loss of error-freeness

The code examples above have been too clean. They have ignored the error cases. Here are two lines of code to illustrate the difference.

x = get_x(obj); // Can not possibly fail
status = library_do("get x", obj); // Anything can happen

Since the generic function cannot provide any guarantees the way a plain function can, you always have to inspect the result it provides. Maybe you misspelled the command. Maybe this particular object does not have an x value. Maybe it used to, but the library internals have changed (which was the point of all this, remember?). So the user has to inspect every single call, even for operations that cannot possibly fail. Because they can, they will, and if you don’t check, it is your fault!
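
With the hypothetical macros from the header sketched earlier, even the trivial getter turns into defensive boilerplate like this (report_error is a made-up helper):

double x;
library_result r = library_do("get x", obj, &x);  /* out-argument, per the docs */
if (!RESULT_TO_BOOLEAN(r)) {
    /* Misspelled command? Object without an x? Library internals changed?
       We cannot tell, but we must handle it here anyway. */
    return report_error("getting x failed");
}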

Loss of consistency

When people are confronted with APIs such as these, the first thing they do is write wrapper functions to hide the ugliness. Instead of a direct function call you end up with a massive generic invocation blob thingie wrapped in a function call that is indistinguishable from the direct function call it replaces.

The end result is an abstraction layer covered by an anti-abstraction layer; a concretisation layer, if you will.

Several layers, actually, since every user will code their own wrapper with their own idiosyncrasies and bugs.
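
Using the hypothetical names from the earlier sketches, each of those wrappers ends up looking roughly like this: a plain function call, reinvented locally, with all the stringly-typed machinery still hiding inside.

/* One project's private wrapper around the generic blob. */
static int car_start_engine(library_object *car)
{
    library_result r = library_do("start engine", car);
    return RESULT_TO_BOOLEAN(r);
}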

Loss of language features

Let’s say you want the x and y coordinates from an object. Usually you would use a struct. With a generic getter you cannot, because a struct implies a memory layout and is thus part of the API and ABI. Since we can’t have that, all arguments must be elementary data types, such as integers or strings. What you end up with are constructs such as this abomination here (error checking and the like omitted for sanity):

obj = RESULT_TO_POINTER(library_do("create FooObj", NULL));
library_do("set constructor argument a", obj, 0);
library_do("set constructor argument b", obj, "hello");
library_do("set constructor argument c", obj, 5L);
library_do("run constructor", obj);

Which is so much nicer than

object *obj = new_object(0, "hello", 5); // No need to cast to long, the compiler does that conversion automatically.

Bonus question: how many different potentially failing code paths can you find in the first code snippet and how much protective code do you need to write to handle all of them?

Where does it come from?

These sorts of APIs usually stem from their designers’ desire to “not limit choices needlessly”, or “make it flexible enough for any change in the future”. There are several different symptoms of this tendency, such as the inner platform effect, the second system effect and soft coding. The end result is usually a framework framework framework.

How can one avoid this trap? There is really no definitive answer, but there is a simple guideline to help you get there. Simply ask yourself: “Is this code solving the problem at hand in the most direct and obvious way possible?” If the answer is no, you probably need to change it. Sooner rather than later.

Things just working

I have a MacBook with a bcm4331 wireless chip that had not been supported in Linux. A driver was added in kernel 3.2, so I was anxious to test it when I upgraded to Precise.

After the update there was no net connection. The network indicator said “Missing firmware”. So I scoured the net and found the steps necessary to extract the firmware file to the correct directory.

I typed the command and pressed enter. That exact second my network indicator started blinking and a few seconds later it had connected.

Without any configuration, kernel module unloading/loading or “refresh state” button prodding.

It just worked. Automatically. As it should. And even before it worked it gave a sensible and correct error message.

To whoever coded this functionality: I salute you.

More uses for btrfs snapshots

I played around with btrfs snapshots and discovered two new interesting uses for them. The first one deals with unreliable operations. Suppose you want to update a largish SVN checkout but your net connection is slightly flaky. The reason can be anything: bad wires, an overloaded server, electrical outages, and so on.

If SVN is interrupted mid-transfer, it will most likely leave your checkout in an inconsistent state that can’t be fixed even with ‘svn cleanup’. The common wisdom on the Internet is that the way to fix this is to delete or rename the erroneous directory and do an ‘svn update’, which will either work or not. With btrfs snapshots you can just take a snapshot of your source tree before the update. If the update fails, nuke the broken directory, restore your snapshot and try again. If it works, just get rid of the snapshot dir.
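
In concrete terms the workflow might look like this; the directory names are made up, and the checkout itself has to live in a btrfs subvolume for the snapshot to work.

btrfs subvolume snapshot src src_before_update
svn update src                               # the flaky part
# If the update left the tree broken:
sudo btrfs subvolume delete src
btrfs subvolume snapshot src_before_update src
# ...and run the update again. Once it succeeds:
sudo btrfs subvolume delete src_before_update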

What you essentially gain are atomic operations on non-atomic tasks (such as svn update). This has been possible before with ‘cp -r’ or similar hacks, but they are slow. Btrfs snapshots can be done in the blink of an eye and they take practically no extra disk space.

The other use case is erroneous state preservation. Suppose you hack on your stuff and encounter a crashing bug in your tools (such as bzr or git). You file a bug on it and then get back to doing your own thing. A day or two later you get a reply on your bug report saying “what is the output of command X”. Since you don’t have the given directory tree state around any more, you can’t run the command.

But if you snapshot your broken tree and store it somewhere safe, you can run any analysis scripts on it any time in the future. Even potentially destructive ones, because you can always run the analysis scripts in a fresh snapshot. Earlier these things were not feasible, because making copies took time and possibly lots of space. With snapshots they take neither.

Fun stuff with btrfs

I work on, among other things, Chromium. It uses SVN as its revision control system. There are several drawbacks to this, which are well known (no offline commits etc). They are made worse by Chromium’s enormous size. An ‘svn update’ can easily take over an hour.

Recently I looked into using btrfs’s features to make things easier. I found that with very little effort you can make things much more workable.

First you create a btrfs subvolume.

btrfs subvolume create chromium_upstream

Then you check out Chromium to this directory using the guidelines given in their wiki. Now you have a pristine upstream SVN checkout. Then build it once. No development is done in this directory. Instead we create a new directory for our work.

btrfs subvolume snapshot chromium_upstream chromium_feature_x

And roughly three seconds later you have a fresh copy of the entire source tree and the corresponding build tree. Any changes you make to individual files in the new directory won’t cause a total rebuild (which also takes hours). You can hack with complete peace of mind knowing that in the event of failure you can start over with two simple commands.

sudo btrfs subvolume delete chromium_feature_x
btrfs subvolume snapshot chromium_upstream chromium_feature_x

Chromium upstream changes quite rapidly, so keeping up with it with SVN can be tricky. But btrfs makes it easier.

cd chromium_upstream
gclient sync # Roughly analogous to svn update.
cd ..
btrfs subvolume snapshot chromium_upstream chromium_feature_x_v2
cd chromium_feature_x/src && svn diff > ../../thingy.patch && cd ../..
cd chromium_feature_x_v2/src && patch -p0 < ../../thingy.patch && cd ../..
sudo btrfs subvolume delete chromium_feature_x

This approach can be taken with any tree of files: images, even multi-gigabyte video files. Thanks to btrfs’s design, multiple copies of these files take roughly the same amount of disk space as only one copy. It’s kind of like having backup/restore and revision control built into your file system.

The four stages of command entry

Almost immediately after the first computers were invented, people wanted them to do as they were told. This process has gone through four distinct phases.

The command line

This was the original way. The user types his command in its entirety and presses enter. The computer then parses it and does what it is told. There is no indication of whether the written command is correct or not; the only way to test it is to execute it.

Command completion

An improvement to writing the correct command. The user types in a few letters from the start of the desired command or file name and presses tab. If there is only one choice that begins with those letters, the system autofills the rest. Modern autocompletion systems can fill in command line arguments, host names and so on.

Live preview

This is perhaps best known from IDEs. When the user types some letters, the IDE presents all choices that correspond to those letters in a pop-up window below the cursor. The user can then select one of them or keep writing. Internet search sites also do this.

Live preview with error correction

One thing in common with all the previous approaches is that the input must be perfect. If you search for Firefox but accidentally type in “ifrefox”, the system returns zero matches. Error-correcting systems try to find what the user wants even if the input contains errors. This is a relatively new approach, with examples including Unity’s new HUD and Google’s search (though the live preview does not seem to do error correction).

The future

What is the next phase in command entry? I really have no idea, but I’m looking forward to seeing it.

Complexity kills

The biggest source of developer headache is complexity. Specifically unexpected complexity. The kind that pops out of nowhere from the simplest of settings and makes you rip your hair out.

As an example, here is a partial and simplified state machine for what should happen when using a laptop’s trackpad.

If you have an idea of what should happen in the states marked “WTF?”, do send me email.

What is worse than having a problem?

The only thing worse than having a problem is having a poor solution to a problem.

Why?

Because that prevents a good solution from being worked out. The usual symptom of this is having a complicated and brittle Rube Goldberg machine to do something that really should be simple. It’s just that nobody bothers to do the Right Thing, because the solution we have almost kinda, sorta works most of the time, so there’s nothing to worry about, really.

Some examples include the following:

  • X used to come with a configurator application that would examine your hardware and print a conf file, which you could then copy over (or merge with) the existing conf file. Nowadays X does the probing automatically.
  • The X clipboard was a complete clusterf*ck, but since middle-button paste mostly worked, it was not seen as an issue.
  • The world is filled with shell script fragments with the description “I needed this for something long ago, but I don’t remember the reason any more and am afraid to remove it”.
  • Floppies (remember those?) could be ejected without unmounting them, causing corruption and other fun.

How can you tell when you have hit one of these issues? One sign is that you get one of the following responses:

  • “Oh, that’s a bit unfortunate. But if you do [complicated series of steps] it should work.”
  • “You have to do X before you do Y. Otherwise it just gets confused.”
  • “It does not do X, but you can do almost the same with [complicated series of steps] though watch out for [long list of exceptions].”
  • “Of course it will fail [silently] if you don’t have X. What else could it do?”
  • “You ran it with incorrect parameters. Just delete all your configuration files [even the hidden ones] and start over.”

If you ever find yourself in the situation of getting this kind of advice, or, even worse, giving it out to other people, please consider spending some effort on fixing the issue properly. You will be loved and adored if you do.

Rooting basic infrastructure for fun and profit

You know how we laugh at users of certain other operating systems for running random binaries they get from the Internet?

Well, we do it too. And instead of doing it on our personal machines, we do it on the servers that run our most critical infrastructure.

Here is a simple step-by-step plan that you can use to take over all Linux distributions’ master servers.

  1. Create a free software project. It can be anything at all.
  2. Have it included in the distros you care about.
  3. Create/buy a local exploit trojan.
  4. Create a new minor release of your project.
  5. Put your trojan inside the generated configure script.
  6. Boom! You have now rooted the build machines (with signing keys, etc.) of every single distro.

Why does this exploit work? Because a generated configure script is essentially an uninspectable blob of machine-generated code. No one is going to audit that code, and the default packaging scripts blindly use a configure script if one exists.

Trojans in configure scripts have been found in the wild.

So not only are the Autotools a horrible build system, they are also a massive security hole. By design.

Post scriptum: A simple fix to this is to always generate the configure script yourself rather than using the one that comes with the tarball. But then you lose the main advantage of Autotools: that you don’t need special software installed on the build machine.