Speed bumps hide in places where you least expect them

The most common step in creating software is building it. Usually this means running make or an equivalent tool and waiting. This step is so universal that most people don’t even think about it actively. If you look at what the computer is doing during a build, you will see compiler processes taking 100% of the machine’s CPUs. Thus the system is working as fast as it possibly can.

Right?

Some people working on Chromium doubted this and built their own replacement for Make called Ninja. It is basically the same as Make: you specify a list of dependencies and then tell it to build something. Since Make is one of the most used applications in the world and has been under development since the 70s, surely it is already as fast as it can possibly be.

Right?

Well, let’s find out. Chromium uses a build system called Gyp that generates makefiles. Chromium devs have created a Ninja backend for Gyp. This makes comparing the two extremely easy.

Compiling Chromium from scratch on a dual core desktop machine with makefiles takes around 90 minutes. Ninja builds it in less than an hour. A quad core machine builds Chromium in ~70 minutes. Ninja takes ~40 minutes. Running make on a tree with no changes at all takes 3 minutes. Ninja takes 3 seconds.

So not only is Ninja faster than Make, it is faster by a huge margin and especially on the use case that matters for the average developer: small incremental changes.

What can we learn from this?

There is an old (and very wise) saying that you should never optimize before you measure. In this case the measurement seemed to indicate that nothing was to be done: CPU load was already maximized by the compiler processes. But sometimes your tools give you misleading data. Sometimes they lie to you. Sometimes the “common knowledge” of the entire development community is wrong. Sometimes you just have to do the stupid, irrational, waste-of-time thingie.

This is called progress.

PS I made quick-n-dirty packages of a Ninja git checkout from a few days ago and put them in my PPA. Feel free to try them out. There is also an experimental CMake backend for Ninja, so anyone with a CMake project can easily see what kind of speedup they would get.

Solution to all API and ABI mismatch issues

One of the most annoying things about creating shared libraries for other people to use is API and ABI stability. You start going somewhere, make a release and then realize that you have to totally change the internals of the library. But you can’t remove functions, because that would break existing apps. Nor can you change structs or the meanings of fields, or do any other maintenance work that would make your job easier. The only bright spot on the horizon is that eventually you can do a major release and break compatibility.

We’ve all been there and it sucks. If you choose to ignore stability because, say, you have only a few users who can just recompile their stuff, you get into endless rebuild cycles and so on. But what if there was a way to eliminate all this in one, swift, elegant stroke?

Well, there is.

Essentially every single library can be reduced to one simple function call that looks kind of like this.

library_result library_do(const char *command, library_object *obj, ...)

The command argument tells the library what to do. The arguments tell it what to do it to, and the result tells you what happened. Easy as pie!

So, to use a car analogy, here’s an example of how you would start a car.

library_object *car;
library_result result = library_do("initialize car", NULL);
car = RESULT_TO_POINTER(result);
library_do("start engine", car);
library_do("push accelerometer", car);

Now you have a moving car and you have also completely isolated the app from the library using an API that will never need to be changed. It is perfectly forwards, backwards and sideways compatible.

And it gets better. You can query capabilities on the fly and act accordingly.

if(RESULT_TO_BOOLEAN(library_do("has automatic transmission", car)))
  do_something();

Dynamic detection of features and changing behavior based on them makes apps work with every version of the library ever. The car could even be changed into a moped, tractor, or a space shuttle and it would still work.

For added convenience the basic commands could be given as constant strings in the library’s header file.
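
A minimal sketch of what such a header might look like (the constant names here are made up for illustration):

/* commands the library promises to understand */
#define LIB_CMD_INITIALIZE_CAR "initialize car"
#define LIB_CMD_START_ENGINE   "start engine"
#define LIB_CMD_PUSH_ACCEL     "push accelerator"

/* used as: */
library_do(LIB_CMD_START_ENGINE, car);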

Deeper analysis

If you, dear reader, after reading the above text thought, even for one microsecond, that the described system sounds like a good idea, then you need to stop programming immediately.

Seriously!

Take your hands away from the keyboard and just walk away. As an alternative I suggest taking up sheep farming in New Zealand. There’s lots of fresh air and a sense of accomplishment.

The API discussed above is among the worst design abominations imaginable. It is the epitome of Making My Problem Your Problem. Yet variants of it keep appearing all the time.

The antipatterns and problems in this one single function call would be enough to fill a book. Here are just some of them.

Loss of type safety

This is the big one. The arguments in the function call can be anything and the result can be anything. So which one of the following should you use:

library_do("set x", o, int_variable);
library_do("set x", o, &int_variable);
library_do("set x", o, double_variable);
library_do("set x", o, &double_variable);
library_do("set x", o, value_as_string)

You can’t really know without reading the documentation. Which you have to do every single time you use any function. If you are lucky, the calling convention is the same on every function. It probably is not. Since the compiler does not and can not verify correctness, what you essentially have is code that works either by luck or faith.

The only way to know for sure what to do is to read the source code of the implementation.
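
For contrast, a plain typed setter (hypothetical, but representative of a normal API) leaves nothing to guess, because the compiler checks the call for you:

/* hypothetical typed equivalent of "set x" */
void library_set_x(library_object *obj, int x);

library_set_x(o, int_variable);  /* correct, and the compiler knows it */
library_set_x(o, &int_variable); /* flagged by the compiler: wrong type */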

Loss of tools

There are a lot of nice tools to help you. Things such as IDE code autocompletion, API inspectors, Doxygen, even the compiler itself as discussed above.

If you go the generic route you throw away all of these tools. They account for dozens upon dozens of man-years just to make your job easier. All of that is gone. Poof!

Loss of debuggability

One symptom of this disease is putting data in dictionaries and other high level containers rather than in plain variables to “allow easy expansion in the future”. This is workable in languages such as Java or Python, but not in C/C++. Here is a screengrab from a gdb session, printing a std::map<std::string, int> that holds a single entry, demonstrating why this is a terrible idea:

(gdb) print map
$1 = {_M_t = {
    _M_impl = {<std::allocator<std::_Rb_tree_node<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> > const, int> > >> = {<__gnu_cxx::new_allocator<std::_Rb_tree_node<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> > const, int> > >> = {<No data fields>}, <No data fields>},
      _M_key_compare = {<std::binary_function<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool>> = {<No data fields>}, <No data fields>}, _M_header = {_M_color = std::_S_red, _M_parent = 0x607040,
        _M_left = 0x607040, _M_right = 0x607040}, _M_node_count = 1}}}

Your objects have now become undebuggable. Or at the very least extremely cumbersome, because you have to dig out the information you need one tedious step at a time. If the error is non-obvious, it’s source code diving time again.

Loss of performance

Functions are nice. They are type-safe, easy to understand and fast. The compiler might even inline them for you. Generic action operators are not.

Every single call to the library needs to first go through a long if/else tree to inspect which command was given, or do a hash table lookup or something similar. This means that every single function call turns into a massive blob of code that destroys branch prediction and pipelining and all those other wonderful things HW engineers have spent decades optimizing for you.
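
As a rough sketch (this is not any real library’s code, just the shape such a dispatcher inevitably takes, reusing the hypothetical types from above), every call starts like this before any actual work happens:

#include <string.h>  /* strcmp */

library_result library_do(const char *command, library_object *obj, ...)
{
    library_result result = {0};
    if(strcmp(command, "start engine") == 0) {
        /* do the work */
    } else if(strcmp(command, "set x") == 0) {
        /* dig the value out of the varargs, then do the work */
    } else if(strcmp(command, "get x") == 0) {
        /* do the work, pack the value into result */
    } else {
        /* unknown command: yet another runtime error path */
    }
    return result;
}

A plain typed getter, in contrast, compiles down to a single direct call, or to nothing at all if the compiler inlines it.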

Loss of error-freeness

The code examples above have been too clean. They have ignored the error cases. Here are two lines of code to illustrate the difference.

x = get_x(obj); // Can not possibly fail
status = library_do("get x", obj); // Anything can happen

Since the generic function can not provide any guarantees the way a real function can, you always have to inspect the result it provides. Maybe you misspelled the command. Maybe this particular object does not have an x value. Maybe it used to, but the library internals have changed (which was the point of all this, remember?). So the user has to inspect every single call, even for operations that can not possibly fail. Because they can, they will, and if you don’t check, it is your fault!
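
In practice every call site grows its own defensive boilerplate, something along these lines (RESULT_IS_OK, RESULT_TO_INT and handle_error are hypothetical, in the spirit of the macros above):

library_result r = library_do("get x", obj);
if(!RESULT_IS_OK(r)) {
    /* Misspelled command? Object without an x? Changed internals? */
    handle_error(r);
}
x = RESULT_TO_INT(r);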

Loss of consistency

When people are confronted with APIs such as these, the first thing they do is to write wrapper functions to hide the ugliness. Instead of a direct function call you end up with a massive generic invocation blob thingie that gets wrapped in a function call that is indistinguishable from the direct function call.

The end result is an abstraction layer covered by an anti-abstraction layer; a concretisation layer, if you will.

Several layers, actually, since every user will code their own wrapper with their own idiosyncrasies and bugs.
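
A typical wrapper looks something like this sketch (using the RESULT_TO_BOOLEAN macro from the earlier example), reinventing exactly the plain function the library refused to provide in the first place:

/* every user writes their own slightly different version of this */
static int car_start_engine(library_object *car)
{
    return RESULT_TO_BOOLEAN(library_do("start engine", car));
}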

Loss of language features

Let’s say you want the x and y coordinates from an object. Usually you would use a struct. With a generic getter you can not, because a struct implies a memory layout and is thus part of the API and ABI. Since we can’t have that, all arguments must be elementary data types, such as integers or strings. What you end up with are constructs such as this abomination here (error checking and the like omitted for sanity):

obj = RESULT_TO_POINTER(library_do("create FooObj", NULL));
library_do("set constructor argument a", obj, 0);
library_do("set constructor argument b", obj, "hello");
library_do("set constructor argument c", obj, 5L);
library_do("run constructor", obj);

Which is so much nicer than

object *obj = new_object(0, "hello", 5); // No need for the 5L, the compiler converts the argument automatically.

Bonus question: how many different potentially failing code paths can you find in the first code snippet and how much protective code do you need to write to handle all of them?

Where does it come from?

These sorts of APIs usually stem from their designers’ desire to “not limit choices needlessly”, or “make it flexible enough for any change in the future”. There are several different symptoms of this tendency, such as the inner platform effect, the second system effect and soft coding. The end result is usually a framework framework framework.

How can one avoid this trap? There is really no definitive answer, but there is a simple guideline to help you get there. Simply ask yourself: “Is this code solving the problem at hand in the most direct and obvious way possible?” If the answer is no, you probably need to change it. Sooner rather than later.

Things just working

I have a MacBook with a bcm4331 wireless chip that was not supported in Linux until the driver was added in kernel 3.2. I was eager to test this when I upgraded to Precise.

After the update there was no net connection. The network indicator said “Missing firmware”. So I scoured the net and found the steps necessary to extract the firmware file to the correct directory.

I typed the command and pressed enter. That exact second my network indicator started blinking and a few seconds later it had connected.

Without any configuration, kernel module unloading/loading or “refresh state” button prodding.

It just worked. Automatically. As it should. And even before it worked it gave a sensible and correct error message.

To whoever coded this functionality: I salute you.

More uses for btrfs snapshots

I played around with btrfs snapshots and discovered two new interesting uses for them. The first one deals with unreliable operations. Suppose you want to update a largish SVN checkout but your net connection is slightly flaky. The reason can be anything, bad wires, overloaded server, electrical outages, and so on.

If SVN is interrupted mid-transfer, it will most likely leave your checkout in an inconsistent state that can’t be fixed even with ‘svn cleanup’. The common wisdom on the Internet is that the way to fix this is to delete or rename the erroneous directory and do an ‘svn update’, which will either work or not. With btrfs snapshots you can just take a snapshot of your source tree before the update. If it fails, just nuke the broken directory and restore your snapshot. Then try again. If it works, just get rid of the snapshot dir.

What you essentially gain are atomic operations on non-atomic tasks (such as svn update). This has been possible before with ‘cp -r’ or similar hacks, but they are slow. Btrfs snapshots can be done in the blink of an eye and, being copy-on-write, they take essentially no extra disk space.
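
As a sketch, assuming the checkout lives in its own btrfs subvolume (the directory names are made up, and depending on your setup the btrfs commands may need root):

# before the risky operation: the snapshot is instantaneous and copy-on-write
btrfs subvolume snapshot my-checkout my-checkout-backup
svn update my-checkout

# if the update died half way: throw away the broken tree, restore the snapshot
btrfs subvolume delete my-checkout
mv my-checkout-backup my-checkout

# if it worked: just drop the snapshot
btrfs subvolume delete my-checkout-backup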

The other use case is erroneous state preservation. Suppose you hack on your stuff and encounter a crashing bug in your tools (such as bzr or git). You file a bug on it and then get back to doing your own thing. A day or two later you get a reply on your bug report saying “what is the output of command X”. Since you don’t have the given directory tree state around any more, you can’t run the command.

But if you snapshot your broken tree and store it somewhere safe, you can run any analysis scripts on it at any time in the future. Even potentially destructive ones, because you can always run the analysis in a fresh snapshot. Earlier these things were not feasible, because making copies took time and possibly lots of space. With snapshots they take neither.