The magic number of usability

We have all run into software inconveniences. These are things that you can basically do, but that for one reason or another are unintuitive, hard or needlessly complex. When you air your concerns about these issues, sometimes they get fixed. At other times you get back a reply starting with “Well, in general you may have a point, but …”.

The rest of the sentence is something along the lines of these examples:

“… you only have to do it once so it’s no big deal.”
“… there are cases where [e.g. autodetection] would not work so having the user [manually do task X] is the only way to be reliable.”
“… I don’t see any problem, in fact I like it the way it is.”
“… replacing [the horrible thing in question] with something better is too much work.”
“… fixing that would change things and change is bad.”
“… that is the established standard way of doing things.”
“… having a human being write that in is good, it means that the input is inspected.”

These are all fine and acceptable arguments under certain circumstances. In fact, they are great! Let’s see what life would be like in a parallel universe where people had followed them slavishly.

Booting Linux: an adventure for the brave

You arrive at your work computer and turn it on. The LILO boot prompt comes up as usual. You type in the partition you want to boot from. This you must do every time, because you might have changed the partition settings and thus made LILO go out of sync. You type in your boot stanza, secure in the knowledge that you get a 100% rock solid boot every time.

Except when you have a typo in your boot command, but a computer can’t be expected to work around that. And that happens only rarely anyway, and why would you boot your computer more than once a month?

Once the kernel has loaded, you type in the kernel modules you need to use the machine. You also type in all the extra parameters those modules require, because some chipsets may work incorrectly sometimes (or so you have been told). So you type in a dozen or so strings of hexadecimal numbers and really enjoy it in a stockholmesque way.

Finally all the data is put in and the system boots itself. Then it is time to type in your network settings. In this universe there is no Protocol to Configure network Host settings Dynamically. And why would there be? Any bug in such a system would render the entire network unusable. No, the only way to ensure that things work is to configure network settings by hand every time. Errors in settings cause only one machine to break, not the entire network. Unless you mix up gateway/netmask/IP addresses, but surely no one is that stupid? And if they are, it’s their own damn fault! Having things fail spectacularly is GOOD, because it shames people into doing the right thing.

After this and a couple of other simple things (each of which you only need to do once, remember) you finally have a working machine. You log on.

Into a text console, naturally. Not all people need X so it should not be started by default. Resources must be used judiciously after all.

But you only need to start X once per session, so no biggie. Just like you only need to type in your monitor modeline once per X startup, because autodetection might fail and cause hardware failure. The modeline cannot be stored in a file and used automatically, because you might have plugged in a different monitor. Typing it in every time is the only way to be sure. Or would you rather die horribly in a fire caused by incorrect monitor parameters?

After all that is done you can finally fire up an XTerm to start working. But today you feel like increasing the font size a bit. This is about as simple as it gets. XTerm stores the list of font sizes it will display in XResources. All you have to do is edit them, shut down X and start it up again.

Easy as pie. And the best part: you only have to do this once.

Well, once every time you want to add new font sizes. But how often is that, really?

Summary

The examples listed above are but a small fraction of reality. If computer users actually had to do all these “one time only” things, those tasks could easily take up the entire eight-hour work day. The reason they don’t is that some developer Out There has followed this simple rule:

Almost every task that needs to be done “one time only” should, in fact, be done exactly zero times.

ZERO!

The cost of #include

Using libraries in C++ is simple. You just write #include <libname> and all the necessary definitions appear in your source file.

But what is the cost of this single line of code?

Quite a lot, it turns out. Measuring the effect is straightforward. Gcc has a compiler flag, -E, which runs only the preprocessor on the given source file. The cost of an include can be measured by writing a source file that contains only one directive: #include <filename>. The number of lines in the resulting file tells how much code the compiler needs to parse in order to use the library.
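As a concrete illustration, here is a minimal sketch of that measurement. The file and header names are arbitrary examples; substitute whatever header you want to measure.

    // include_cost.cpp -- a source file containing nothing but one include.
    #include <vector>

    // Run only the preprocessor and count the lines it produces:
    //   g++ -E include_cost.cpp | wc -l
    //
    // Time the preprocessing step alone:
    //   time g++ -E include_cost.cpp > /dev/null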

Here is a table with measurements. They were run on a regular desktop PC with 4 GB of RAM and an SSD. The tests were run several times to ensure that everything was in cache. The machine was running 64 bit Ubuntu 12.10 and the compiler was gcc.

Header                     LOC    Time (s)

map                       8751    0.02
unordered_map             9728    0.03
vector                    9964    0.02
Python.h                 11577    0.05
string                   15791    0.07
memory                   17339    0.04
sigc++/sigc++.h          21900    0.05
boost/regex.h            22285    0.06
iostream                 23496    0.06
unity/unity.h            28254    0.14
xapian.h                 36023    0.08
algorithm                40628    0.12
gtk/gtk.h                52379    0.26
gtest/gtest.h            53588    0.12
boost/proto/proto.hpp    78000    0.63
gmock/gmock.h            82021    0.18
QtCore/QtCore            82090    0.22
QtWebKit/QtWebKit        95498    0.23
QtGui/QtGui             116006    0.29
boost/python.hpp        132158    3.41
Nux/Nux.h               158429    0.71

It should be noted that the elapsed time is only the time it takes to run the code through the preprocessor. This is relatively simple compared to parsing the code and generating the corresponding machine instructions. I ran the tests with Clang as well and the times were roughly similar.

Even the most common headers, such as vector, add almost 10k lines of code whenever they are included. This is quite a lot more than most of the source files that use them. At the other end of the spectrum is stuff like Boost.Python, which takes over three seconds to include. An interesting question is why it is so much slower than Nux, even though it has fewer lines of code.

This is the main reason why spurious includes need to be eliminated. Merely having the include directive wastes time, even if the features in question are never used. Placing a slow include in a widely used header file can cause massive slowdowns across an entire project. So if you could go ahead and not do that, that would be great.
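One standard way to keep a heavy include out of a header is to forward-declare the type and move the #include into the implementation file. This is a general C++ technique, not something measured above, and Painter, painter.h and paint() are hypothetical names used purely for illustration:

    // widget.h -- declares Widget without including the heavy header.
    #ifndef WIDGET_H
    #define WIDGET_H

    class Painter;  // forward declaration: enough for pointers and references

    class Widget {
    public:
        Widget();
        ~Widget();
        void draw();
    private:
        Painter* painter;  // only a pointer, so an incomplete type suffices
                           // (ownership handling kept minimal for the sketch)
    };

    #endif

    // widget.cpp -- the only file that pays the preprocessing cost.
    #include "widget.h"
    #include "painter.h"  // the hypothetical slow header, included exactly once

    Widget::Widget() : painter(new Painter()) {}
    Widget::~Widget() { delete painter; }
    void Widget::draw() { painter->paint(); }

With this layout every file that includes widget.h avoids the cost of painter.h entirely; only widget.cpp has to parse it.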