Simple thread safety with pimpl’d objects

A use case that pops up every now and then is a self-contained object that needs to be accessed from multiple threads. The problem appears when the object, as part of its normal operation, calls its own methods. This leads to tricky locking schemes, the need for a recursive mutex, or some other suboptimal workaround.
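
To see why, here is a contrived sketch (not from the original text): with a plain non-recursive std::mutex, a public method that calls another locking method of the same object deadlocks on the second lock attempt.

#include <mutex>

class Counter {
    std::mutex m;
    int value = 0;

public:
    void increment() {
        std::lock_guard<std::mutex> g(m);
        ++value;
    }

    void add_two() {
        std::lock_guard<std::mutex> g(m); // mutex taken here...
        increment(); // ...and again here: deadlock with a non-recursive mutex
        increment();
    }
};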

Another common approach is to use the pimpl idiom, which hides the implementation of an object inside a private object. There are ample details on the internet, but the basic setup of a pimpl’d class is the following. First we have the class header:

#include <memory>

class Foo {
public:
    Foo();
    ~Foo(); // must be defined in the implementation file, where Private is complete

    void func1();
    void func2();

private:
    class Private;
    std::unique_ptr<Private> p;
};

Then, in the implementation file, we first have the definition of the private class.

class Foo::Private {
public:
    Private();
    void func1() { ... }
    void func2() { ... }

private:
    void privateFunc() { ... }
    int x;
};

Followed by the definition of the main class.

Foo::Foo() : p(new Private) {
}

Foo::~Foo() = default; // Private is a complete type at this point

void Foo::func1() {
    p->func1();
}

void Foo::func2() {
    p->func2();
}

That is, Foo only calls the implementation bits in Foo::Private.

The main idea to realize is that Foo::Private never calls the functions of Foo. Thus, if we isolate the locking inside Foo, the functionality inside Foo::Private becomes automatically thread safe. The way to accomplish this is simple. First you add a (public) std::mutex m to Foo::Private:
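
#include <mutex>

class Foo::Private {
public:
    std::mutex m; // locked by Foo's forwarding functions, never by Private itself
    // ... everything else as before ...
};

Then you just change the functions of Foo to look like this: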

void Foo::func1() {
    std::lock_guard<std::mutex> guard(p->m);
    p->func1();
}

void Foo::func2() {
    std::lock_guard<std::mutex> guard(p->m);
    p->func2();
}

This accomplishes many things nicely:

  • Lock guards make locks impossible to leak, even if an exception is thrown
  • Foo::Private can pretend that it is single-threaded which usually makes implementation a lot easier

The main drawback of this approach is that the locking is coarse, which may be a problem when squeezing out ultimate performance. But usually you don’t need that.
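
For completeness, a minimal sketch of what this buys us: any number of threads can call into Foo at the same time without further ceremony.

#include <thread>

int main() {
    Foo foo;
    // Both calls are serialized by the mutex inside Foo::Private.
    std::thread t1([&] { foo.func1(); });
    std::thread t2([&] { foo.func2(); });
    t1.join();
    t2.join();
    return 0;
}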

Complexity, where does it come from?

People often wonder why even the simplest of things seem to take so long to implement. Often this is accompanied by the phrase made famous by Jeremy Clarkson: how hard can it be?

Well, let’s find out. As an example, let’s look at a very simple case: creating a shared library that grabs a screenshot from a video file. The problem description is simplicity itself: open the file with GStreamer, seek to a random location and grab the pixels from the buffer. All in all, ten lines of code; it should take a few hours to implement, including unit tests.

Right?

Well, no. The very first problem is selecting a proper screenshot location. It can’t be in the latter half of the video, for instance, because it might then contain spoilers and merely displaying the image could ruin the video for viewers. So let’s instead select some suitable point, like 2/7ths of the way into the clip.

But in order to do that you need to first determine the length of the clip. Fortunately GStreamer provides functionality for this. Less fortunately, some codec/muxer/platform/whatever combinations do not implement it. So now we have the problem of choosing a proper location in a file whose duration we don’t know. To save time and effort, in these cases let’s just grab the screenshot at the ten second mark.
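
In code the fallback might look roughly like this (a sketch against the GStreamer 1.0 API; pipeline setup and error handling omitted, and pick_position is a made-up name):

#include <gst/gst.h>

// Choose the screenshot position: 2/7ths in if the duration is known,
// ten seconds otherwise. The pipeline is assumed to be built and paused.
static gint64 pick_position(GstElement *pipeline) {
    gint64 duration = 0;
    if (gst_element_query_duration(pipeline, GST_FORMAT_TIME, &duration) &&
        duration > 0)
        return duration * 2 / 7;
    return 10 * GST_SECOND; // duration unknown, fall back to ten seconds
}

Seeking there is then a single call to gst_element_seek_simple.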

The question now becomes: what happens if the clip is less than ten seconds long? Then GStreamer would (probably) seek to the end of the file and grab a screenshot there. Videos often end in black, so this might lead to black thumbnails every now and then. Come to think of it, the 2/7ths location might accidentally land on a fade and be all black, too. What we really need is an image analyzer that detects whether the chosen frame is “interesting” or not.

This rabbit hole goes down quite deep so let’s not go there and instead focus on the other part of the problem.

There are two mutually incompatible versions of GStreamer currently in use: 0.10 and 1.0. The two cannot be loaded into the same process at the same time due to interesting technical issues. No matter which one we pick, some client application might be using the other. So we can’t actually link against GStreamer; instead we need to factor this functionality out into a separate executable. We also need to change the system’s global security profile so that every app is allowed to execute this binary.

Having all this functionality we can just fork/exec the binary and wait for it to finish, right?

In theory yes, but multimedia codecs are tricky beasts, especially hardware accelerated ones on mobile platforms. They have a tendency to freeze at any time. So we need to write functionality that spawns the process, monitors its progress and kills it if it stops making progress.
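
A sketch of the monitoring logic (with a simple fixed deadline; a real implementation would track actual progress, and all names here are illustrative):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <signal.h>

// Spawn the helper and give it timeout_sec seconds to finish.
bool run_with_deadline(const char *exe, const char *video, int timeout_sec) {
    pid_t child = fork();
    if (child == 0) {
        execl(exe, exe, video, (char *)nullptr);
        _exit(127); // exec failed
    }
    for (int i = 0; i < timeout_sec * 10; ++i) {
        int status;
        if (waitpid(child, &status, WNOHANG) == child)
            return WIFEXITED(status) && WEXITSTATUS(status) == 0;
        usleep(100 * 1000); // poll every 100 ms
    }
    kill(child, SIGKILL);       // the helper froze, put it down
    waitpid(child, nullptr, 0); // reap the zombie
    return false;
}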

A question we have not yet asked is how the helper process delivers its output to the library. The simple solution is to write the image to a file, but then where should it go? Different applications have different security policies and can access different parts of the file system, so we need a system state parser for that. Or we can do something fancier, such as creating a socket pair between the library and the helper executable and having the helper push the results through that. This makes process spawning more complicated, and we need to define a serialization protocol for this ad hoc transfer.
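
A sketch of the socket pair variant (names invented for illustration): the library creates the pair, the helper inherits one end and writes the image to it.

#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

// Returns an fd the library can read the serialized image from, or -1 on error.
int spawn_helper(const char *exe, const char *video, pid_t *child_out) {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
        return -1;
    pid_t child = fork();
    if (child == 0) {
        close(fds[0]);               // the helper keeps only its own end
        dup2(fds[1], STDOUT_FILENO); // helper writes the image to stdout
        execl(exe, exe, video, (char *)nullptr);
        _exit(127);
    }
    close(fds[1]); // the library keeps only the reading end
    *child_out = child;
    return fds[0];
}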

I could go on but I think the point has been made abundantly clear.

But can we make it faster?

A common step in a software developer’s life is building packages. This happens both directly on your own machine and remotely when waiting for the CI server to test your merge requests.

As an example, let’s look at the libcolumbus package. It is a common small-to-medium sized C++ project with a couple of dependencies. Compiling the source takes around 10 seconds, whereas building the corresponding package takes around three minutes. All things considered this seems like a tolerable delay.

But can we make it faster?

The first step in any optimization task is measurement. To do this we simulated a package builder by building the source code in a chroot. It turns out that configuring the source takes one second, compiling it takes around 12 seconds and installing build dependencies takes 2 minutes 29 seconds. These tests were run on an Intel i7 with 16 GB of RAM and an SSD. We used CMake’s Make backend with 4 parallel processes.

Clearly, reducing the last part brings the biggest benefits. One simple approach is to store a copy of the chroot after dependencies are installed but before package building has started. This is a one-liner:

sudo btrfs subvolume snapshot -r chroot depped-chroot

Now we can do anything with the chroot and always roll back by deleting the modified chroot and restoring it from the snapshot. Here we use -r so the snapshot is read-only and can’t be changed accidentally.

With this setup, prepping the chroot is, effectively, a zero time operation. Thus we have cut down total build time from 162 seconds to 13, which is a 12-fold performance improvement.

But can we make it faster?

After this fix the longest single step is the compilation. One of the most efficient ways of cutting down compile times is CCache, so let’s use that. For greater separation of concerns, let’s put the CCache repository on its own subvolume.

sudo btrfs subvolume create chroot/root/.ccache

We build the package once and then make a snapshot of the cache.

sudo btrfs subvolume snapshot -r chroot/root/.ccache ccache

Now we can delete the whole chroot. Reassembling it is simple:

sudo btrfs subvolume snapshot depped-chroot chroot
sudo btrfs subvolume snapshot ccache chroot/root/.ccache

The latter command gave an error about incorrect ioctls, but the same effect can be achieved with a bind mount.

When doing this the compile time drops to 0.6 seconds, which puts the total build at under two seconds. Compared to the original 162 seconds, that means we can build packages roughly 100 times faster.

But can we make it faster?

At this point all individual steps take a second or so. Optimizing them further would yield negligible performance improvements. In actual package builds there are other steps that can’t be easily optimized, such as running the unit test suite, running Lintian, gathering and verifying the package and so on.

If we look a bit deeper we find that these are all, effectively, single process operations. (Some build systems, such as Meson, will run unit tests in parallel. They are in the minority, though.) This means that package builders are running processes which consume only one CPU most of the time. According to usually reliable sources package builders are almost always configured to work on only one package at a time.

Having a 24-core monster builder run single-threaded executables consecutively does not make much sense. Fortunately this task parallelizes trivially: just build several packages at the same time. Since we could achieve a 100-fold improvement for a single build and we can run 24 of them at the same time, with a bit of effort we can achieve the same results 2400 times faster. This is roughly equivalent to doing the job of an entire data center on one desktop machine.

The small print

The numbers on this page are slightly optimistic. However, the main reduction in build time, achieved with chroot snapshotting, still stands.

In reality this approach would require some tuning; for example, you would not want to build LibreOffice with -j 1. Keeping the snapshotted chroots up to date also requires some smarts, but these are all solvable engineering problems.

Build speed: now with measurements

There have been several posts in this blog about compile speed. However most have been about theory. This time it’s all about measurements.

I took the source code of Scribus, which is a largeish C++ application, and looked at how much faster I could make it compile. There are three different configurations to test. The first is building with default settings out of the box. The second contains changes that can be done without touching any source code, meaning building with the Ninja backend instead of Make and linking with Gold instead of ld. The third configuration adds precompiled headers to the second configuration.

The measurements turned out to have lots of variance, which I could not really nail down. However, it seemed to affect all configurations in the same way at the same time, so the results should be comparable. All tests were run on a 4 core laptop with 4 GB of RAM. Make was run with '-j 6' as that is the default value of Ninja.

Default:    11-12 minutes
Ninja+Gold: ~9 minutes
PCH:        7 minutes

We can see that with a bit of work the compile time can be cut almost in half. Enabling PCH does not require changing any existing source files (though you’ll get slightly better performance if you do). All in all it takes less than 100 lines of CMake code to enable precompiled headers, and half of that is duplicating functionality that CMake should be exposing already. For further info, see this bug.

Is it upstream? Can I try it? Will it work on my project?

The patch is not upstreamed yet, because it is not clean enough. However, you can check out most of it in this merge request to Unity. In Unity’s case the speedup was roughly 40%, though only one library’s build time was measured. The impact on total build time is probably smaller.

Note that you can’t just grab the code and expect magic speedups. You have to select which headers to precompile and so on.

Finally, for a well tuned code base, precompiled headers should only give around a 10-20% speed boost. If you get more, it probably means that you have an #include maze in your header files, and you should get that fixed sooner rather than later.

Dealing with bleeding edge software with btrfs snapshots

Developing with the newest of the new packages is always a bit tricky. Every now and then they break in interesting ways. Sometimes they corrupt the system so much that downgrading becomes impossible. Extreme circumstances may corrupt the system’s package database and so on. Traditionally fixing this has meant reinstalling the entire system, which is unpleasant and time consuming. Fortunately there is now a better way: snapshotting the system with btrfs.

The following guide assumes that you are running btrfs as your root file system. The newest Quantal can boot off a btrfs root, but there may be issues, so please read the documentation in the wiki.

The basic concept in snapshotting your system is the subvolume. It is kind of like a subpartition inside the main btrfs partition. By default Ubuntu’s installer creates a btrfs root partition with two subvolumes, called @ and @home. The first one is mounted as the root file system and the second as the home directory.

Suppose you are going to do something really risky, and want to preserve your system. First you mount the raw btrfs partition somewhere:

sudo mkdir /mnt/root
sudo mount /dev/sda1 /mnt/root
cd /mnt/root

Here /dev/sda1 is your root partition. You can mount it like this even though the subvolumes are already mounted. If you do an ls, you see two subdirectories, @ and @home. Snapshotting is simple:

sudo btrfs subvolume snapshot @ @snapshot-XXXX

This takes maybe one second, and when the command returns the system is secured. You are now free to trash your system in whatever way you want, though you might want to unmount /mnt/root first so you don’t accidentally destroy your snapshots.

Restoring the snapshot is just as simple. Mount /mnt/root again and do:

sudo mv @ @broken
sudo btrfs subvolume snapshot @snapshot-XXXX @

If you are sure you don’t need @snapshot-XXXX any more, you can simply rename it to @ instead of snapshotting it. You can do this even if you are booted into the system, i.e. are using @ as your current root file system.

Reboot your machine and your system has been restored to the state it was in when you ran the snapshot command. As an added bonus your home directory does not roll back, but retains all changes made during the trashing, which is what you want most of the time. If you want to roll back home as well, just snapshot it at the same time as the root directory.

You can get rid of useless and broken snapshots with this command:

sudo btrfs subvolume delete @useless-snapshot

You can’t remove subvolumes with rm -r, even if run with sudo.