Canonical Voices

Posts tagged with 'c++'

niemeyer

As detailed in the preliminary release of qml.v1 for Go a couple of weeks ago, my next task was to finish the improvements in its OpenGL API. Good progress has happened since then, and the new API is mostly done and available for experimentation. At the same time, there’s still work to do on polishing edges and on documenting the extensive API. This blog post aims to present the improvements made, their internal design, and also to invite help for finishing the pending details.

Before diving into the new, let’s first have a quick look at what a Go application using OpenGL looks like with qml.v0. This is an excerpt from the old painting example:

import (
        "gopkg.in/qml.v0"
        "gopkg.in/qml.v0/gl"
)

func (r *GoRect) Paint(p *qml.Painter) {
        width := gl.Float(r.Int("width"))
        height := gl.Float(r.Int("height"))
        gl.Enable(gl.BLEND)
        // ...
}

The example imports both the qml and the gl packages, and then defines a Paint method that makes use of the GL types, functions, and constants from the gl package. It looks quite reasonable, but there are a few relevant shortcomings.

One major issue with the old API is that it offers no means to tell, even at a basic level, which version of the OpenGL API the code is written against, because the available functions and constants are the complete set extracted from the gl.h header. For example, OpenGL 2.0 has the GL_ALPHA and GL_ALPHA4/8/12/16 constants, but OpenGL ES 2.0 has only GL_ALPHA. This simplistic choice was a good start, but comes with a number of undesired side effects:

  • Many trivial errors that should be compile errors fail at runtime instead
  • When the code does work, the developer is not sure about which API version it is targeting
  • Symbols for unsupported API versions may not be available for linking, even if unused

That last point also provides a hint of another general issue: portability. Every system has particularities for how to load the proper OpenGL API entry points. For example, which libraries should be linked with, where they are in the local system, which entry points they support, etc.

So this is the stage for the improvements that are happening. Before detailing the solution, let’s have a look at the new painting example in qml.v1, which makes use of the improved API:

import (
        "gopkg.in/qml.v1"
        "gopkg.in/qml.v1/gl/2.0"
)

func (r *GoRect) Paint(p *qml.Painter) {
        gl := GL.API(p)
        width := float32(r.Int("width"))
        height := float32(r.Int("height"))
        gl.Enable(GL.BLEND)
        // ...
}

With the new API, rather than importing a generic gl package, a version-specific gl/2.0 package is imported under the name GL. That choice of package name allows preserving familiar OpenGL terms for both the functions and the constants (gl.Enable and GL.BLEND, for example). Inside the new Paint method, the gl value obtained from GL.API holds only the functions that are defined for the specific OpenGL API version imported, and the constants in the GL package are also constrained to those available in the given version. Any improper references become build time errors.
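
For instance, with an ES-2.0-specific package imported (the exact import path here is an assumption for illustration), a reference to a constant that this version lacks fails at build time rather than at runtime:

import (
        GL "gopkg.in/qml.v1/gl/es2.0"
)

var _ = GL.ALPHA  // compiles: OpenGL ES 2.0 defines GL_ALPHA
var _ = GL.ALPHA4 // build error: ALPHA4 is not defined in the ES 2.0 package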

To support all the various OpenGL versions and profiles, there are 23 independent packages right now. These packages are of course not being hand-built. Instead, they are generated all at once by a tool that gathers information from various sources. The process can be tersely described as:

  1. A ragel-based parser processes Qt’s qopenglfunctions_*.h header files to collect version-specific functions
  2. The Khronos OpenGL Registry XML is parsed to collect version-specific constants
  3. A number of tweaks defined in the tool’s code is applied to the state
  4. Packages are generated by feeding the state to text templates

Version-specific functions might also be extracted from the Khronos Registry, but there’s a good reason to use information from the Qt headers instead: Qt already solved the portability issue. It works on several platforms, and if somebody is using QML successfully, it means Qt is already using that system’s OpenGL capabilities. So rather than designing a new mechanism to solve the same problem, the qml package now leverages Qt for resolving all the GL function entry points and the linking against available libraries.

Going back to the example, it also demonstrates another improvement that comes with the new API: plain types that do not carry further meaning, such as gl.Float and gl.Int, were replaced by their native counterparts, float32 and int32. Richer types such as Enum were preserved, and, as suggested by David Crawshaw, some new types were also introduced to represent entities such as programs, shaders, and buffers. The custom types are all available under the common gl/glbase package that all version-specific packages make use of.
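
As a rough sketch of how those typed values look in practice (assuming the usual shader-setup entry points are exposed by the version-specific packages), the distinct types prevent handles from being mixed up, which plain uint32 values would allow:

program := gl.CreateProgram()               // glbase.Program rather than uint32
shader := gl.CreateShader(GL.VERTEX_SHADER) // glbase.Shader rather than uint32
gl.CompileShader(shader)
gl.AttachShader(program, shader) // swapping the two arguments is a build error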

So this is all working and available for experimentation right now. What is left to do is almost exclusively improving the list of function tweaks, with two goals in mind. These are highlighted below, as they are the areas where help would be most appreciated, mainly due to the sheer footprint of the API.

Documentation importing

There are a few hundred functions to document, but a large number of these are variations of the same function. The previous approach was to simply link to the upstream documentation, but it would be much better to have polished documentation attached to the functions themselves. This is the new documentation for MultMatrixd, for example. For now the documentation is being imported manually, but the final process will likely consist of some automation and some manual polishing.

Function polishing

The standard C OpenGL API can often be translated automatically (see BindBuffer or BlendColor), but in other cases the function prototype has to be tweaked to become friendly to Go. The translation tool already has good support for defining most of these tweaks independently from the rest of the machinery. For example, the following logic changes the ShaderSource function from its standard form into something convenient in Go:

name: "ShaderSource",
params: paramTweaks{
        "glstring": {rename: "source", retype: "...string"},
        "length":   {remove: true},
        "count":    {remove: true},
},
before: `
        count := len(source)
        length := make([]int32, count)
        glstring := make([]unsafe.Pointer, count)
        for i, src := range source {
                length[i] = int32(len(src))
                if len(src) > 0 {
                        glstring[i] = *(*unsafe.Pointer)(unsafe.Pointer(&src))
                } else {
                        glstring[i] = unsafe.Pointer(uintptr(0))
                }
        }
`,

Other cases may be much simpler. The MultMatrixd tweak, for instance, simply ensures that the parameter has the proper length, and injects the documentation:

name: "MultMatrixd",
before: `
        if len(m) != 16 {
                panic("parameter m must have length 16 for the 4x4 matrix")
        }
`,
doc: `
        multiplies the current matrix with the provided matrix.
        ...
`,

As an even simpler example, CreateProgram is tweaked so that it returns a glbase.Program instead of the default uint32:

name:   "CreateProgram",
result: "glbase.Program",

That kind of polishing is where contributions would be most appreciated right now. One good way to help is to pick a range of functions, import and polish their documentation manually, and while doing that keep an eye out for tweaks that the function requires based on its documentation and prototype.

If you’d like to help somehow, or just ask questions or report your experience with the new API, please join us in the project mailing list.

niemeyer

As part of the on going work on Ubuntu Touch phones, I was invited to contribute a Go package to interface with ubuntuoneauth, a C++ and Qt library that authenticates against Ubuntu One using the system account made available by the phone owner. The details of that library and its use case are not interesting for most people right now, but the work to interface with it is a good example to talk about because, besides the result (uoneauth) being an external and independent Go package that extends the qml package, ubuntuoneauth is not a QML library, but rather a plain Qt library. Some of the callbacks use even traditional C++ types that do not inherit from QObject and have no Qt metadata, so offering that functionality from Go nicely takes a bit more work.

What follows are some of the highlights of that integration logic, to serve as a reference for similar extensions in the future. Note that if your interest is in creating QML applications with Go, none of this is necessary and the documentation is a better place to start.

As an initial step, the following examples demonstrate how the original C++ library and the Go package being designed are meant to be used. The first snippet contains the relevant logic taken out of the examples/signing-main.cpp file, tweaked for clarity:

int main(int argc, char *argv[]) {
    (...)
    UbuntuOne::SigningExample *example = new UbuntuOne::SigningExample(&a);
    QTimer::singleShot(0, example, SLOT(doExample()));
    (...)
}

SigningExample::SigningExample(QObject *parent) : QObject(parent) {
    QObject::connect(&service, SIGNAL(credentialsFound(const Token&)),
                     this, SLOT(handleCredentialsFound(Token)));
    QObject::connect(&service, SIGNAL(credentialsNotFound()),
                     this, SLOT(handleCredentialsNotFound()));
    (...)
}

void SigningExample::doExample() {
    service.getCredentials();
}

void SigningExample::handleCredentialsFound(Token token) {
    QString authHeader = token.signUrl(url, QStringLiteral("GET"));
    (...)
}
 
void SigningExample::handleCredentialsNotFound() {
    qDebug() << "No credentials were found.";
}

The example hooks into various signals in the service, one for each possible outcome, and then calls the service’s getCredentials method to initiate the process. If successful, the credentialsFound signal is emitted with a Token value that is able to sign URLs, returning an HTTP header that can authenticate a request.

That same process is more straightforward when using the library from Go:

service := uoneauth.NewService(engine)
token, err := service.Token()
if err != nil {
        return err
}
signature := token.HeaderSignature("GET", url)

Again, this gets a service, obtains a token from it, and signs a URL, all in a “forward”, straight-line style rather than via callbacks.

So the goal is turning the initial C++ workflow into this simpler Go API. A good next step is looking into how the NewService function is implemented:

func NewService(engine *qml.Engine) *Service {
        s := &Service{reply: make(chan reply, 1)}

        qml.RunMain(func() {
                s.obj = *qml.CommonOf(C.newSSOService(), engine)
        })

        runtime.SetFinalizer(s, (*Service).finalize)

        s.obj.On("credentialsFound", s.credentialsFound)
        s.obj.On("credentialsNotFound", s.credentialsNotFound)
        s.obj.On("twoFactorAuthRequired", s.twoFactorAuthRequired)
        s.obj.On("requestFailed", s.requestFailed)
        return s
}

NewService creates the service instance, and then asks the qml package to run some logic in the main Qt thread via RunMain. This is necessary because a number of operations in Qt, including the creation of objects, are associated with the currently running thread. Using RunMain in this case ensures that the creation of the C++ object performed by newSSOService happens in the main Qt thread (the “GUI thread”).

Then, the address of the C++ UbuntuOne::SSOService type is handed to CommonOf to obtain a Common value that implements all the common logic supported by C++ types that inherit from QObject. This is an unsafe operation as there’s no way for CommonOf to guarantee that the provided address indeed points to a C++ value with a type that inherits from QObject, so the call site must necessarily import the unsafe package to provide the unsafe.Pointer parameter. That’s not a problem in this context, though, since such extension packages are necessarily dealing with unsafe logic in either case.

The obtained Common value is then assigned to the service’s obj field. In most cases, that value is instead assigned to an anonymous Common field, as done in qml.Window for example. Doing so means qml.Window values implement the qml.Object interface, and may be manipulated as a generic object. For the new Service type, though, the fact that this is a generic object won’t be disclosed for the moment, and instead a simpler API will be offered.

Following the function logic further, a finalizer is then registered to ensure the C++ value gets deallocated if the developer forgets to Close the service explicitly. When doing that, it’s important to ensure the Close method drops the finalizer when called, not only to facilitate the garbage collection of the object, but also to avoid deallocating the same value twice.
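
As a minimal sketch of what that looks like, assuming a hypothetical delSSOService C helper that mirrors newSSOService (the real deallocation entry point in uoneauth may be named differently):

func (s *Service) Close() {
        // Drop the finalizer so the C++ value is not deallocated twice.
        runtime.SetFinalizer(s, nil)
        qml.RunMain(func() {
                // Destruction, like creation, must happen in the main Qt thread.
                C.delSSOService(unsafe.Pointer(s.obj.Addr()))
        })
}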

The next four lines in the function should be straightforward: they register methods of the service to be called when the relevant signals are emitted. Here is the implementation of two of these methods:

func (s *Service) credentialsFound(token *Token) {
        s.sendReply(reply{token: token})
}

func (s *Service) credentialsNotFound() {
        s.sendReply(reply{err: ErrNoCreds})
}

func (s *Service) sendReply(r reply) {
        select {
        case s.reply <- r:
        default:
                panic("internal error: multiple results received")
        }
}

Handling the signals consists of just sending the reply over the channel to whoever initiated the request. The select statement in sendReply just ensures that the invariant of having a reply per request is not broken without being noticed, as that would require a slightly different design.

There’s one more point worth observing in this logic: the token value received as a parameter in credentialsFound was already converted into the local Token type. In most cases, this is unnecessary as the parameter is directly useful as a qml.Object or as another native type (int, etc), but in this case UbuntuOne::Token is a plain C++ type that does not inherit from QObject, so the default signal parameter that would arrive in the Go method has only a type name and the value address.

Instead of taking the plain value, it is turned into a more useful one by registering a converter with the qml package:

func convertToken(engine *qml.Engine, obj qml.Object) interface{} {
        // Copy as the one held by obj is passed by reference.
        addr := unsafe.Pointer(obj.Property("plainAddr").(uintptr))
        token := &Token{C.tokenCopy(addr)}
        runtime.SetFinalizer(token, (*Token).finalize)
        return token
}

func init() {
        qml.RegisterConverter("Token", convertToken)
}

Given that setup, the Service.Token method may simply call the getCredentials method on the underlying UbuntuOne::SSOService object to fire a request, and block waiting for a reply:

func (s *Service) Token() (*Token, error) {
        s.mu.Lock()
        qml.RunMain(func() {
                C.ssoServiceGetCredentials(unsafe.Pointer(s.obj.Addr()))
        })
        reply := <-s.reply
        s.mu.Unlock()
        return reply.token, reply.err
}

The lock ensures that a second request won’t take place before the first one is done, forcing the correct sequencing of replies. Given the current logic, this isn’t strictly necessary since all requests are equivalent, but this will remain correct even if other methods from SSOService are added to this interface.

The returned Token value may then be used to sign URLs by simply calling the respective underlying method:

func (t *Token) HeaderSignature(method, url string) string {
        cmethod, curl := C.CString(method), C.CString(url)
        cheader := C.tokenSignURL(t.addr, cmethod, curl, 0)
        defer freeCStrings(cmethod, curl, cheader)
        return C.GoString(cheader)
}

No care has to be taken with qml.RunMain in this case, because UbuntuOne::Token is a simple C++ type that does not interact with the Qt machinery.

This completes the journey of creating a package that provides access to the ubuntuoneauth library from Go code. In many cases it’s a better idea to simply rewrite the logic in Go, but there will be situations similar to this library, where either rewriting would take more time than reasonable, or perhaps delegating the maintenance and stabilization of the underlying logic to a different team is the best thing to do. In those cases, an approach such as the one described here can easily solve the problem.

niemeyer

As recently announced, my latest endeavor at Canonical is to enable graphical client-side development with the Go language via a new qml Go package that integrates the language with Qt’s QML framework.

The QML framework solves the problem of designing graphical applications via a language that offers a convenient mix of declarative and procedural features. As a very simple example, if the following QML content is loaded by itself, it will draw a square centered inside a window:

import QtQuick 2.0

Rectangle {
        width: 640
        height: 280
        color: "black"

        Rectangle {
                width: 250; height: 250
                anchors.centerIn: parent
                color: "red"

                MouseArea {
                        anchors.fill: parent
                        onClicked: console.log("Stop poking me!")
                }
        }
}

As expected, it would look similar to:

Centralized Rectangle

The red rectangle will remain centered as the window is resized, and will print a message every time it is clicked. The clicked signal was hooked into logic implemented in JavaScript in this case, but it might as well have been hooked into custom Go logic with the same ease using the qml package. With very minor changes, it could also be something much more interesting than a red rectangle, such as a modern web browser:

import QtQuick 2.0
import QtWebKit 3.0

Rectangle {
        width: 640
        height: 280
        color: "black"

        WebView {
                width: 250; height: 250
                anchors.centerIn: parent
                url: "http://golang.org"
        }
}

The result would be:

Web Page Rectangle

It’s worth noting that this wasn’t just a screenshot of a web page, but rather a tiny fully functional web browser accessing a web page, easily embedded into the QML application to satisfy whatever browsey needs the author might have.

Being able to leverage all these existing building blocks so comfortably is what makes a graphics platform attractive to develop on, and is what inspired the ongoing effort to have that platform working under the Go language. As we can observe from some of the work published, that side of things seems to be going well.

Now, the next level up is to enable people to create such building blocks without leaving the Go language. The initial step towards that is already committed to the code repository. It enables Go types to be created and seamlessly integrated into the QML language. For example, consider this simple Go type:

type GoType struct {
        Text string
}

func (v *GoType) OnTextChanged() {
        fmt.Println("Text changed...")
}

If this type is made available to QML content via the qml.RegisterTypes function, it may then be used as any other native type. This would work as a QML file, for instance:

import GoExtensions 1.0

GoType {
        text: "Have you signed up for GopherCon yet?"
}
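
For reference, the registration itself might look something like the following sketch, which is based on the qml.v1 form of RegisterTypes and may differ in detail from the development version described in this post:

qml.RegisterTypes("GoExtensions", 1, 0, []qml.TypeSpec{{
        Init: func(v *GoType, obj qml.Object) {},
}})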

There’s a relevant detail, though: this type has no visible content by itself. This means that displaying something must be done via interactions with other QML items, or via external systems (dbus, for example).

Solving that problem has been one of my objectives in the last month, and although it’s not yet publicly available, the work is in a good-enough state that I feel comfortable talking about it.

As described, the main goal is enabling these custom types to paint. This will be achieved by exposing a Go package that offers an OpenGL API, and defining methods that the custom types have to implement for being able to render at appropriate times. Although the details aren’t finalized, the current draft can run familiar OpenGL code similar to the following:

func (v *GoType) Paint(p *qml.Painter) {
        obj := p.Object()
        width := gl.Float(obj.Int("width"))
        height := gl.Float(obj.Int("height"))

        gl.Enable(gl.BLEND)
        gl.BlendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
        gl.Color4f(1.0, 1.0, 1.0, 0.8)
        gl.Begin(gl.QUADS)
        gl.Vertex2f(0, 0)
        gl.Vertex2f(width, 0)
        gl.Vertex2f(width, height)
        gl.Vertex2f(0, height)
        gl.End()

        gl.LineWidth(2.5)
        gl.Color4f(0.0, 0.0, 0.0, 1.0)
        gl.Begin(gl.LINES)
        gl.Vertex2f(0, 0)
        gl.Vertex2f(width, height)
        gl.Vertex2f(width, 0)
        gl.Vertex2f(0, height)
        gl.End()
}

One interesting point to realize is that, again, this isn’t just rendering into a lonely OpenGL context. The rendered content goes into a framebuffer that lives within the overall QML scene graph, and has the same integration capabilities as every other QML item.

In the following animated image, for example, the white square was generated with the above GL code, and was rendered next to other native items with this QML source:

Go + OpenGL Example

Note the smooth blending and proper overlapping established in the scene graph based on the ordering of elements in the QML source. The animation was also driven by QML without special handling of the content rendered from Go.

Another relevant point in terms of the integration with Go is that the GL code demonstrated above is considered old-school these days. The modern version uses fewer calls, making better use of the graphics card memory by transferring, tracking, and handling data such as vertexes in contiguous arrays. This reduces the impact of the cross-context calls that depart and rejoin the Go runtime, and can unlock interesting use cases that the overhead might otherwise prevent.

In terms of availability of these features, we’re about to enter a holiday season, so they should be polished enough for a first public review at some point in January. Looking forward to it.

UPDATE (2014-01-27): These features are now publicly available. The painting example demonstrates the exact scenario pictured above.

UPDATE (2014-02-18): New screencast demonstrates some of the OpenGL features of Go QML.

Jussi Pakkanen

One of the main ways of reducing code complexity (and thus compile times) in C/C++ is forward declaration. The most basic form of it is this:

class Foo;

This tells the compiler that there will be a class called Foo but it does not specify it in more detail. With this declaration you can’t deal with Foo objects themselves but you can form pointers and references to them.

Typically you would use forward declarations in this manner.

class Bar;

class Foo {
  void something();
  void method1(Bar *b);
  void method2(Bar &b);
};

Correspondingly if you want to pass the objects themselves, you would typically do something like this.

#include"Bar.h"

class Foo {
  void something();
  void method1(Bar b);
  Bar method2();
};

This makes sense because you need to know the binary layout of Bar in order to pass it properly to and from a method. Thus a forward declaration is not enough: you must include the full header, otherwise you can’t use these methods of Foo.

But what if some class does not use either of the methods that deal with Bars? What if it only calls the method something? It would still need to parse all of Bar (and everything it #includes) even though it never uses Bar objects. This seems inefficient.

It turns out that including Bar.h is not necessary, and you can instead do this:

class Bar;

class Foo {
  void something();
  void method1(Bar b);
  Bar method2();
};

You can define functions taking or returning full objects with forward declarations just fine. The catch is that those users of Foo that use the Bar methods need to include Bar.h themselves. Correspondingly, those that do not deal with Bar objects themselves never need to include Bar.h, even indirectly. If you ever find out that they do, it is proof that your #includes are not minimal. Fixing these include chains will make your source files more isolated and decrease compile times, sometimes dramatically.

You only need to #include the full definition of Bar if you need:

  • to use its services (constructors, methods, constants, etc)
  • to know its memory layout

In practice the latter means that you need to either call or implement a function that takes a Bar object rather than a pointer or reference to it.

For other uses a forward declaration is sufficient.

Post scriptum

The discussion above holds even if Foo and Bar are templates, but keeping template classes as clean can be a lot harder and may in some instances be impossible. You should still try to minimize header includes as much as possible.

Jussi Pakkanen

With the release of C++11 something quite extraordinary has happened. Its focus on usable libraries, value types and other niceties has turned C++, conceptually, into a scripting language.

This seems like a weird statement to make, so let’s define exactly what we mean by that. Scripting languages differ from classical compiled languages such as C in the following ways:

  • no need to manually manage memory
  • expressive syntax, complex functionality can be implemented in just a couple of lines of code
  • powerful string manipulation functions
  • large standard library

As of C++11 all these hold true for C++. Let’s examine this with a simple example. Suppose we want to write a program that reads all lines from a file and writes them in a different file in sorted order. This is classical scripting language territory. In C++11 this code would look something like the following (ignoring error cases such as missing input arguments).

#include<string>
#include<vector>
#include<algorithm>
#include<fstream>

using namespace std;

int main(int argc, char **argv) {
  ifstream ifile(argv[1]);
  ofstream ofile(argv[2]);
  string line;
  vector<string> data;
  while(getline(ifile, line)) {
    data.push_back(line);
  }
  sort(data.begin(), data.end());
  for(const auto &i : data) {
    ofile << i << std::endl;
  }
  return 0;
}

That is some tightly packed code. Ignoring include boilerplate and the like leaves us with roughly ten lines of code. If you were to do this with plain C using only its standard library, merely implementing the getline functionality reliably would take more lines of code. Not to mention it would be tricky to get right.

Other benefits include:

  • every single line of code is clear, understandable and expressive
  • memory leaks cannot happen
  • the code could easily be reworked into a library function
  • smaller memory footprint due to not needing a VM
  • compile time with -O3 is roughly the same as Python VM startup and has to be done only once
  • faster than any non-JITted scripting language

Now, obviously, this won’t mean that scripting languages will disappear any time soon (you can have my Python when you pry it from my cold, dead hands). What it does do is indicate that C++ is quite usable in fields one traditionally has not expected it to be.

Jussi Pakkanen

Pimpl is a common idiom in C++. It means hiding the implementation details of a class with a construct that looks like this:

class pimpl;

class Thing {
private:
  pimpl *p;
public:
 ...
};

This cuts down on compilation time because you don’t have to #include all headers required for the implementation of this class. The downside is that p needs to be dynamically allocated in the constructor, which means a call to new. For frequently constructed objects this can be slow and lead to memory fragmentation.

Getting rid of the allocation

It turns out that you can get rid of the dynamic allocation with a little trickery. The basic approach is to reserve space in the parent object with, say, a char array. We can then construct the pimpl object there with placement new and destroy it by calling the destructor explicitly.

A header file for this kind of a class looks something like this:

#ifndef PIMPLDEMO_H
#define PIMPLDEMO_H

#define IMPL_SIZE 24

class PimplDemo {
private:
  char data[IMPL_SIZE];

public:
  PimplDemo();
  ~PimplDemo();

  int getNumber() const;
};

#endif

IMPL_SIZE is the size of the pimpl object. It needs to be manually determined. Note that the size may be different on different platforms.

The corresponding implementation looks like this.

#include"pimpldemo.h"
#include<vector>

using namespace std;

class priv {
public:
  vector<int> foo;
};

#define P_DEF priv *p = reinterpret_cast<priv*>(data)
#define P_CONST_DEF const priv *p = reinterpret_cast<const priv*>(data)

PimplDemo::PimplDemo() {
  static_assert(sizeof(priv) == sizeof(data), "Pimpl array has wrong size.");
  P_DEF;
  new(p) priv;
  p->foo.push_back(42); // Just for show.
}

PimplDemo::~PimplDemo() {
  P_DEF;
  p->~priv();
}

int PimplDemo::getNumber() const {
  P_CONST_DEF;
  return (int)p->foo.size();
}

Here we define two macros that create a variable for accessing the pimpl. At this point we can use it just as if it were defined in the traditional way. Note the static assert that checks, at compile time, that the space we have reserved for the pimpl matches what the pimpl actually requires.

We can test that it works with a sample application.

#include<cstdio>
#include<vector>
#include"pimpldemo.h"

int main(int argc, char **argv) {
  PimplDemo p;
  printf("Should be 1: %d\n", p.getNumber());
  return 0;
}

The output is 1 as we would expect. The program is also Valgrind clean so it works just the way we want it to.

When should I use this technique?

Never!

Well, ok, never is probably a bit too strong. However this technique should be used very sparingly. Most of the time the new call is insignificant. The downside of this approach is that it adds complexity to the code. You also have to keep the backing array size up to date as you change the contents of the pimpl.

You should only use this approach if you have an object in the hot path of your application and you really need to squeeze the last bit of efficiency out of your code. As a rough guide only about 1 of every 100 classes should ever need this. And do remember to measure the difference before and after. If there is no noticeable improvement, don’t do it.

niemeyer

Here is a small programming brain teaser for the weekend:

Assume uf is an unsigned integer with 64 bits that holds the IEEE-754 representation for a binary floating point number of that size.

The questions are:

1. How to tell if uf represents an integer number?

2. How to serialize the absolute value of such an integer number in the minimum number of bytes possible, using big-endian ordering and the 8th bit as a continuation flag? For example, float64(1<<70 + 3<<21) serializes as:

"\x81\x80\x80\x80\x80\x80\x80\x83\x80\x80\x00"

The background for this problem is that the current draft of the strepr specification mentions that serialization. Some languages, such as Python and Ruby, implement transparent arbitrary precision integers, and that makes implementing the specification easier.

For example, here is a simple Python interactive session that arrives at the result provided above exploring the native integer representation.

>>> f = float((1<<70) + (3<<21))
>>> v = int(f)
>>> l = [v&0x7f]
>>> v >>= 7
>>> while v > 0:
...     l.append(0x80 | (v&0x7f))
...     v >>= 7
... 
>>> l.reverse()
>>> "".join("%02x" % i for i in l)
'8180808080808083808000'

Python makes the procedure simpler because it is internally converting the float into an integer of appropriate precision via standard C functions, and then offering bit operations on the resulting value.

The suggested brain teaser can be efficiently solved using just the IEEE-754 representation, though, and it’s relatively easy because the problem is being constrained to the integer space.
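
For the first question, here is one straightforward Go approach that leans on the standard library rather than on direct bit manipulation (the bit-level solution is the part the teaser is really after):

// isInt reports whether uf holds the IEEE-754 representation of an integer.
func isInt(uf uint64) bool {
        f := math.Float64frombits(uf)
        // NaN fails the comparison below; infinities are excluded explicitly.
        return !math.IsInf(f, 0) && math.Floor(f) == f
}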

A link to an implementation will be provided next week.

UPDATE: The logic is now available as part of the reference implementation of strepr.

niemeyer

Note: This is a candidate version of the specification. This note will be removed once v1 is closed, and any changes will be described at the end. Please get in touch if you’re implementing it.


Introduction

This specification defines strepr, a stable representation that enables computing hashes and cryptographic signatures out of a defined set of composite values that is commonly found across a number of languages and applications.

Although the defined representation is a serialization format, it isn’t meant to be used as a traditional one. It may not be seen entirely in memory at once, or written to disk, or sent across the network. Its role is specifically in aiding the generation of hashes and signatures for values that are serialized via other means (JSON, BSON, YAML, HTTP headers or query parameters, configuration files, etc).

The format is designed with the following principles in mind:

Understandable — The representation must be easy to understand to increase the chances of it being implemented correctly.

Portable — The defined logic works properly when the data is being transferred across different platforms and implementations, independently from the choice of protocol and serialization implementation.

Unambiguous — As a natural requirement for producing stable hashes, there is a single way to process any supported value being held in the native form of the host language.

Meaning-oriented — The stable representation holds the meaning of the data being transferred, not its type. For example, the number 7 must be represented in the same way whether it’s being held in a float64 or in a uint16.


Supported values

The following values are supported:

  • nil: the nil/null/none singleton
  • bool: the true and false singletons
  • string: raw sequence of bytes
  • integers: positive, zero, and negative integer numbers
  • floats: IEEE754 binary floating point numbers
  • list: sequence of values
  • map: associative value→value pairs


Representation

nil = 'z'

The nil/null/none singleton is represented by the single byte 'z' (0x7a).

bool = 't' / 'f'

The true and false singletons are represented by the bytes 't' (0x74) and 'f' (0x66), respectively.

unsigned integer = 'p' <value>

Positive and zero integers are represented by the byte 'p' (0x70) followed by the variable-length encoding of the number.

For example, the number 131 is always represented as {0x70, 0x81, 0x03}, independently from the type that holds it in the host language.

negative integer = 'n' <absolute value>

Negative integers are represented by the byte 'n' (0x6e) followed by the variable-length encoding of the absolute value of the number.

For example, the number -131 is always represented as {0x6e, 0x81, 0x03}, independently from the type that holds it in the host language.

string = 's' <num bytes> <bytes>

Strings are represented by the byte 's' (0x73) followed by the variable-length encoding of the number of bytes in the string, followed by the specified number of raw bytes. If the string holds a list of Unicode code points, the raw bytes must contain their UTF-8 encoding.

For example, the string hi is represented as {0x73, 0x02, 'h', 'i'}.

Due to the complexity involved in Unicode normalization, it is not required for the implementation of this specification. Consequently, Unicode strings that if normalized would be equal may have different stable representations.

binary float = 'd' <binary64>

32-bit or 64-bit IEEE754 binary floating point numbers that are not holding integers are represented by the byte 'd' (0x64) followed by the big-endian 64-bit IEEE754 binary floating point encoding of the number.

There are two exceptions to that rule:

1. If the floating point value is holding a NaN, it must necessarily be encoded by the following sequence of bytes: {0x64, 0x7f, 0xf8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}. This ensures all NaN values have a single representation.

2. If the floating point value is holding an integer number it must instead be encoded as an unsigned or negative integer, as appropriate. Floating point values that hold integer numbers are defined as those where floor(v) == v && abs(v) != ∞.

For example, the value 1.1 is represented as {0x64, 0x3f, 0xf1, 0x99, 0x99, 0x99, 0x99, 0x99, 0x9a}, but the value 1.0 is represented as {0x70, 0x01}, and -0.0 is represented as {0x70, 0x00}.

This distinction means all supported numbers have a single representation, independently from the data type used by the host language and serialization format.
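
A sketch of that decision logic in Go follows (imports math and encoding/binary assumed), relying on the encodeUint helper sketched in the Variable-length encoding section below. Note that the uint64 conversions assume the absolute value fits in 64 bits, which the specification itself does not require:

func appendFloat(buf []byte, f float64) []byte {
        if !math.IsInf(f, 0) && !math.IsNaN(f) && math.Floor(f) == f {
                // Integer-valued floats must use the 'p'/'n' integer forms.
                if f < 0 {
                        return encodeUint(append(buf, 'n'), uint64(-f))
                }
                return encodeUint(append(buf, 'p'), uint64(f)) // -0.0 becomes {0x70, 0x00}
        }
        bits := math.Float64bits(f)
        if math.IsNaN(f) {
                bits = 0x7ff8000000000000 // the single allowed NaN representation
        }
        buf = append(buf, 'd')
        var b [8]byte
        binary.BigEndian.PutUint64(b[:], bits)
        return append(buf, b[:]...)
}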

list = 'l' <num items> [<item> ...]

Lists of values are represented by the byte 'l' (0x6c), followed by the variable-length encoding of the number of items in the list, followed by the stable representation of each item in the list in the original order.

For example, the value [131, -131] is represented as {0x6c, 0x02, 0x70, 0x81, 0x03, 0x6e, 0x81, 0x03}.

map = 'm' <num pairs> [<item key> <item value>  ...]

Associative maps of values are represented by the byte 'm' (0x6d) followed by the variable-length encoding of the number of pairs in the map, followed by an ordered sequence of the stable representation of each key and value in the map. The pairs must be sorted so that the stable representation of the keys is in ascending lexicographical order. A map must not have multiple keys with the same representation.

For example, the map {"a": 4, 5: "b"} is always represented as {0x6d, 0x02, 0x70, 0x05, 0x73, 0x01, 'b', 0x73, 0x01, 'a', 0x70, 0x04}.


Variable-length encoding

Integers are variable-length encoded so that they can be represented in short space and with unbounded size. In an encoded number, the last byte holds the 7 least significant bits of the unsigned value, and zero as the eighth bit. If there are remaining non-zero bits, the previous byte holds the next 7 bits, and the eighth bit is set on to flag the continuation to the next byte. The process continues while there are non-zero bits remaining. The most significant bits end up in the first byte of the encoded value, which must necessarily not be 0x80.

For example, the number 128 is variable-length encoded as {0x81, 0x00}.
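
The following Go sketch implements the scheme just described; encodeUint(nil, 128) yields {0x81, 0x00}, and encodeUint(nil, 131) yields {0x81, 0x03}, matching the examples above:

// encodeUint appends the variable-length encoding of v to buf.
func encodeUint(buf []byte, v uint64) []byte {
        // The last byte carries the 7 least significant bits, eighth bit zero.
        enc := []byte{byte(v & 0x7f)}
        v >>= 7
        // Earlier bytes carry the next 7-bit groups with the continuation bit set.
        for v > 0 {
                enc = append(enc, 0x80|byte(v&0x7f))
                v >>= 7
        }
        // Reverse into big-endian order so the most significant group comes first.
        for i, j := 0, len(enc)-1; i < j; i, j = i+1, j-1 {
                enc[i], enc[j] = enc[j], enc[i]
        }
        return append(buf, enc...)
}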


Reference implementation

A reference implementation is available, including a test suite which should be considered when implementing the specification.


Changes

draft1 → draft2

  • Enforce the use of UTF-8 for Unicode strings and explain why normalization is being left out.
  • Enforce a single NaN representation for floats.
  • Explain that map key uniqueness refers to the representation.
  • Don’t claim the specification is easy to implement; floats require attention.
  • Mention reference implementation.

niemeyer

A few years ago, when I started pondering about the possibility of porting juju to the Go language, one of the first pieces of the puzzle that were put in place was goyaml: a Go package to parse and serialize a yaml document. This was just an experiment and, as a sane route to get started, a Go layer that does all the language-specific handling was written on top of the libyaml C scanner, parser, and serializer library.

This was a good initial plan, but for a number of reasons the end goal was always to have a pure Go implementation. Having a C layer in a Go program slows down builds significantly due to the time taken to build the C code, makes compiling on other platforms and cross-compiling harder, has certain runtime penalties, and also forces the application to drop the memory safety guarantees offered by Go.

For these reasons, over the last couple of weeks I took a few hours a day to port the C backend to Go. The total time, considering full time work days, would be equivalent to about a week worth of work.

The work started on the scanner and parser side of the library. This took most of the time, not only because it encompassed more than half of the code base, but also because the shared logic had to be ported too, and there was a need to understand which patterns were used in the old code and how they would be converted across in a reasonable way.

The whole scanner and parser plus header files, or around 5000 code lines of C, were ported over in a single shot without intermediate runs. To steer the process in a sane direction, gofmt was called often to reformat the converted code, and then the project was compiled every once in a while to make sure that the pieces were hanging together properly enough.

It’s worth highlighting how useful gofmt was in that process. The C code was converted in the most convenient way to type it, and then gofmt would quickly put it all together in a familiar form for analysis. Not rarely, it would also point out trivial syntactic issues. A double win.

After the scanner and parser were finally converted completely, the pre-existing Go unmarshaling logic was shifted to the new pure implementation, and the reading side of the test suite could run as-is. Naturally, though, it didn’t work out of the box.

To quickly pick up the errors in the new implementation, the C logic and the Go port were put side-by-side to run the same tests, and tracing was introduced in strategic points of the scanner and parser. With that, it was easy to spot where they diverged and pinpoint the human errors.

It took about two hours to get the full suite to run successfully, with a handful of bugs uncovered. Out of curiosity, the issues were:

  • An improperly dropped parenthesis affected the precedence of an expression
  • A slice was being iterated with copying semantics where a reference was necessary (see the sketch after this list)
  • A pointer arithmetic conversion missed the base where there was base+offset addressing
  • An inner scoped variable improperly shadowed the outer scope
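
As an illustration of that copying-semantics item, the bug class has this general shape; the range variable is a copy of the element, so mutations to it are silently lost:

type token struct{ length int }

func fixLengths(toks []token) {
        for _, t := range toks {
                t.length++ // buggy: mutates a copy; toks itself is unchanged
        }
        for i := range toks {
                toks[i].length++ // correct: mutates the slice element in place
        }
}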

The same process of porting and test-fixing was then repeated on the serializing side of the project, in a much shorter time frame for the reasons cited.

The resulting code isn’t yet idiomatic Go. There are several signs in it that it was ported over from C: the naming conventions, the use of custom solutions for buffering and reader/writer abstractions, the excessive copying of data due to the need to track ownership so that the simple deallocating destructors don’t double-free, etc. It has also been deoptimized, due to changes such as the removal of macros and in many cases their inlining, and the direct expansion of large unions, which causes some core objects to grow significantly.

At this point, though, it’s easy to gradually move the code base towards the common idiom in small increments and as time permits, and cleaning up those artifacts that were left behind.

This code will be made public over the next few days via a new goyaml release. Meanwhile, some quick facts about the process and outcome follow.

Lines of code

According to cloc, there was a total of 7070 lines of C code in .c and .h files. Of those, 6727 were ported, and the remaining 342 lines made up 12 functions that were left unconverted as unnecessary right now. Those 6727 lines of C became 5039 lines of Go code in a mostly one-to-one dumb translation.

That difference comes mainly from garbage collection, lack of forward declarations, standard helpers such as append, range-based for loops, first class slice type with length and capacity, internal OOM handling, and so on.

Future work can easily increase the difference further by replacing some of the ported logic with more sensible options available in Go, such as standard abstractions for readers and writers, buffered writing support as available in the standard library, etc.

Code clarity and safety

In the specific context of the work done, which is that of a scanner, parser and serializer, the slice abstraction is responsible for noticeable clarity gains in the code, when compared to the equivalent logic based on pointer arithmetic. It also gives a much more comforting guarantee of correctness of the written code due to bounds checking.

Performance

While curious, this shouldn’t be taken as a performance comparison between the two languages, as it is comparing a fine-tuned C implementation with something that is worse than a direct one-to-one port: not only has no time at all been spent on preventing waste, but the original logic was deoptimized due to changes such as the removal of inlining macros and the expansion of large unions. There are many obvious changes to be done for improving performance.

With that out of the way, in a simple decoding benchmark the C-backed decoder runs in about 37% of the time taken by the out-of-the-box deoptimized Go port.

Output size

The previous goyaml.a Go package file was 1463kb. The new one is 1016kb. This difference includes glue code generated for the integration.

Considering only the .c and .h files involved in the port, the C object code generated with the standard flags used by the go build tool (-g -O2) sums up to 789kb. The equivalent Go code with the standard settings compiles to 664kb. The 12 functions not ported are also part of that difference, so the difference is pretty much negligible.

Build time

Building the 8 .c files alone takes 3.6 seconds with the standard flags used by the go build tool (-g -O2). After the port, building the entire Go project with the standard settings takes 0.3 seconds.

Mechanical changes

Many of the mechanical changes were done using regular expressions. Excluding the trivial ones, about a dozen regular expressions were used to swap variable and type names, drop parenthesis, place brackets in the right locations, convert function declarations, and so on.

niemeyer

Last week I was part of a rant with a couple of coworkers around the fact Go handles errors for expected scenarios by returning an error value instead of using exceptions or a similar mechanism. This is a rather controversial topic because people have grown used to having errors out of their way via exceptions, and Go brings back an improved version of a well known pattern previously adopted by a number of languages — including C — where errors are communicated via return values. This means that errors are in the programmer’s face and have to be dealt with all the time. In addition, the controversy extends towards the fact that, in languages with exceptions, every unadorned error comes with a full traceback of what happened and where, which in some cases is convenient.

All this convenience has a cost, though, which is rather simple to summarize:

Exceptions teach developers to not care about errors.

A sad corollary is that this is relevant even if you are a brilliant developer, as you’ll be affected by the world around you being lenient towards error handling. The problem will show up in the libraries that you import, in the applications that are sitting in your desktop, and in the servers that back your data as well.

Raymond Chen described the issue back in 2004 as:

Writing correct code in the exception-throwing model is in a sense harder than in an error-code model, since anything can fail, and you have to be ready for it. In an error-code model, it’s obvious when you have to check for errors: When you get an error code. In an exception model, you just have to know that errors can occur anywhere.

In other words, in an error-code model, it is obvious when somebody failed to handle an error: They didn’t check the error code. But in an exception-throwing model, it is not obvious from looking at the code whether somebody handled the error, since the error is not explicit.
(…)
When you’re writing code, do you think about what the consequences of an exception would be if it were raised by each line of code? You have to do this if you intend to write correct code.

That’s exactly right. Every line that may raise an exception holds a hidden “else” branch for the error scenario that is very easy to forget about. Even if it sounds like a pointless repetitive task to be entering that error handling code, the exercise of writing it down forces developers to keep the alternative scenario in mind, and pretty often it doesn’t end up empty.
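
To make that concrete, here is a small hypothetical helper in the error-result style; the alternative scenario is spelled out right where it can be handled:

func loadConfig(path string) ([]byte, error) {
        f, err := os.Open(path)
        if err != nil {
                // The hidden "else" branch is explicit here, and it isn't empty.
                return nil, fmt.Errorf("cannot open configuration: %v", err)
        }
        defer f.Close()
        return ioutil.ReadAll(f)
}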

This isn’t the first time I write about that, and given the controversy that surrounds these claims, I generally try to find one or two examples that bring the issue home. So here is the best example I could find today, within the pty module of Python’s 3.3 standard library:

def spawn(argv, master_read=_read, stdin_read=_read):
    """Create a spawned process."""
    if type(argv) == type(''):
        argv = (argv,)
    pid, master_fd = fork()
    if pid == CHILD:
        os.execlp(argv[0], *argv)
    (...)

Every time someone calls this logic with an improper executable in argv there will be a new Python process lying around, uncollected, and unknown to the application, because execlp will fail, and the process just forked will be disregarded. It doesn’t matter if a client of that module catches that exception or not. It’s too late. The local duty wasn’t done. Of course, the bug is trivial to fix by adding a try/except within the spawn function itself. The problem, though, is that this logic looked fine for everybody that ever looked at that function since 1994 when Guido van Rossum first committed it!

Here is another interesting one:

$ make clean
Sorry, command-not-found has crashed! Please file a bug report at:

https://bugs.launchpad.net/command-not-found/+filebug

Please include the following information with the report:

command-not-found version: 0.3
Python version: 3.2.3 final 0
Distributor ID: Ubuntu
Description:    Ubuntu 13.04
Release:        13.04
Codename:       raring
Exception information:

unsupported locale setting
Traceback (most recent call last):
  File "/.../CommandNotFound/util.py", line 24, in crash_guard
    callback()
  File "/usr/lib/command-not-found", line 69, in main
    enable_i18n()
  File "/usr/lib/command-not-found", line 40, in enable_i18n
    locale.setlocale(locale.LC_ALL, '')
  File "/usr/lib/python3.2/locale.py", line 541, in setlocale
    return _setlocale(category, locale)
locale.Error: unsupported locale setting

That’s a pretty harsh crash for the lack of locale data in a system-level application that is, ironically, supposed to tell users what packages to install when commands are missing. Note that at the top of the stack there’s a reference to crash_guard. This function has the intent of catching all exceptions right at the edge of the call stack, and displaying a detailed system specification and traceback to aid in fixing the problem.

Such “parachute catching” is a fairly common pattern in exception-oriented programming and tends to give developers the false sense of having good error handling within the application. Rather than actually guarding the application, though, it’s just a useful way to crash. The proper thing to have done in the case above would be to print a warning, if at all, and then let the program run as usual. This would have been achieved by simply wrapping that one line as in:

try:
    locale.setlocale(locale.LC_ALL, '')
except Exception as e:
    print("Cannot change locale:", e)

Clearly, it was easy to handle that one. The problem, again, is that it was very natural to not do it in the first place. In fact, it’s more than natural: it actually feels good to not be looking at the error path. It’s less code, more linear, and what’s left is the most desired outcome.

The consequence, unfortunately, is that we’re immersing ourselves in a world of brittle software and pretty whales. Although more verbose, the error result style builds the correct mindset: does that function or method have a possible error outcome? How is it being handled? Is that system-interacting function not returning an error? What is being done with the problem that, of course, can happen?

A surprising number of crashes and plain misbehavior is a result of such unconscious negligence.

niemeyer

This weekend the proper environment came together for sorting out a pet peeve that shows up every once in a while when coding: writing logic that interacts with other applications in the system via their stdin and stdout streams is often more involved than it should be, which seems pretty ironic when sitting in front of a Unix-like system.

Rather than going over the trouble of setting up pipes and hooking them up in a custom way, often applications end up just delegating the job to /bin/sh, which is not ideal for a number of reasons: argument formatting isn’t straightforward, injecting custom application-defined logic is hard, which means even simple tasks that might be easily achieved by the language end up shelling out to further external applications, and so on.

In an attempt to address that, I’ve spent some time working on an experimental Go package that is being released today: pipe.
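
As a quick taste of what it enables, the following sketch assumes the pipe.Line, pipe.ReadFile, pipe.Exec, and pipe.CombinedOutput entry points from the released package:

p := pipe.Line(
        pipe.ReadFile("todo.txt"),
        pipe.Exec("sort"),
        pipe.Exec("uniq", "-c"),
)
output, err := pipe.CombinedOutput(p)
if err != nil {
        log.Fatal(err)
}
fmt.Print(string(output))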

I hope you like it, and please drop me a note if you find any issues.

Jussi Pakkanen

A relatively large portion of software development time is not spent on writing, running, debugging or even designing code, but waiting for it to finish compiling. This is usually seen as a necessary evil and accepted as an unfortunate fact of life. That is a shame, because spending some time optimizing the build system can yield quite dramatic productivity gains.

Suppose a build system takes some thirty seconds to run for even trivial changes. This means that even in theory you can do at most two changes a minute. In practice the rate is a lot lower. If the build step takes only a few seconds, trying out new code becomes a lot faster. It is easier to stay in the zone when you don’t have to pause every so often to wait for your tools to finish doing their thing.

Making fundamental changes in the code often triggers a complete rebuild. If this takes an hour or more (there are code bases that take 10+ hours to build), people try to avoid fundamental changes as much as possible. This causes loss of flexibility. It becomes very tempting to just do a band-aid tweak rather than thoroughly fix the issue at hand. If the entire rebuild could be done in five to ten minutes, this issue would become moot.

In order to make things fast, we first have to understand what is happening when C/C++ software is compiled. The steps are roughly as follows:

  1. Configuration
  2. Build tool startup
  3. Dependency checking
  4. Compilation
  5. Linking

We will now look at each step in more detail focusing on how they can be made faster.

Configuration

This is the first step when starting to build. It usually means running a configure script or CMake, Gyp, SCons or some other such tool. This can take anything from one second to several minutes for very large Autotools-based configure scripts.

This step happens relatively rarely. It only needs to be run when switching between configurations or otherwise changing the build setup. Short of changing build systems, there is not much to be done to make this step faster.

Build tool startup

This is what happens when you run make or click on the build icon on an IDE (which is usually an alias for make). The build tool binary starts and reads its configuration files as well as the build configuration, which are usually the same thing.

Depending on build complexity and size, this can take anywhere from a fraction of a second to several seconds. By itself this would not be so bad. Unfortunately most make-based build systems cause make to be invoked tens to hundreds of times for every single build. Usually this is caused by recursive use of make (which is bad).

It should be noted that the reason Make is so slow is not an implementation bug. The syntax of Makefiles has some quirks that make a really fast implementation all but impossible. This problem is even more noticeable when combined with the next step.

Dependency checking

Once the build tool has read its configuration, it has to determine what files have changed and which ones need to be recompiled. The configuration files contain a directed acyclic graph describing the build dependencies. This graph is usually built during the configure step. Suppose we have a file called SomeClass.cc which contains this line of code:

#include "OtherClass.hh"

This means that whenever OtherClass.hh changes, the build system needs to rebuild SomeClass.cc. Usually this is done by comparing the timestamp of SomeClass.o against OtherClass.hh. If the object file is older than the source file or any header it includes, the source file is rebuilt.

Build tool startup and the dependency scanner run on every single build. Their combined runtime determines the lower bound on the edit-compile-debug cycle. For small projects this time is usually a few seconds or so, which is tolerable.

The problem is that Make scales terribly to large projects. As an example, running Make on the codebase of the Clang compiler with no changes takes over half a minute, even if everything is in cache. The sad truth is that in practice large projects cannot be built fast with Make. They will be slow and there's nothing that can be done about it.

There are alternatives to Make. The fastest of them is Ninja, which was built by Google engineers for Chromium. When run on the same Clang code as above it finishes in one second. The difference is even bigger when building Chromium. This is a massive boost in productivity; it's one of those things that make the difference between tolerable and pleasant.

If you are using CMake or Gyp to build, just switch to their Ninja backends. You don’t have to change anything in the build files themselves, just enjoy the speed boost. Ninja is not packaged on most distributions, though, so you might have to install it yourself.
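
As a sketch, assuming a CMake project and a Ninja binary on your PATH, switching is just a matter of regenerating the build directory with the Ninja generator:

cmake -G Ninja /path/to/source
ninja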

If you are using Autotools, you are forever married to Make. This is because the syntax of Autotools is defined in terms of Make. There is no way to separate the two without a complete, backwards-compatibility-breaking rewrite. What this means in practice is that Autotools build systems are slow by design, and can never be made fast.

Compilation

At this point we finally invoke the compiler. Cutting some corners, here are the approximate steps taken.

  1. Merging includes
  2. Parsing the code
  3. Code generation/optimization

Let’s look at these one at a time. The explanations given below are not 100% accurate descriptions of what happens inside the compiler. They have been simplified to emphasize the facets important to this discussion. For a more thorough description, have a look at any compiler textbook.

The first step joins all source code in use into one clump. What happens is that whenever the compiler finds an include statement like #include "somefile.h", it finds that particular source file and replaces the #include with the full contents of that file. If that file contained other #includes, they are inserted recursively. The end result is one big self-contained source file.

The next step is parsing. This means analyzing the source file, splitting it into tokens and building an abstract syntax tree. This step translates the human understandable source code into a computer understandable unambiguous format. It is what allows the compiler to understand what the user wants the code to do.

Code generation takes the syntax tree and transforms it into machine code sequences called object code. This code is almost ready to run on a CPU.
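
For the curious, GCC and Clang let you run these phases individually. A quick sketch (the file name is illustrative):

g++ -E file.cc -o file.ii    # preprocess only: all #includes merged in
g++ -fsyntax-only file.cc    # parse and type-check, but generate no code
g++ -S file.cc -o file.s     # run the full pipeline down to assembly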

Each one of these steps can be slow. Let’s look at ways to make them faster.

Faster #includes

Including by itself is not slow; slowness comes from the cascade effect. Including even one other file causes everything included in it to be included as well. In the worst case every single source file depends on every header file. This means that touching any header file causes the recompilation of every source file, whether they use that particular header's contents or not.

Cutting down on interdependencies is straightforward. Only #include those headers that you actually use. In addition, header files should include as few other header files as possible. The main tool for this is called forward declaration. Basically what it means is that instead of having a header file that looks like this:

#include "SomeClass.hh"

class MyClass {
  SomeClass s;
};

You have this:

class SomeClass;

class MyClass {
  SomeClass *s;
};

Because the definition of SomeClass is not known, you have to use pointers or references to it in the header.

Remember that #including MyClass.hh would have caused SomeClass.hh and all its #includes to be added to the original source file. Now they aren't, so the compiler's work has been reduced. We also don't have to recompile the users of MyClass if SomeClass changes. Cutting the dependency chain like this everywhere in the code base can have a major effect on build times, especially when combined with the next step. For a more detailed analysis including measurements and code, see here.

Faster parsing

The most popular C++ libraries, STL and Boost, are implemented as header-only libraries. That is, they don't have a dynamically linkable library; rather, the code is generated anew into every binary file that uses them. Compared to most C++ code, STL and Boost are complex. Really, really complex. In fact they are most likely the hardest pieces of code a C++ compiler has to compile. Boost is often used as a stress test on C++ compilers, because it is so difficult to compile.

It is not an exaggeration to say that for most C++ code using STL, parsing the STL headers is up to 10 times slower than parsing all the rest. This leads to massively slow build times because of class headers like this:

#include <vector>

class SomeClass {
private:
  std::vector<int> numbers;

public:
  ...
};

As we learned in the previous chapter, this means that every single file that includes this header must parse STL's vector definition, even though it is an internal implementation detail of SomeClass, and even if the including file never uses vector itself. Add another class include that uses a map, one for unordered_map, a few Boost includes, and what do you end up with? A code base where compiling any file requires parsing all of STL and possibly Boost. This is a factor of 3-10 slowdown on compile times.

Getting around this is relatively simple, though it takes a bit of work. It is known as the pImpl idiom. One way of achieving it is this:

---header---

struct someClassPrivate;

class SomeClass {
public:
  SomeClass();
  ~SomeClass();

private:
  someClassPrivate *p;
};

---implementation---

#include "SomeClass.hh"
#include <vector>

struct someClassPrivate {
  std::vector<int> numbers;
};

SomeClass::SomeClass() {
  p = new someClassPrivate;
}

SomeClass::~SomeClass() {
  delete p;
}

Now the dependency chain is cut and users of SomeClass don’t have to parse vector. As an added bonus the vector can be changed to a map or anything else without needing to recompile files that use SomeClass.

Faster code generation

Code generation is mostly an implementation detail of the compiler, and there’s not much that can be done about it. There are a few ways to make it faster, though.

Optimizing code is slow. In everyday development all optimizations should be disabled. Most build systems do this by default, but Autotools builds optimized binaries by default. In addition to being slow, this makes debugging a massive pain, because most of the time trying to print the value of some variable just prints out "value optimized out".

Making Autotools build non-optimized binaries is relatively straightforward. You just have to run configure like this: ./configure CFLAGS='-O0 -g' CXXFLAGS='-O0 -g'. Unfortunately many people mangle their Autotools cflags in config files, so the above command might not work. In that case the only fix is to inspect all Autotools config files and fix them yourself.

The other trick is about reducing the amount of generated code. If two different source files use vector<int>, the compiler has to generate the complete vector code in both of them. During linking (discussed in the next chapter) one of them is just discarded. There is a way to tell the compiler not to generate the code in the other file, using a technique introduced in C++0x (now C++11) called extern templates. It is used like this:

file A:

#include <vector>
template class std::vector<int>;

void func() {
  std::vector<int> numbers;
}

file B:

#include <vector>
extern template class std::vector<int>;

void func2() {
  std::vector<int> idList;
}

This instructs the compiler not to generate vector code when compiling file B. The linker makes it use the code generated in file A.

Build speedup tools

CCache is an application that stores compiled object code into a global cache. If the same code is compiled again with the same compiler flags, it grabs the object file from the cache rather than running the compiler. If you have to recompile the same code multiple times, CCache may offer noticeable speedups.
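
Enabling it is usually just a matter of prefixing the compiler invocation. A sketch for an Autotools-style project, assuming ccache is installed:

./configure CC="ccache gcc" CXX="ccache g++"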

A tool often mentioned alongside CCache is DistCC, which increases parallelism by spreading the build to many different machines. If you have a monster machine it may be worth it. On regular laptop/desktop machines the speed gains are minor (it might even be slower).

Precompiled headers

Precompiled headers are a feature of some C++ compilers that basically serializes the in-memory representation of parsed code into a binary file. This can then be read back directly into memory instead of reparsing the header file when it is used again. This is a feature that can provide massive speedups.

Out of all the speedup tricks listed in this post, this has by far the biggest payoff. It turns the massively slow STL includes into, effectively, no-ops.

So why is it not used anywhere?

Mostly it comes down to poor toolchain support. Precompiled headers are fickle beasts. For example with GCC they only work between two different compilation units if the compiler switches are exactly the same. Most people don’t know that precompiled headers exist, and those that do don’t want to deal with getting all the details right.
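
For the record, here is a sketch of how it looks with GCC; the header name is made up, and note that the flags really do have to match in both invocations:

g++ -x c++-header -O0 -g allheaders.hh   # produces allheaders.hh.gch
g++ -O0 -g -c somefile.cc                # any #include "allheaders.hh" now
                                         # loads the .gch instead of reparsing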

CMake does not have direct support for them. There are a few modules floating around the Internet, but I have not tested them myself. Autotools is extremely hostile to precompiled headers, because its syntax allows for wacky and dynamic alterations of compiler flags.

Faster Linking

When the compiler compiles a file and comes to a function call that is somewhere outside the current file, such as in the standard library or some other source file, it effectively writes a placeholder saying “at this point jump to function X”. The linker takes all these different compiled files and connects the jump points to their actual locations. When linking is done, the binary is ready to use.
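
You can observe these placeholders directly. In this sketch (the names are made up), helper() is only declared, so the compiler has no choice but to emit a jump to an unknown location:

// util.cc -- helper() is defined in some other compilation unit
int helper();
int compute() { return helper() + 1; }

Compiling with g++ -c util.cc and inspecting the result with nm util.o shows helper as an undefined ("U") symbol; resolving such entries is precisely the linker's job.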

Linking is surprisingly slow. It can easily take minutes on relatively large applications. As an extreme case, linking the Chromium browser on ARM takes 3 gigabytes of RAM and 18 hours.

Yes, hours.

The main reason for this is that the standard GNU linker is quite slow. Fortunately there is a new, faster linker called Gold. It is not the default linker yet, but hopefully it will be soon. In the meantime you can install and use it manually.
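
With a sufficiently recent GCC or Clang, one way to opt in per build (a sketch, assuming your toolchain supports the flag) is:

g++ -fuse-ld=gold -o myapp main.o util.o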

A different way of making linking faster is to simply cut down on the number of exported symbols, using a technique called symbol visibility. The gist of it is that you hide all non-public symbols from the list of exported symbols. This means less work and memory use for the linker, which makes it faster.
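
As a sketch of the GCC/Clang flavour of this technique (the class names are hypothetical): compile everything with -fvisibility=hidden and mark only the public API for export.

// Exported: part of the library's public interface.
class __attribute__((visibility("default"))) PublicThing {
public:
  void frob();
};

// Hidden by -fvisibility=hidden: never enters the exported symbol list.
class InternalHelper {
public:
  void assist();
};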

Conclusions

Contrary to popular belief, compiling C++ is not actually all that slow. The STL is slow, and most build tools used to compile C++ are slow. However, there are faster tools and ways to mitigate the slow parts of the language.

Using them takes a bit of elbow grease, but the benefits are undeniable. Faster build times lead to happier developers, more agility and, eventually, better code.

Read more
niemeyer

Rob Pike just wrote an article/talk that is the best background on the origins of Go yet.

It surprises me how much his considerations match my world view pre-Go, and in a sense give me a fulfilling explanation of why I got hooked on the language. I still recall sitting in a hotel years ago with Jamu Kakar while we went through the upcoming C++0x standard (now C++11) and got perplexed about how someone could think that having details such as rvalue references and move constructors in the language specification was something reasonable.

Rob also expressed again the initial surprise that developers using languages such as Python and Ruby were more often the ones willing to migrate towards Go, rather than ones using C++, with some reasonable explanations about why that is so. While I agree with his considerations, I see Python going through the same kind of issue that caused C++ to be what it is today.

Consider this excerpt from PEP 0380 as evidence:

If yielding of values is the only concern, this can be performed without much difficulty using a loop such as

for v in g:
    yield v

However, if the subgenerator is to interact properly with the caller in the case of calls to send(), throw() and close(), things become considerably more difficult. As will be seen later, the necessary code is very complicated, and it is tricky to handle all the corner cases correctly.

A new syntax will be proposed to address this issue. In the simplest use cases, it will be equivalent to the above for-loop, but it will also handle the full range of generator behaviour, and allow generator code to be refactored in a simple and straightforward way.

This description has the same DNA that creates the C++ problem Rob talks about. Don’t get me wrong, I’m sure yield from will make a lot of people very happy, and that’s exactly the tricky part. It’s easy and satisfying to please a selection of users, but often that leads to isolated solutions that create new cognitive load and new corner cases that in turn lead to new requirements.

The history of generators in Python is especially telling:

  • PEP 0234 [30-Jan-2001] – Iterators – Accepted
  • PEP 0255 [18-May-2001] – Simple Generators – Accepted
  • PEP 0288 [21-Mar-2002] – Generators Attributes and Exceptions – Withdrawn
  • PEP 0289 [30-Jan-2002] – Generator Expressions – Accepted
  • PEP 0325 [25-Aug-2003] – Resource-Release Support for Generators – Rejected
  • PEP 0342 [10-May-2005] – Coroutines via Enhanced Generators – Accepted
  • PEP 0380 [13-Feb-2009] – Syntax for Delegating to a Subgenerator – Accepted

You see the rabbit hole getting deeper? I’ll clarify it further by rephrasing the previous quote from PEP 0380:

If [feature from PEP 0255] is the only concern, this can be performed without much difficulty using a loop [...] However, if the subgenerator is to interact properly with [changes from PEP 0342] things become considerably more difficult. [So we need feature from PEP 0380.]

Yet, while the language grows by handling self-inflicted micro-problems, the real issue remains unsolved. All of these features are simplistic forms of concurrency and communication that don't satisfy the developers, causing community fragmentation.

This happened to C++, to Python, and to many other languages. Go seems slightly special in that regard in the sense that its core development team has an outstanding respect for simplicity, yet dares to solve the difficult problems at their root, while keeping these solutions orthogonal so that they support each other. Less is more, and is not always straightforward.

Read more
Jussi Pakkanen

We all know that compiling C++ is slow.

Fewer people know why, or how to make it faster. Some do, though: the developers at Remedy, for example, made the engine of Alan Wake compile from scratch in five minutes. The payoff for this is increased productivity, because the edit-compile-run cycle gets dramatically faster.

There are several ways to speed up your compiles. This post looks at reworking your #includes.

Quite a bit of C++ compilation time is spent parsing headers for STL, Qt and whatever else you may be using. But how long does it actually take?

To find out, I wrote a script to generate C++ source. You can download it here. What it does is generate source files that have some includes and one dummy function. The point is to simulate two different use cases. In the first each source file includes a random subset of the includes. One file might use std::map and QtCore, another one might use Boost’s strings and so on. In the second case all possible includes are put in a common header which all source files include. This simulates “maximum developer convenience” where all functions are available in all files without any extra effort.

To generate the test data, we run the following commands:

mkdir good bad
./generate_code.py --with-boost --with-qt4 good
./generate_code.py --with-boost --with-qt4 --all-common bad

Compilation is straightforward:

cd good; cmake .; time make; cd ..
cd bad; cmake .; time make; cd ..

By default the script produces 100 source files. When the includes are kept in individual files, compiling takes roughly a minute. When they are in a common header, it takes three minutes.

Remember: the included STL/Boost/Qt4 functionality is not used in the code. This is just the time spent including and parsing their headers. What this example shows is that you can remove 2 minutes of your build time, just by including C++ headers smartly.

The delay scales linearly. For 300 files the build times are 2 minutes 40 seconds and 7 minutes 58 seconds. That’s over five minutes lost on, effectively, no-ops. The good news is that getting rid of this bloat is relatively easy, though it might take some sweat.

  1. Never include any (internal) header in another header if you can use a forward declaration instead; see the sketch after this list. Include the header in the implementation file.
  2. Never include system headers (STL, etc) in your headers unless absolutely necessary, such as due to inheritance. If your class uses e.g. std::map internally, hide it with pImpl. If your class API requires these headers, change it so that it doesn’t or use something more lightweight (e.g. std::iterator instead of std::vector).
  3. Never, never, ever include system stuff in your public headers. That slows down not just your own compilation time, but also every single user of your library. The only exception is when your library is a plugin or extension to an existing library and even then your includes need to be minimal.
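
As a minimal sketch of rule 1 (the file and class names are made up for illustration):

// Widget.hh -- no #include needed; a forward declaration suffices
class Gadget;

class Widget {
public:
  void attach(Gadget &g);  // pointers and references don't need the full type
private:
  Gadget *gadget;
};

// Widget.cc -- only the implementation file pays for the full definition
#include "Widget.hh"
#include "Gadget.hh"

void Widget::attach(Gadget &g) { gadget = &g; }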

Read more
ThomasVo5

This post explains how to conduct large-scale multi-objective optimization (MOO) experiments with the SHARK machine learning library on clusters running Oracle Grid Engine.

An experiment consists of three phases:

  1. front approximation
  2. performance indicator calculation
  3. result accumulation and statistics calculation

Within this post, I’m going to focus on the first step.

Front Approximation

In this phase, the Pareto front approximations generated by applying multiple multi-objective evolutionary algorithms (MOEAs) to a set of objective functions are recorded.

Here, I assume that we want to evaluate the (µ+1)-MO-CMA-ES relying on the hypervolume indicator on the DTLZ suite of benchmark functions. A ready-to-use command-line application implementing the MO-CMA-ES is bundled with the default Shark installation. The executable is configurable via command-line arguments, which can be listed by passing --help:

  --objectiveFunction arg
  --seed arg (=1)
  --storageInterval arg (=100)
  --searchSpaceDimension arg (=10)
  --maxNoEvaluations arg (=50000)
  --timeLimit arg (=1000)
  --fitnessLimit arg (=1e-10)
  --resultDir arg (=.)
  --algorithmConfigFile arg
  --algorithmUsage 
  --defaultAlgorithmUsage 
  --objectiveSpaceDimension arg (=2)
  --reportFitnessFunctions 

That is, to execute the MO-CMA-ES for DTLZ2 with 3 objectives and terminating after 50000 objective function evaluations, the following call is required:

  SteadyStateMOCMAMain --objectiveFunction=DTLZ2 --objectiveSpaceDimension=3 --maxNoEvaluations=50000 

Note that we do not specify the RNG seed explicitly but rely on the default value 1.

For the scenario considered here, we want to run several independent trials of one specific MOEA and one specific objective function in parallel. To this end, we rely on the array job feature of the grid engine and submit an array of 25 independent trials to the grid engine with the following command:

  qsub -N 'DTLZ2_3' -t 1-25 RunAlgo.sh DTLZ2 /globally/known/path 3

Here, the script RunAlgo.sh is defined as follows:

#!/bin/bash
#$ -S /bin/bash
#$ -o /dev/null

SteadyStateMOCMAMain --seed $SGE_TASK_ID --resultDir=$2 --objectiveFunction=$1 --objectiveSpaceDimension=$3

In summary, the script takes care of actually running the algorithm and setting the seed from the environment variable $SGE_TASK_ID. The variable is set by the grid engine to the task's unique index within the array job, and thus we can ensure independent trials. There is one more thing to note: the result directory needs to be known across the whole cluster. Normally, your dev ops provide you with a scratch environment that is accessible from every computing node.

That’s it. Wait a few minutes until the experiment completes and stay tuned for the second post that explains how to evaluate the quality of the Pareto-front approximations.


Filed under: C++, Shark, Technology, Uncategorized

Read more
ThomasVo5

Taken from the SHARK website:

SHARK is a modular C++ library for the design and optimization of adaptive systems. It provides methods for linear and nonlinear optimization, in particular evolutionary and gradient-based algorithms, kernel-based learning algorithms and neural networks, and various other machine learning techniques. SHARK serves as a toolbox to support real world applications as well as research in different domains of computational intelligence and machine learning. The sources are compatible with the following platforms: Windows, Solaris, MacOS X, and Linux.

The library has been in active development for over 10 years now and is in use by scientists all over the world. Last year, we, the core SHARK developers, decided that a rewrite of the library was necessary to support future use cases and provide a solid platform for users and contributors alike. Our goals were simple:

  • Unify and simplify the library structure.
  • Rely on established components wherever feasible.
  • Documentation, documentation and again, documentation
  • Focus on quality.

In this post, I would like to dive a little deeper into the topic of quality and the processes that we established to ensure a constant and high level of quality. We decided to address quality both from a technical (read: testable) and from an API point of view.

In terms of API quality, we want the programming interface to be consistent, convenient to use and easy to extend. Just as with the user experience, we want potential developers to find a welcoming and friendly environment. As we are a geographically distributed team of developers and scientists, we decided to go for a pre-commit code review approach implemented with the help of ReviewBoard. Despite initial concerns on behalf of the developers, the review process proved to be one of the most useful tools while rewriting the library, with developers quickly growing fond of the final “Ship It”.

In terms of “technical” quality, we decided to go for continuous integration of all (reviewed) commits to the rewrite branch for all of our supported platforms. With the help of Jenkins and a bunch of virtual machines, we finally realized our idea of continuous integration testing to prevent regressions. Our unit test suite is implemented with the unit testing framework provided by Boost. Test execution is handled by CTest. Static and dynamic analysis of the library are carried out with the help of cppcheck and valgrind, respectively. Code coverage metrics are calculated with the help of gcov. Finally, we integrate all of the testing results in the job-specific views of our Jenkins instance, thereby providing developers a single source of information on the state of the library.


Filed under: C++, Quality, Shark

Read more
Gustavo Niemeyer

About a year after development started on Ensemble, today the stars finally aligned just the right way (review queue mostly empty, no other pressing needs, etc.) for me to start writing the specification for the repository system we've been jointly planning for a long time. This is the system that the Ensemble client will communicate with for discovering which formulas are available, for publishing new formulas, for obtaining formula files for deployment, and so on.

We of course would have liked for this part of the project to have been specified and written a while ago, but unfortunately that wasn't possible for several reasons. That said, there are also good sides to having an important piece flying around in minds and conversations for such a long time: sitting down to specify the system and describe the inner-working details has been a breeze. Even details such as the namespacing of formulas, which hadn't been entirely clear in my mind, just streamed into the document as the ideas we've been evolving finally got together in a written form.

One curious detail: this is the first long term project at Canonical that will be developed in Go, rather than Python or C/C++, which are the most used languages for projects within Canonical. Not only that, but we’ll also be using MongoDB for a change, rather than the traditional PostgreSQL, and will also use (you guessed) the mgo driver which I’ve been pushing entirely as a personal project for about 8 months now.

Naturally, with so many moving parts that are new to the company culture, this is still being seen as a closely watched experiment. Still, this makes me highly excited, because when I started developing mgo, the MongoDB driver for Go, my hopes that the Go, MongoDB, and mgo trio would eventually be used at Canonical were very low, precisely because they were all alien to the culture. We only got here after quite a lot of internal debate, experiments, and trust too.

All of that means these are happy times. An important feature in Ensemble being specified and written, very exciting tools, home-grown software being useful…

Awesomeness.

Read more
niemeyer

Circular buffers are based on an algorithm well known by any developer who’s got past the “Hello world!” days. They offer a number of key characteristics with wide applicability such as constant and efficient memory use, efficient FIFO semantics, etc.

One feature which is not always desired, though, is the fact that circular buffers traditionally will either overwrite the oldest element or raise an overflow error, since they are generally implemented as a buffer of constant size. This is an unwanted property when one is attempting to consume items from the buffer and it is not an option to blindly drop items, for instance.

This post presents an efficient (and potentially novel) algorithm for implementing circular buffers which preserves most of the key aspects of the traditional version, while also supporting dynamic expansion when the buffer would otherwise have its oldest entry overwritten. It’s not clear if the described approach is novel or not (most of my novel ideas seem to have been written down 40 years ago), so I’ll publish it below and let you decide.

Traditional circular buffers

Before introducing the variant which can actually expand during use, let's go through a quick review of traditional circular buffers, so that we can then reuse the nomenclature when extending the concept. All the snippets provided in this post are written in Python, as a better alternative to pseudo-code, but the concepts are naturally portable to any other language.

So, the most basic circular buffer needs the buffer itself, its total capacity, and a position where the next write should occur. The following snippet demonstrates the concept in practice:

buf = [None, None, None, None, None]
bufcap = len(buf)
pushi = 0   

for elem in range(7):
    buf[pushi] = elem
    pushi = (pushi + 1) % bufcap
    
print buf # => [5, 6, 2, 3, 4]

In the example above, the first two elements of the series (0 and 1) were overwritten once the pointer wrapped around. That’s the specific feature of circular buffers which the proposal in this post will offer an alternative for.

The snippet below provides a full implementation of the traditional approach, this time including both the pushing and popping logic, and raising an error when an overflow or underflow would occur. Please note that these snippets are not necessarily idiomatic Python. The intention is to highlight the algorithm itself.

class CircBuf(object):

    def __init__(self):
        self.buf = [None, None, None, None, None]
        self.buflen = self.pushi = self.popi = 0
        self.bufcap = len(self.buf)

    def push(self, x):
        assert self.buflen == 0 or self.pushi != self.popi, \
               "Buffer overflow!"
        self.buf[self.pushi] = x
        self.pushi = (self.pushi + 1) % self.bufcap
        self.buflen += 1

    def pop(self):
        assert self.buflen != 0, "Buffer underflow!"
        x = self.buf[self.popi]
        self.buf[self.popi] = None
        self.buflen -= 1
        self.popi = (self.popi + 1) % self.bufcap
        return x

With the basics covered, let’s look at how to extend this algorithm to support dynamic expansion in case of overflows.

Dynamically expanding a circular buffer

The approach consists in imagining that the same buffer can contain both a circular buffer area (referred to as the ring area from here on), and an overflow area, and that it is possible to transform a mixed buffer back into a pure circular buffer again. To clarify what this means, some examples are presented below. The full algorithm will be presented afterwards.

First, imagine that we have an empty buffer with a capacity of 5 elements as per the snippet above, and then the following operations take place:

for i in range(5):
    circbuf.push(i)

circbuf.pop() # => 0
circbuf.pop() # => 1

circbuf.push(5)
circbuf.push(6)

print circbuf.buf # => [5, 6, 2, 3, 4]

At this point we have a full buffer, and with the original implementation an additional push would raise an assertion error. To implement expansion, the algorithm will be changed so that those items will be appended at the end of the buffer. Following the example, pushing two additional elements would behave the following way:

circbuf.push(7)
circbuf.push(8)

print circbuf.buf # => [5, 6, 2, 3, 4, 7, 8]

In that example, elements 7 and 8 are part of the overflow area, and the ring area remains with the same capacity and length of the original buffer. Let’s perform a few additional operations to see how it would behave when items are popped and pushed while the buffer is split:

circbuf.pop() # => 2
circbuf.pop() # => 3
circbuf.push(9)

print circbuf.buf # => [5, 6, None, None, 4, 7, 8, 9]

In this case, even though there are two free slots available in the ring area, the last item pushed was still appended at the overflow area. That’s necessary to preserve the FIFO semantics of the circular buffer, and means that the buffer may expand more than strictly necessary given the space available. In most cases this should be a reasonable trade off, and should stop happening once the circular buffer size stabilizes to reflect the production vs. consumption pressure (if you have a producer which constantly operates faster than a consumer, though, please look at the literature for plenty of advice on the problem).

The remaining interesting step in that sequence of events is the moment when the ring area capacity is expanded to cover the full allocated buffer again, with the previous overflow area being integrated into the ring area. This will happen when the content of the previous partial ring area is fully consumed, as shown below:

circbuf.pop() # => 4
circbuf.pop() # => 5
circbuf.pop() # => 6
circbuf.push(10)

print circbuf.buf # => [10, None, None, None, None, 7, 8, 9]

At this point, the whole buffer contains just a ring area and the overflow area is again empty, which means it becomes a traditional circular buffer.

Sample algorithm

With some simple modifications to the traditional implementation presented previously, the above semantics may be easily supported. Note how the additional properties do not introduce significant overhead. Of course, this version will incur additional memory allocation to support the buffer expansion, but that's inherent to the problem being solved.

class ExpandingCircBuf(object):

    def __init__(self):
        self.buf = [None, None, None, None, None]
        self.buflen = self.ringlen = self.pushi = self.popi = 0
        self.bufcap = self.ringcap = len(self.buf)

    def push(self, x):
        if self.ringlen == self.ringcap or \
           self.ringcap != self.bufcap:
            self.buf.append(x)
            self.buflen += 1
            self.bufcap += 1
            if self.pushi == 0: # Optimization.
                self.ringlen = self.buflen
                self.ringcap = self.bufcap
        else:
            self.buf[self.pushi] = x
            self.pushi = (self.pushi + 1) % self.ringcap
            self.buflen += 1
            self.ringlen += 1

    def pop(self):
        assert self.buflen != 0, "Buffer underflow!"
        x = self.buf[self.popi]
        self.buf[self.popi] = None
        self.buflen -= 1
        self.ringlen -= 1
        if self.ringlen == 0 and self.buflen != 0:
            self.popi = self.ringcap
            self.pushi = 0
            self.ringlen = self.buflen
            self.ringcap = self.bufcap
        else:
            self.popi = (self.popi + 1) % self.ringcap
        return x

Note that the above algorithm will allocate each element in the list individually, but in sensible situations it may be better to allocate additional space for the overflow area in advance, to avoid potentially frequent reallocation. In a situation when the rate of consumption of elements is about the same as the rate of production, for instance, there are advantages in doubling the amount of allocated memory per expansion. Given the way in which the algorithm works, the previous ring area will be exhausted before the mixed buffer becomes circular again, so with a constant rate of production and an equivalent consumption it will effectively have its size doubled on expansion.

UPDATE: Below is shown a version of the same algorithm which not only allows allocating more than one additional slot at a time during expansion, but also incorporates it in the overflow area immediately so that the allocated space is used optimally.

class ExpandingCircBuf2(object):

    def __init__(self):
        self.buf = []
        self.buflen = self.ringlen = self.pushi = self.popi = 0
        self.bufcap = self.ringcap = len(self.buf)

    def push(self, x):
        if self.ringcap != self.bufcap:
            expandbuf = (self.pushi == 0)
            expandring = False
        elif self.ringcap == self.ringlen:
            expandbuf = True
            expandring = (self.pushi == 0)
        else:
            expandbuf = False
            expandring = False

        if expandbuf:
            self.pushi = self.bufcap
            expansion = [None, None, None]
            self.buf.extend(expansion)
            self.bufcap += len(expansion)
            if expandring:
                self.ringcap = self.bufcap

        self.buf[self.pushi] = x
        self.buflen += 1
        if self.pushi < self.ringcap:
            self.ringlen += 1
        self.pushi = (self.pushi + 1) % self.bufcap

    def pop(self):
        assert self.buflen != 0, "Buffer underflow!"
        x = self.buf[self.popi]
        self.buf[self.popi] = None
        self.buflen -= 1
        self.ringlen -= 1
        if self.ringlen == 0 and self.buflen != 0:
            self.popi = self.ringcap
            self.ringlen = self.buflen
            self.ringcap = self.bufcap
        else:
            self.popi = (self.popi + 1) % self.ringcap
        return x

Conclusion

This blog post presented an algorithm which supports the expansion of circular buffers while preserving most of their key characteristics. When not faced with an overflowing buffer, the algorithm should offer very similar performance characteristics to a normal circular buffer, with a few additional instructions and constant space for registers only. When faced with an overflowing buffer, the algorithm maintains the FIFO property and enables using contiguous allocated memory to maintain both the original circular buffer and the additional elements, and then reuses the full area as part of a new circular buffer in an attempt to find the proper size for the given use case.

Read more
Gustavo Niemeyer

ZooKeeper is a clever generic coordination server for distributed systems, and is one of the core pieces of software that facilitate the development of Ensemble (a project for automagic IaaS deployments which we push at Canonical), so it was a natural choice to experiment with.

Gozk is a complete binding for ZooKeeper which explores the native features of Go to facilitate the interaction with a ZooKeeper server. To avoid reimplementing the well tested bits of the protocol in an unstable way, Gozk is built on top of the standard C ZooKeeper library.

The experience of integrating ZooKeeper with Go was certainly valuable in itself, and worked as a nice way to learn the details of integrating the Go language with a C library. If you're interested in learning a bit about Go, ZooKeeper, or other details related to the creation of bindings and asynchronous programming, please fasten your seatbelt now.

Basics of C wrapping in Go

Creating the binding in itself was a pretty interesting experiment already. I have worked on the creation of quite a few bindings and language bridges before, and must say I was pleasantly surprised with the experience of creating the Go binding. With Cgo, the name given to the “foreign function interface” mechanism for C integration, one basically declares a special import statement which causes a pre-processor to look at the comment preceding it. Something similar to this:

// #include <zookeeper.h>
import "C"

The comment doesn't have to be restricted to a single line, or even to #include statements. The C code contained in the comment will be transparently inserted into a helper C file which is compiled and linked with the final object file, and the given snippet will also be parsed and its inclusions processed. On the Go side, that “C” import is simulated as if it were a normal Go package so that the C functions, types, and values are all directly accessible.

As an example, a C function with this prototype:

int zoo_wexists(zhandle_t *zh, const char *path, watcher_fn watcher,
                void *context, struct Stat *stat);

In Go may be used as:

cstat := C.struct_Stat{}
rc, cerr := C.zoo_wexists(zk.handle, cpath, nil, nil, &cstat)

When the C function is used in a context where two result values are requested, as done above, Cgo will save the well known errno variable after the function has finished executing and will return it wrapped into an os.Errno value.

Also, note how the C struct is defined in a way that can be passed straight to the C function. Interestingly, the allocation of the memory backing the structure is going to be performed and tracked by the Go runtime, and will be garbage collected appropriately once no more references exist within the Go runtime. This fact has to be kept in mind since the application will crash if a value allocated normally within Go is saved with a foreign C function and maintained after all the Go references are gone. The alternative in these cases is to call the usual C functions to get hold of memory for the involved values. That memory won’t be touched by the garbage collector, and, of course, must be explicitly freed when no longer necessary. Here is a simple example showing explicit allocation:

cbuffer := (*C.char)(C.malloc(bufferSize))
defer C.free(unsafe.Pointer(cbuffer))

Note the use of the defer statement above. Even when dealing with foreign functionality, it comes in handy. The above call will ensure that the buffer is deallocated right before the current function returns, for instance, so it’s a nice way to ensure no leaks happen, even if in the future the function suddenly gets a new exit point which didn’t consider the allocation of resources.

In terms of typing, Go is more strict than C, and Cgo-based logic will also ensure that the types returned and passed into the foreign C functions are correctly typed, in the same way done for the native types. Note above, for instance, how the call to the free() function has to explicitly convert the value into an unsafe.Pointer, even though in C no casting would be necessary to pass a pointer into a void * parameter.

The unsafe.Pointer is in fact a very special type within Go. Using it, one can convert any pointer type into any other pointer type in an unsafe way (thus the package name), and also back and forth into a uintptr value with the address of the memory referenced by the pointer. For every other type conversion, Go will ensure at compilation time that doing the conversion at runtime is a safe operation.

With all of these resources, including the ability to use common Go syntax and functionality even when dealing with foreign types, values, and function calls, the integration task turns out to be quite a pleasant experience. That said, some of the things may still require some good thinking to get right, as we’ll see shortly.

Watch callbacks and channels

One of the most interesting (and slightly tricky) aspects of mapping the ZooKeeper concepts into Go was the “watch” functionality. ZooKeeper allows one to attach a “watch” to a node so that the server will report back when changes happen to the given node. In the C library, this functionality is exposed via a callback function which is executed once the monitored node aspect is modified.

It would certainly be possible to offer this functionality in Go using a similar mechanism, but Go channels provide a number of advantages for that kind of asynchronous notification: waiting for multiple events via the select statement, synchronous blocking until the event happens, testing if the event is already available, etc.

The tricky bit, though, isn't the use of channels. That part is quite simple. The tricky detail is that the C callback function execution happens in a C thread started by the ZooKeeper library, and happens asynchronously, while the Go application is doing its business elsewhere. Right now, there's no straightforward way to transfer the execution of this asynchronous C function back into Go land. The solution for this problem was found with some help from the folks at the golang-nuts mailing list, and luckily it's not that hard to support or understand. That said, this is a good opportunity to get some coffee or your preferred focus-enhancing drink.

The solution works like this: when the ZooKeeper C library gets a watch notification, it executes a C callback function which is inside a Gozk helper file. Rather than transferring control to Go right away, this C function simply appends data about the event onto a queue, and signals a pthread condition variable to notify that an event is available. Then, on the Go side, once the first ZooKeeper connection is initialized, a new goroutine is fired and loops waiting for events to be available. The interesting detail about this loop is that it blocks within a foreign C function waiting for an event to be available, through the signaling of the shared pthread condition variable. On the Go side, this is how the call looks, just to give a more practical feeling:

// This will block until there's a watch available.
data := C.wait_for_watch()

Then, on the C side, here is the function definition:

watch_data *wait_for_watch() {
    watch_data *data = NULL;
    pthread_mutex_lock(&watch_mutex);
    while (first_watch == NULL)  /* loop guards against spurious wakeups */
        pthread_cond_wait(&watch_available, &watch_mutex);
    data = first_watch;
    first_watch = first_watch->next;
    pthread_mutex_unlock(&watch_mutex);
    return data;
}

As you can see, not really a big deal. When that kind of blocking occurs inside a foreign C function, the Go runtime will correctly continue the execution of other goroutines within other operating system threads.

The result of this mechanism is a nice to use interface based on channels, which may be explored in different ways depending on the application needs. Here is a simple example blocking on the event synchronously, for instance:

stat, watch, err := zk.ExistsW("/some/path")
if stat == nil && err == nil {
    event := <-watch
    // Use event ...
}

Concluding

Those were some of the interesting aspects of implementing the ZooKeeper binding. I would like to speak about some additional details, but this post is rather long already, so I'll keep that for a future opportunity. The code is available under the LGPL, so if you're curious about some other aspect, or would like to use ZooKeeper with Go, please move on and check it out!

Read more

Just in case people aren't aware, here's the coding style guide, freshly updated with C++ bits.

Read more