Canonical Voices

Posts tagged with 'article'

niemeyer

As recently announced, my latest endeavor at Canonical is to enable graphical client-side development with the Go language via a new qml Go package that integrates the language with Qt’s QML framework.

The QML framework solves the problem of designing graphical applications via a language that offers a convenient mix of declarative and procedural features. As a very simple example, if the following QML content is loaded by itself, it will draw a square centered inside a window:

import QtQuick 2.0

Rectangle {
        width: 640
        height: 280
        color: "black"

        Rectangle {
                width: 250; height: 250
                anchors.centerIn: parent
                color: "red"

                MouseArea {
                        anchors.fill: parent
                        onClicked: console.log("Stop poking me!")
                }
        }
}

As expected, it would look similar to:

Centralized Rectangle

The red rectangle will remain centered as the window is resized, and will print a message every time it is clicked. The clicked signal was hooked into logic implemented in JavaScript in this case, but it might as well have been hooked into custom Go logic with the same ease using the qml package. With very minor changes, it could also be something much more interesting than a red rectangle, such as a modern web browser:
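
To make that concrete, here is a rough sketch of what driving that QML file from Go might look like. The qml package was still in flux when this was written, so the names used below (qml.Run, NewEngine, LoadFile, CreateWindow, ObjectByName, On) follow the later published interface and should be read as illustrative rather than definitive; the snippet also assumes the MouseArea above was given an objectName of "poker" so that it can be looked up from Go.

// Illustrative sketch only; API names are approximate and the
// objectName "poker" is an assumption not present in the QML above.
func run() error {
        engine := qml.NewEngine()
        component, err := engine.LoadFile("rect.qml")
        if err != nil {
                return err
        }
        window := component.CreateWindow(nil)

        // Hook the clicked signal into Go logic instead of JavaScript.
        area := window.Root().ObjectByName("poker")
        area.On("clicked", func() {
                fmt.Println("Stop poking me!")
        })

        window.Show()
        window.Wait()
        return nil
}

func main() {
        if err := qml.Run(run); err != nil {
                log.Fatal(err)
        }
}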

import QtQuick 2.0
import QtWebKit 3.0

Rectangle {
        width: 640
        height: 280
        color: "black"

        WebView {
                width: 250; height: 250
                anchors.centerIn: parent
                url: "http://golang.org"
        }
}

The result would be:

Web Page Rectangle

It’s worth noting that this wasn’t just a screenshot of a web page, but rather a tiny fully functional web browser accessing a web page, easily embedded into the QML application to satisfy whatever browsey needs the author might have.

Being able to leverage all these existing building blocks so comfortably is what makes a graphics platform attractive to develop in, and is what inspired the ongoing effort to have that platform working under the Go language. As we can observe in some of the work published, that side of things seems to be going well.

Now, the next level up is to enable people to create such building blocks without leaving the Go language. The initial step towards that is already committed to the code repository. It enables Go types to be created and seamlessly integrated into the QML language. For example, consider this simple Go type:

type GoType struct {
        Text string
}

func (v *GoType) OnTextChanged() {
        fmt.Println("Text changed...")
}

If this type is made available to QML content via the qml.RegisterTypes function, it may then be used as any other native type. This would work as a QML file, for instance:

import GoExtensions 1.0

GoType {
        text: "Have you signed up for GopherCon yet?"
}
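
For completeness, here is a hedged sketch of what the registration side might look like in Go. The details of qml.RegisterTypes and its TypeSpec argument changed while the package matured, so treat the exact fields below as an approximation rather than the definitive API:

// Approximate registration sketch: exposes GoType to QML under the
// "GoExtensions 1.0" import used in the snippet above.
func init() {
        qml.RegisterTypes("GoExtensions", 1, 0, []qml.TypeSpec{{
                Init: func(v *GoType, obj qml.Object) {},
        }})
}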

There’s a relevant detail, though: this type has no visible content by itself. This means that displaying something must be done via interactions with other QML items, or via external systems (dbus, for example).

Solving that problem has been one of my objectives in the last month, and although it’s not yet publicly available, the work is in a good-enough state that I feel comfortable talking about it.

As described, the main goal is enabling these custom types to paint. This will be achieved by exposing a Go package that offers an OpenGL API, and by defining methods that the custom types must implement to be able to render at appropriate times. Although the details aren't finalized, the current draft can run familiar OpenGL code similar to the following:

func (v *GoType) Paint(p *qml.Painter) {
        obj := p.Object()
        width := gl.Float(obj.Int("width"))
        height := gl.Float(obj.Int("height"))

        gl.Enable(gl.BLEND)
        gl.BlendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
        gl.Color4f(1.0, 1.0, 1.0, 0.8)
        gl.Begin(gl.QUADS)
        gl.Vertex2f(0, 0)
        gl.Vertex2f(width, 0)
        gl.Vertex2f(width, height)
        gl.Vertex2f(0, height)
        gl.End()

        gl.LineWidth(2.5)
        gl.Color4f(0.0, 0.0, 0.0, 1.0)
        gl.Begin(gl.LINES)
        gl.Vertex2f(0, 0)
        gl.Vertex2f(width, height)
        gl.Vertex2f(width, 0)
        gl.Vertex2f(0, height)
        gl.End()
}

One interesting point to realize is that, again, this isn’t just rendering into a lonely OpenGL context. The rendered content goes into a framebuffer that lives within the overall QML scene graph, and has the same integration capabilities as every other QML item.

In the following animated image, for example, the white square was generated with the above GL code, and was rendered next to other native items with this QML source:

Go + OpenGL Example

Note the smooth blending and proper overlapping established in the scene graph based on the ordering of elements in the QML source. The animation was also driven by QML without special handling of the content rendered from Go.

Another relevant point in terms of the integration with Go is that the GL code demonstrated above is considered old-school these days. The modern version uses fewer calls, making better use of the graphics card memory by transferring, tracking, and handling data such as vertices in contiguous arrays. This reduces the impact of the cross-context calls that depart and rejoin the Go runtime, and can unlock interesting use cases that the overhead might otherwise prevent.

In terms of availability of these features, we’re about to enter a holiday season, so they should be polished enough for a first public review at some point in January. Looking forward to it.

UPDATE (2014-01-27): These features are now publicly available. The painting example demonstrates the exact scenario pictured above.

UPDATE (2014-02-18): New screencast demonstrates some of the OpenGL features of Go QML.

Read more
niemeyer

Note: This is a candidate version of the specification. This note will be removed once v1 is closed, and any changes will be described at the end. Please get in touch if you’re implementing it.

Introduction

This specification defines strepr, a stable representation that enables computing hashes and cryptographic signatures out of a defined set of composite values that is commonly found across a number of languages and applications.

Although the defined representation is a serialization format, it isn’t meant to be used as a traditional one. It may not be seen entirely in memory at once, or written to disk, or sent across the network. Its role is specifically in aiding the generation of hashes and signatures for values that are serialized via other means (JSON, BSON, YAML, HTTP headers or query parameters, configuration files, etc).

The format is designed with the following principles in mind:

Understandable — The representation must be easy to understand to increase the chances of it being implemented correctly.

Portable — The defined logic works properly when the data is being transferred across different platforms and implementations, independently from the choice of protocol and serialization implementation.

Unambiguous — As a natural requirement for producing stable hashes, there is a single way to process any supported value being held in the native form of the host language.

Meaning-oriented — The stable representation holds the meaning of the data being transferred, not its type. For example, the number 7 must be represented in the same way whether it’s being held in a float64 or in a uint16.


Supported values

The following values are supported:

  • nil: the nil/null/none singleton
  • bool: the true and false singletons
  • string: raw sequence of bytes
  • integers: positive, zero, and negative integer numbers
  • floats: IEEE754 binary floating point numbers
  • list: sequence of values
  • map: associative value→value pairs


Representation

nil = 'z'

The nil/null/none singleton is represented by the single byte 'z' (0x7a).

bool = 't' / 'f'

The true and false singletons are represented by the bytes 't' (0x74) and 'f' (0x66), respectively.

unsigned integer = 'p' <value>

Positive and zero integers are represented by the byte 'p' (0x70) followed by the variable-length encoding of the number.

For example, the number 131 is always represented as {0x70, 0x81, 0x03}, independently from the type that holds it in the host language.

negative integer = 'n' <absolute value>

Negative integers are represented by the byte 'n' (0x6e) followed by the variable-length encoding of the absolute value of the number.

For example, the number -131 is always represented as {0x6e, 0x81, 0x03}, independently from the type that holds it in the host language.

string = 's' <num bytes> <bytes>

Strings are represented by the byte 's' (0x73) followed by the variable-length encoding of the number of bytes in the string, followed by the specified number of raw bytes. If the string holds a list of Unicode code points, the raw bytes must contain their UTF-8 encoding.

For example, the string hi is represented as {0x73, 0x02, 'h', 'i'}

Due to the complexity involved, Unicode normalization is not required by implementations of this specification. Consequently, Unicode strings that would be equal if normalized may have different stable representations.

binary float = 'd' <binary64>

32-bit or 64-bit IEEE754 binary floating point numbers that are not holding integers are represented by the byte 'd' (0x64) followed by the big-endian 64-bit IEEE754 binary floating point encoding of the number.

There are two exceptions to that rule:

1. If the floating point value is holding a NaN, it must necessarily be encoded by the following sequence of bytes: {0x64, 0x7f, 0xf8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}. This ensures all NaN values have a single representation.

2. If the floating point value is holding an integer number it must instead be encoded as an unsigned or negative integer, as appropriate. Floating point values that hold integer numbers are defined as those where floor(v) == v && abs(v) != ∞.

For example, the value 1.1 is represented as {0x64, 0x3f, 0xf1, 0x99, 0x99, 0x99, 0x99, 0x99, 0x9a}, but the value 1.0 is represented as {0x70, 0x01}, and -0.0 is represented as {0x70, 0x00}.

This distinction means all supported numbers have a single representation, independently from the data type used by the host language and serialization format.
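
As a minimal sketch of that dispatch rule (my own illustration, not code from the reference implementation), the following Go function produces the 'd' form and the canonical NaN bytes, and signals when the value must instead be handed to the integer encoders. It relies only on the standard math and encoding/binary packages.

// encodeBinaryFloat returns the 'd' encoding of v, or ok=false when v
// holds an integer and must use the 'p'/'n' integer representation.
func encodeBinaryFloat(v float64) (enc []byte, ok bool) {
        if math.IsNaN(v) {
                // All NaN values collapse into a single canonical representation.
                return []byte{0x64, 0x7f, 0xf8, 0, 0, 0, 0, 0, 0}, true
        }
        if math.Floor(v) == v && !math.IsInf(v, 0) {
                // Integer-valued (includes 1.0 and -0.0): encode as an integer.
                return nil, false
        }
        enc = make([]byte, 9)
        enc[0] = 0x64 // 'd'
        binary.BigEndian.PutUint64(enc[1:], math.Float64bits(v))
        return enc, true
}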

list = 'l' <num items> [<item> ...]

Lists of values are represented by the byte 'l' (0x6c), followed by the variable-length encoding of the number of items in the list, followed by the stable representation of each item in the list in the original order.

For example, the value [131, -131] is represented as {0x6c, 0x02, 0x70, 0x81, 0x03, 0x6e, 0x81, 0x03}.

map = 'm' <num pairs> [<item key> <item value>  ...]

Associative maps of values are represented by the byte 'm' (0x6d) followed by the variable-length encoding of the number of pairs in the map, followed by an ordered sequence of the stable representation of each key and value in the map. The pairs must be sorted so that the stable representation of the keys is in ascending lexicographical order. A map must not have multiple keys with the same representation.

For example, the map {"a": 4, 5: "b"} is always represented as {0x6d, 0x02, 0x70, 0x05, 0x73, 0x01, 'b', 0x73, 0x01, 'a', 0x70, 0x04}.
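
As a small illustration of the ordering step (a sketch, not taken from the reference implementation), once each key and value has been encoded the pairs can be sorted by comparing the key representations byte-wise:

// pair holds the already-encoded stable representations of a key and value.
type pair struct {
        key, value []byte
}

// sortPairs orders map pairs by ascending lexicographical order of the
// stable representation of their keys, as required above.
func sortPairs(pairs []pair) {
        sort.Slice(pairs, func(i, j int) bool {
                return bytes.Compare(pairs[i].key, pairs[j].key) < 0
        })
}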


Variable-length encoding

Integers are variable-length encoded so that they can be represented in little space and with unbounded size. In an encoded number, the last byte holds the 7 least significant bits of the unsigned value, with zero as the eighth bit. If there are remaining non-zero bits, the previous byte holds the next 7 bits, and its eighth bit is set to flag the continuation into the following byte. The process continues until no non-zero bits remain. The most significant bits end up in the first byte of the encoded value, which must necessarily not be 0x80.

For example, the number 128 is variable-length encoded as {0x81, 0x00}.
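
The following Go function is a compact sketch of that encoding (again my own illustration, not lifted from the reference implementation):

// encodeUint produces the variable-length encoding described above: the
// final byte carries the 7 least significant bits with the eighth bit
// clear, and earlier bytes carry more significant groups with it set.
func encodeUint(v uint64) []byte {
        enc := []byte{byte(v & 0x7f)}
        for v >>= 7; v > 0; v >>= 7 {
                enc = append([]byte{byte(v&0x7f) | 0x80}, enc...)
        }
        return enc
}

// encodeUint(131) => {0x81, 0x03}; encodeUint(128) => {0x81, 0x00}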


Reference implementation

A reference implementation is available, including a test suite which should be considered when implementing the specification.


Changes

draft1 → draft2

  • Enforce the use of UTF-8 for Unicode strings and explain why normalization is being left out.
  • Enforce a single NaN representation for floats.
  • Explain that map key uniqueness refers to the representation.
  • Don’t claim the specification is easy to implement; floats require attention.
  • Mention reference implementation.

Read more
niemeyer

A few years ago, when I started pondering the possibility of porting juju to the Go language, one of the first pieces of the puzzle to be put in place was goyaml: a Go package to parse and serialize YAML documents. This was just an experiment and, as a sane route to get started, a Go layer that does all the language-specific handling was written on top of the libyaml C scanner, parser, and serializer library.

This was a good initial plan, but for a number of reasons the end goal was always to have a pure Go implementation. Having a C layer in a Go program slows down builds significantly due to the time taken to build the C code, makes compiling in other platforms and cross-compiling harder, has certain runtime penalties, and also forces the application to drop the memory safety guarantees offered by Go.

For these reasons, over the last couple of weeks I took a few hours a day to port the C backend to Go. The total time, considering full-time work days, would be equivalent to about a week's worth of work.

The work started on the scanner and parser side of the library. This took most of the time, not only because it encompassed more than half of the code base, but also because the shared logic had to be ported too, and there was a need to understand which patterns were used in the old code and how they would be converted across in a reasonable way.

The whole scanner and parser plus header files, or around 5000 code lines of C, were ported over in a single shot without intermediate runs. To steer the process in a sane direction, gofmt was called often to reformat the converted code, and then the project was compiled every once in a while to make sure that the pieces were hanging together properly enough.

It’s worth highlighting how useful gofmt was in that process. The C code was converted in the most convenient way to type it, and then gofmt would quickly put it all together in a familiar form for analysis. Not rarely, it would also point out trivial syntactic issues. A double win.

After the scanner and parser were finally converted completely, the pre-existing Go unmarshaling logic was shifted to the new pure implementation, and the reading side of the test suite could run as-is. Naturally, though, it didn’t work out of the box.

To quickly pick up the errors in the new implementation, the C logic and the Go port were put side-by-side to run the same tests, and tracing was introduced in strategic points of the scanner and parser. With that, it was easy to spot where they diverged and pinpoint the human errors.

It took about two hours to get the full suite to run successfully, with a handful of bugs uncovered. Out of curiosity, the issues were:

  • An improperly dropped parenthesis affected the precedence of an expression
  • A slice was being iterated with copying semantics where a reference was necessary
  • A pointer arithmetic conversion missed the base where there was base+offset addressing
  • An inner scoped variable improperly shadowed the outer scope

The same process of porting and test-fixing was then repeated on the serializing side of the project, in a much shorter time frame for the reasons cited.

The resulting code isn’t yet idiomatic Go. There are several signs in it that it was ported over from C: the naming conventions, the use of custom solutions for buffering and reader/writer abstractions, the excessive copying of data due to the need of tracking data ownership so the simple deallocating destructors don’t double-free, and so on. It has also been deoptimized, due to changes such as the removal of macros (in many cases by expanding them inline) and the direct expansion of large unions, which causes some core objects to grow significantly.

At this point, though, it’s easy to gradually move the code base towards the common idiom in small increments and as time permits, cleaning up the artifacts that were left behind.

This code will be made public over the next few days via a new goyaml release. Meanwhile, some quick facts about the process and outcome follow.

Lines of code

According to cloc, there was a total of 7070 lines of C code in .c and .h files. Of those, 6727 were ported, and the remaining 342 belonged to 12 functions that were left unconverted as unnecessary for now. Those 6727 lines of C became 5039 lines of Go code in a mostly one-to-one dumb translation.

That difference comes mainly from garbage collection, lack of forward declarations, standard helpers such as append, range-based for loops, first class slice type with length and capacity, internal OOM handling, and so on.

Future work can easily increase the difference further by replacing some of the ported logic with more sensible options available in Go, such as standard abstractions for readers and writers, buffered writing support as available in the standard library, and so on.

Code clarity and safety

In the specific context of the work done, which is that of a scanner, parser, and serializer, the slice abstraction is responsible for noticeable clarity gains in the code when compared to the equivalent logic based on pointer arithmetic. It also gives a much more comforting guarantee of correctness of the written code due to bounds checking.

Performance

While curious, this shouldn’t be taken as a performance comparison between the two languages, as it is comparing a fine-tuned C implementation with something that is worse than a direct one-to-one port: not only has no time at all been spent on preventing waste, but the original logic was deoptimized by changes such as the removal of inlining macros and the expansion of large unions. There are many obvious changes to be done to improve performance.

With that out of the way, in a simple decoding benchmark the C-backed decoder runs in about 37% of the time taken by the out-of-the-box deoptimized Go port.

Output size

The previous goyaml.a Go package file had 1463kb. The new one has 1016kb. This difference includes glue code generated for the integration.

Considering only the .c and .h files involved in the port, the C object code generated with the standard flags used by the go build tool (-g -O2) sums up to 789kb. The equivalent Go code with the standard settings compiles to 664kb. The 12 functions not ported are also part of that difference, so the difference is pretty much negligible.

Build time

Building the 8 .c files alone takes 3.6 seconds with the standard flags used by the go build tool (-g -O2). After the port, building the entire Go project with the standard settings takes 0.3 seconds.

Mechanical changes

Many of the mechanical changes were done using regular expressions. Excluding the trivial ones, about a dozen regular expressions were used to swap variable and type names, drop parenthesis, place brackets in the right locations, convert function declarations, and so on.

Read more
niemeyer

Last week I was part of a rant with a couple of coworkers about the fact that Go handles errors for expected scenarios by returning an error value instead of using exceptions or a similar mechanism. This is a rather controversial topic because people have grown used to having errors out of their way via exceptions, and Go brings back an improved version of a well-known pattern previously adopted by a number of languages — including C — where errors are communicated via return values. This means that errors are in the programmer’s face and have to be dealt with all the time. In addition, the controversy extends to the fact that, in languages with exceptions, every unadorned error comes with a full traceback of what happened and where, which in some cases is convenient.

All this convenience has a cost, though, which is rather simple to summarize:

Exceptions teach developers to not care about errors.

A sad corollary is that this is relevant even if you are a brilliant developer, as you’ll be affected by the world around you being lenient towards error handling. The problem will show up in the libraries that you import, in the applications sitting on your desktop, and in the servers that back your data as well.

Raymond Chen described the issue back in 2004 as:

Writing correct code in the exception-throwing model is in a sense harder than in an error-code model, since anything can fail, and you have to be ready for it. In an error-code model, it’s obvious when you have to check for errors: When you get an error code. In an exception model, you just have to know that errors can occur anywhere.

In other words, in an error-code model, it is obvious when somebody failed to handle an error: They didn’t check the error code. But in an exception-throwing model, it is not obvious from looking at the code whether somebody handled the error, since the error is not explicit.
(…)
When you’re writing code, do you think about what the consequences of an exception would be if it were raised by each line of code? You have to do this if you intend to write correct code.

That’s exactly right. Every line that may raise an exception holds a hidden “else” branch for the error scenario that is very easy to forget about. Even if it sounds like a pointless repetitive task to be entering that error handling code, the exercise of writing it down forces developers to keep the alternative scenario in mind, and pretty often it doesn’t end up empty.

It isn’t the first time I write about that, and given the controversy that surrounds these claims, I generally try to find one or two examples that bring the issue home. So here is the best example I could find today, within the pty module of Python’s 3.3 standard library:

def spawn(argv, master_read=_read, stdin_read=_read):
    """Create a spawned process."""
    if type(argv) == type(''):
        argv = (argv,)
    pid, master_fd = fork()
    if pid == CHILD:
        os.execlp(argv[0], *argv)
    (...)

Every time someone calls this logic with an improper executable in argv there will be a new Python process lying around, uncollected, and unknown to the application, because execlp will fail, and the process just forked will be disregarded. It doesn’t matter if a client of that module catches that exception or not. It’s too late. The local duty wasn’t done. Of course, the bug is trivial to fix by adding a try/except within the spawn function itself. The problem, though, is that this logic looked fine for everybody that ever looked at that function since 1994 when Guido van Rossum first committed it!
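
For contrast, here is a rough Go sketch of the same step in the error-code style. This is not a port of the pty module, just a hypothetical illustration of how the failing exec becomes impossible to overlook: the error value is returned at the call site and has to be either handled or explicitly discarded.

// Hypothetical sketch: spawning a child process in Go forces the caller
// to face the possibility of failure right where it can happen.
func spawn(argv []string) (*exec.Cmd, error) {
        cmd := exec.Command(argv[0], argv[1:]...)
        if err := cmd.Start(); err != nil {
                // A broken executable is noticed immediately, and no orphaned
                // process is left behind unknown to the application.
                return nil, fmt.Errorf("cannot spawn %q: %v", argv[0], err)
        }
        return cmd, nil
}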

Here is another interesting one:

$ make clean
Sorry, command-not-found has crashed! Please file a bug report at:

https://bugs.launchpad.net/command-not-found/+filebug

Please include the following information with the report:

command-not-found version: 0.3
Python version: 3.2.3 final 0
Distributor ID: Ubuntu
Description:    Ubuntu 13.04
Release:        13.04
Codename:       raring
Exception information:

unsupported locale setting
Traceback (most recent call last):
  File "/.../CommandNotFound/util.py", line 24, in crash_guard
    callback()
  File "/usr/lib/command-not-found", line 69, in main
    enable_i18n()
  File "/usr/lib/command-not-found", line 40, in enable_i18n
    locale.setlocale(locale.LC_ALL, '')
  File "/usr/lib/python3.2/locale.py", line 541, in setlocale
    return _setlocale(category, locale)
locale.Error: unsupported locale setting

That’s a pretty harsh crash for the lack of locale data in a system-level application that is, ironically, supposed to tell users what packages to install when commands are missing. Note that at the top of the stack there’s a reference to crash_guard. This function has the intent of catching all exceptions right at the edge of the call stack, and displaying a detailed system specification and traceback to aid in fixing the problem.

Such “parachute catching” is a fairly common pattern in exception-oriented programming and tends to give developers the false sense of having good error handling within the application. Rather than actually guarding the application, though, it’s just a useful way to crash. The proper thing to have done in the case above would be to print a warning, if at all, and then let the program run as usual. This would have been achieved by simply wrapping that one line as in:

try:
    locale.setlocale(locale.LC_ALL, '')
except Exception as e:
    print("Cannot change locale:", e)

Clearly, it was easy to handle that one. The problem, again, is that it was very natural to not do it in the first place. In fact, it’s more than natural: it actually feels good to not be looking at the error path. It’s less code, more linear, and what’s left is the most desired outcome.

The consequence, unfortunately, is that we’re immersing ourselves in a world of brittle software and pretty whales. Although more verbose, the error result style builds the correct mindset: does that function or method have a possible error outcome? How is it being handled? Is that system-interacting function not returning an error? What is being done with the problem that, of course, can happen?

A surprising number of crashes and plain misbehavior is a result of such unconscious negligence.

Read more
niemeyer

Ethics for code reviewers

In the previous post, I explored a bit how ephemeral most of the artifacts of software development processes are. One of these processes is code reviewing, which is arguably a major player in code quality, knowledge acquisition, and even team dynamics.

Even being so important, the outcome of the code review process — the review itself — tends to reach a very limited audience and have a short life time. It’ll be hard to change that picture given the nature of reviews: they are conversational, and address specific issues for the integration of a change in the project. At the same time, even if code reviews are not generally useful as permanent documentation, we can increase their value as reference material by improving the quality of those conversations. Having a good conversation has many other great side effects, of course.

As a small step in that direction, what follows are personal guidelines that I have been evolving empirically over the years as a software developer and code reviewer. They may not bring you fortune and fame, and are not always easy to apply, but hopefully they will help improving your experience as a member of your team and the value of those reviews.

Explain why

Unless the change addresses an extremely obvious mistake, explain why you’re suggesting it. If the reasoning had been natural to the author, he’d have done it that way in the first place. Good explanations also help avoid the same mistake being made over and over again, and are much more rewarding to the listener. They also become a target for future reference.

If you don’t have enough time to justify it and would rather provide the review sooner rather than later, one approach is to just recommend the change and invite the author for a conversation later if that would be helpful. That said, try to have that conversation over a medium that may be shared with the rest of the team, or recalled whenever necessary.

Be respectful

Always keep in mind that there’s a person on the other side of the wire, not a machine, and that it’s hard to understand written words with little context. Avoid letting anger and frustration leak into the review, even if you feel it is justified. There’s no good outcome in those situations.

It doesn’t matter who broke it, or who coded that silly piece of code. If there is broken code, and the project has reviews, multiple people were in the pipeline for that result, and they were trying to get it right. Take shared ownership of the problem, and look for the solution and for how to avoid such issues in the future.

Praise the good work

Reviews carry a somewhat low-energy feel by their very nature. No matter how positive you are about them, and how much the whole team understands and agrees it is for the best, you are in fact looking for places to put your finger in someone else’s work. For that reason, it is very helpful to take every chance you can to praise logic, design, code organization, or whatever else you honestly felt was well done. It won’t ever balance things out, but it will at least remind the author that their contributions are welcome.

Suggestions are appreciated

Perhaps a longer variable name would be helpful, or that constant could have a more descriptive name? In many circumstances, the change is indeed subjective, and the gain is pretty marginal. In those cases, if you really can’t resist the urge to say something, a good approach is a suggestion that may be exercised or not at the author’s discretion. Ideally, suggest several options that would feel better to you, so that your point is better understood and agreement is easier. That said, read on.

Avoid trivialities

When reviewing that very simple point, think to yourself: all things considered, does it actually matter? Is the cost of the author’s time, and the potential debate, really worth it? You surely have your opinion about whether to spell “min <= count” or “count >= min“, but so does everybody else. When it’s purely a matter of preference, the author is entitled to have one after all.

Small branches win

Code reviews are useful for a number of secondary reasons, but the primary goal of the code review is to analyze a proposed change, to fix it for inclusion, or to reject it. It’s often tempting to recommend further changes to be bundled onto the same review, but it’s important to keep some focus. Are these additional changes tightly related to the original idea, or would they rather be more appropriate on a future branch?

Also keep an eye on large review submissions. It’s quite rare to see changes of a thousand lines or more that are really an indivisible unit. More often, it ends up like that organically, as a result of the workflow followed by the author. These branches may be very frustrating, both for the author and for reviewers. For the reviewer, it’s hard to keep the necessary level of attention and enthusiasm for the problem over extended periods of time. For the author, it will be equally problematic to iterate over a large review. In some extreme cases, it may be worth going back and breaking the change down into smaller change sets.

Overall, fast iterations on small branches are much more rewarding to work with.

Work with inline comments

This is about tooling, but doing anything else should be considered unethical really. If you don’t have a system that allows the change diff to be seen within the rest of the content, and comments to be made inlined right where you see the issue, implement one right now. Moving to such a system was the most dramatic change in productivity I had as a reviewer in the past several years, and makes the whole experience a lot more bearable for everyone.

Enjoy!

Make sure you’re enjoying what you do, and appreciate what your code reviews are achieving. There’s little point in playing the role of an intelligent computer over extended periods of time if you are unhappy about it. Get yourself your preferred slow-drinking beverage (chimarrão?), perhaps some snacks, a comfortable chair, and relax.

Read more
niemeyer

Lately I’ve been considering the amount of waste we produce during software development, and how to increase the amount of recycled content. I’m not talking about actual trash, though, but rather about software development artifacts.

Over the years, we’ve learned about and put in practice several means for improving the quality and success rate of projects we create or contribute to. We have practices such as sprints to get people together with high communication bandwidth; we have code reviews for sharing knowledge and improving project quality; we’ve got technical leadership roles to mentor developers and guide the progress of projects; we’ve created kanban boards and burndown charts to help people visualize what they’re going through; and so on.

While all of that seems to have helped tremendously, there’s a sad fact about where we stand: the artifacts of most of these processes are local to their context, and very sensitive to time. That burndown chart is meaningless after it’s burned, and a kanban has no relevant history. Our technical leads indeed guide their teams, but their wisdom stays with the few people that had the chance to interact with them, and subjectively so. That brilliant code review from our best developers has a very limited audience, and rarely carries any meaning just days after it has been accomplished.

That last one is especially interesting. The process of reviewing code is an intense and very expensive task that takes a significant portion of the life of an active developer, and even then very little is carried forward as the outcome of that process. We have no effective means or even culture of sharing the generated wisdom with other teams. In fact, we rarely share these details even within the team itself. Why was that line changed like this? Why is an interface like that a bad idea? Who will instruct the new guy next week, and where did we record a bit of the wisdom of the brilliant guy that left the company recently?

Unfortunately there’s probably no easy solution for this problem. At this point, I mainly recognize that most of the efforts I’ve led to improve software development over the past several years had a very limited scope. The software itself became immediately better as a result of my efforts, its design became more sensible, and hopefully I contributed a bit to the growth of people around me, but at a company or even community-wide scope, all of these code reviews, sprints, and IRC conversations are buried and only very rarely revisited.

I want to start doing something about this, though. There must be a way to shape these conversations in a more reusable format; in a way that knowledge and agreement can be more proactively preserved and scattered. Perhaps it’s more about how than it is about what. Perhaps we just need to write more posts like this, and cover more topics related to daily development findings. Not sure. I’ll be thinking…

Read more
niemeyer

Our son Otávio was born recently. Right in the first few days, we decided to keep tight control on the feeding times for a while, as it is an intense routine pretty unlike anything else, and obviously critical for the health of the baby. I imagined that it wouldn’t be hard to find an Android app that would do that in a reasonable way, and indeed there are quite a few. We went with Baby Care, as it has a polished interface and more features than we’ll ever use. The app also includes some basic statistics, but not enough for our needs. Luckily, though, it is able to export the data as a CSV file, and post-processing that file with the R language is easy, and allows extracting some fun facts about what the routine of a healthy baby can look like in the first month, as shown below.

Otávio

The first thing to do is to import the raw data from the CSV file. It is a one-liner in R:

> info = read.csv("baby-care.csv", header=TRUE)

Then, this file actually comes with other events that won’t be processed now, so we’ll slice it and grab only the rows and columns of interest:

> feeding <- info[info$Event.type == "Breast",
        c("Event.subType", "Start.Time", "End.Time", "Duration")]

This is what it looks like:

> feeding[100:103,]
    Event.subType       Start.Time         End.Time Duration
129          Left 2013/01/04 13:45 2013/01/04 14:01    00:16
132          Left 2013/01/04 16:21 2013/01/04 16:30    00:09
134         Right 2013/01/04 17:46 2013/01/04 17:54    00:08

Now things get more interesting. Let’s extract that duration column into a more useful vector, and do some basic analysis:

> duration <- as.difftime(as.vector(feeding$Duration), "%H:%M")

> length(duration)
[1] 365

> total = sum(duration)
> units(total) = "hours"
> total
Time difference of 63.71667 hours

> mean(duration)
Time difference of 10.47397 mins
> sd(duration)
[1] 5.937172

A total of 63 hours surprised me, but the mean time of around 10 minutes per feeding is within the recommendation, and the standard deviation looks reasonable. It may be more conveniently pictured as a histogram:

> hist(as.numeric(duration), breaks="FD",
    col="blue", main="", xlab="Minutes")

Duration histogram

Another point we were interested in is whether both sides are properly balanced:

> sides <- c("  Right", "  Left")
> tapply(duration, feeding$Event.subType, mean)[sides]
   Right     Left 
10.72283 10.22099

Looks good.

All of the analysis so far goes over the whole period, but how has the daily intake changed over time? We’ll need an additional vector to compute this and visualize in a chart:

> day <- format(strptime(feeding$Start.Time, "%Y/%m/%d %H:%M"),
                "%Y/%m/%d")
> perday <- tapply(duration, day, sum)
> mean(perday)
[1] 136.5357
> sd(perday)
[1] 53.72735
> sd(perday[8:length(perday)])
[1] 17.49735

> plot(perday, type="h", col="blue", xlab="Day", ylab="Minutes")

Daily duration

The mean looks good, with about two hours every day. The standard deviation looks high at first, but it’s actually not that bad if we take off the first few days. Looking at the graph shows why: the slope on the left-hand side, which is expected as there’s less milk and the baby has more trouble right after birth.

The chart shows a red flag, though: one day seems well below the mean. This is something to be careful about, as babies can get into a loop where they sleep too much and miss being hungry, the lack of feeding causes hypoglycemia, which causes more sleep, and it doesn’t end up well. A rule of thumb is to wake the baby up every two hours in the first few days, and at most every four hours once he stabilizes for the following weeks.

So, this was another point of interest: what are the intervals between feedings?

> start = strptime(feeding$Start.Time, "%Y/%m/%d %H:%M")
> end = strptime(feeding$End.Time, "%Y/%m/%d %H:%M")
> interval <- start[-1]-end[-length(end)]

> hist(as.numeric(interval), breaks="FD", col="blue",
       main="", xlab="Minutes")

Interval histogram

Seems great, with most feedings well under two hours. There's a worrying outlier, though, of more than 6 hours. Unsurprisingly, it happened over night:

> feeding$End.Time[interval > 300]
[1] 2013/01/07 00:52

It wasn't a significant issue, but we don't want that happening often while his body isn't yet ready to hold enough energy for a full night of sleep. That's the kind of reason we've been monitoring him, and is important because our bodies are eager to get full nights of sleep, which opens the door for unintended slack. As a reward for that kind of control, we've got the chance to enjoy not only his health, but also an admirable mood.

Love, Dad.

Read more
niemeyer

I’m glad to announce experimental support for multi-document transactions in the mgo driver that integrates MongoDB with the Go language. The support is done via a driver extension, so it works with any MongoDB release supported by the driver (>= 1.8).

Features

Here is a quick highlight list to get your brain ticking before the details:

  • Supports sharding
  • Operations may span multiple collections
  • Handles changes, inserts and removes
  • Supports pre-conditions
  • Self-healing
  • No additional locks or leases
  • Works with existing data

Let’s see what these actually mean and how the goodness is done.


The problem being addressed

The typical example is a bank transaction: imagine you have two documents representing accounts for different people, and you want to transfer 100 bucks from Aram to Ben. Despite the apparent simplicity in that description, there are a number of edge cases that turn it into a non-trivial change.

Imagine an agent processing the change following these steps:

  1. Is Ben’s account valid?
  2. Take 100 bucks out of Aram’s account if its balance is above 100
  3. Insert 100 bucks into Ben’s account

Note that this description already assumes the availability of some single-document atomic operations as supported by MongoDB. Even then, how many race conditions and crash-related problems can you count? Here are some spoilers that hint at the problem complexity:

  • What if Ben cancels his account after (1)?
  • What if the agent crashes after (2)?

How it works

Thanks to the availability of single-document atomic operations, it is possible to craft a sequence of changes that manipulate documents in a way that supports multi-document transactional behavior. This works as long as the clients agree to use the same conventions.

This isn’t exactly news, though, and there’s even documentation describing how one can explore these ideas. The challenge is in crafting a generic mechanism that not only does the basics but goes beyond by supporting inserts and removes, being workload agnostic, behaving correctly on crashes (!), and yet remaining pleasant to use. That’s the territory being explored.

The implemented semantics offers an isolation level that allows non-repeatable reads to occur (a partially committed transaction is visible), but the changes are guaranteed to only be visible in the order specified in the transaction, and once any change is done the transaction is guaranteed to be applied completely without intervening changes in the affected documents (no dirty reads). Among other things, this means one can use any existing mechanism at read time.

When writing documents that are affected by the transaction mechanism, one must necessarily use the API of the new mgo/txn package, which ended up surprisingly thin and straightforward. In other words for emphasis: if you modify fields that are affected by the transaction mechanism both with and without mgo/txn, it will misbehave arbitrarily. Fields that are read or written by mgo/txn must only be changed using mgo/txn.

Using the example described above, the bank account transfer might be done as:

runner := txn.NewRunner(tcollection)
ops := []txn.Op{{
        C:      "accounts", 
        Id:     "aram",
        Assert: M{"balance": M{"$gte": 100}},
        Update: M{"$inc": M{"balance": -100}},
}, {
        C:      "accounts",
        Id:     "ben",
        Assert: M{"valid": true},
        Update: M{"$inc": M{"balance": 100}},
}}
id := bson.NewObjectId() // Optional
err := runner.Run(ops, id, nil)

The assert and update values are the usual MongoDB query and update documents. The tcollection is a MongoDB collection that is used to atomically insert the transaction details into the database. As long as that document makes it into the database, the transaction is guaranteed to be eventually entirely applied or entirely aborted. The exact moment when this happens is defined by whether there are other transactions in progress and whether a communication problem occurs and when it occurs, as described below.
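
A hedged sketch of checking the outcome follows. The txn package reports an assertion failure (such as the balance check not holding) through a distinct error value, shown here as txn.ErrAborted; other errors indicate that the outcome is not yet known and the transaction may be resumed later:

// Sketch of handling the result of runner.Run from the example above.
switch {
case err == nil:
        // The transaction was inserted and will be fully applied.
case err == txn.ErrAborted:
        // An assertion failed: insufficient funds or invalid account.
default:
        // Communication problem: the transaction may or may not have been
        // inserted, and can be resumed later using its id.
}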

Concurrency and crash-proofness

Perhaps the most interesting piece of the puzzle when coming up with a nice transaction mechanism is defining what happens when an agent misbehaves, even more in a world where there are multiple distributed transaction runners. If there are locks, someone must unlock when a runner crashes, and must know the difference between running slowly and crashing. If there are leases, the lease boundary becomes an issue. In both cases, the speed of the overall system would become bounded by the speed of the slowest runner.

Instead of falling onto those issues, the implemented mechanism observes the transactions being attempted on the affected documents, orders them in a globally agreed way, and pushes all of their operations concurrently.

To illustrate the behavior, imagine again the described scenario of bank transfers:

In this diagram there are two transactions being attempted, T1 and T2. The first is a transfer from Aram to Ben, and the second is a transfer from Ben to Carl. If a runner starts executing T2 while T1 is still being applied by a different runner, the first runner will pick T1 up and complete it before starting to work on T2, which is its real goal. This works even if the original runner of T1 died while it was in progress. In reality, there’s little difference between the original runner of T1 and another runner that observes T1 on its way.

There’s a chance that T1’s runner died too soon, though, and it hasn’t had a chance to even start the transaction by tagging Ben’s account document as participating in it. In that case, T2 will be pushed forward by its own runner independently, since there’s nothing in its way. T1 isn’t lost, though, and it may be resumed at any point by calling the runner’s Resume or ResumeAll methods.
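
A minimal sketch of that recovery step, assuming a runner created as in the earlier example:

// Resume transactions that were left pending by runners that died before
// completing their work.
if err := runner.ResumeAll(); err != nil {
        log.Fatalf("cannot resume pending transactions: %v", err)
}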

The whole logic is implemented without introducing any new globally shared point of coordination. It works if documents are in different collections, different shards, and it works even if the transaction collection itself is sharded across multiple backends for scalability purposes.

The testing approach

While a lot of thinking was put onto the way the mechanism works, this is of course non-trivial and bug-inviting logic. In an attempt to nail down bugs early on, a testing environment was put in place to simulate multiple runners in a conflicting workload. To make matters more realistic, this simulation happens in a harsh scenario with faults and artificial slowdowns being randomly injected into the system. At the end, the result is evaluated to see if the changes performed respected the invariants established.

While hundreds of thousands of transactions have been successfully run in this fashion, the package should still be considered experimental at this point, and its API is still prone to change.

There’s one race

There’s one known race worth mentioning, and it was consciously left there for the moment as a tradeoff. The race shows itself when inserting a new document, at the point in time when the decision has been made that the insert was genuinely good. At this exact moment, if that runner is frozen for long enough to allow a different runner to insert the document and remove it again, and the original runner is then unfrozen without any errors or timeouts, it will naturally go on and insert the new document.

There are multiple solutions for this problem, but they present their own disadvantages. One solution would be to manipulate the document instead of removing it, but that would leave the collection with ghost content that has to be cared for, and that’s an unwanted side effect. A second solution would be to use the internal applyOps machinery that MongoDB uses in its sharding implementation, but that would mean that collections affected by transactions couldn’t be sharded, which is another unwanted side effect (please vote for SERVER-1439 so we can use it).

Have fun!

I hope the package serves you well, and if you would like to talk further about it, please join the mgo-users mailing list and drop a message.

Read more
niemeyer

Rob Pike just wrote an article/talk that is the best background on the origins of Go yet.

It surprises me how much his considerations match my world view pre-Go, and in a sense give me a fulfilling explanation about why I got hooked on the language. I still recall sitting in a hotel years ago with Jamu Kakar while we went through the upcoming C++0x standard (now C++11) and got perplexed about how someone could think that putting details such as rvalue references and move constructors into the language specification was something reasonable.

Rob also expressed again the initial surprise that developers using languages such as Python and Ruby were more often the ones willing to migrate towards Go, rather than ones using C++, with some reasonable explanations about why that is so. While I agree with his considerations, I see Python going through the same kind of issue that caused C++ to be what it is today.

Consider this excerpt from PEP 0380 as evidence:

If yielding of values is the only concern, this can be performed without much difficulty using a loop such as

for v in g:
    yield v

However, if the subgenerator is to interact properly with the caller in the case of calls to send(), throw() and close(), things become considerably more difficult. As will be seen later, the necessary code is very complicated, and it is tricky to handle all the corner cases correctly.

A new syntax will be proposed to address this issue. In the simplest use cases, it will be equivalent to the above for-loop, but it will also handle the full range of generator behaviour, and allow generator code to be refactored in a simple and straightforward way.

This description has the same DNA that creates the C++ problem Rob talks about. Don’t get me wrong, I’m sure yield from will make a lot of people very happy, and that’s exactly the tricky part. It’s easy and satisfying to please a selection of users, but often that leads to isolated solutions that create new cognitive load and new corner cases that in turn lead to new requirements.

The history of generators in Python is especially telling:

  • PEP 0234 [30-Jan-2001] – Iterators – Accepted
  • PEP 0255 [18-May-2001] – Simple Generators – Accepted
  • PEP 0288 [21-Mar-2002] – Generators Attributes and Exceptions – Withdrawn
  • PEP 0289 [30-Jan-2002] – Generator Expressions – Accepted
  • PEP 0325 [25-Aug-2003] – Resource-Release Support for Generators – Rejected
  • PEP 0342 [10-May-2005] – Coroutines via Enhanced Generators – Accepted
  • PEP 0380 [13-Feb-2009] – Syntax for Delegating to a Subgenerator – Accepted

You see the rabbit hole getting deeper? I’ll clarify it further by rephrasing the previous quote from PEP 0380:

If [feature from PEP 0255] is the only concern, this can be performed without much difficulty using a loop [...] However, if the subgenerator is to interact properly with [changes from PEP 0342] things become considerably more difficult. [So we need feature from PEP 0380.]

Yet, while the language grows by handling self-inflicted micro-problems, the real issue remains unsolved. All of these features are simplistic forms of concurrency and communication that don’t satisfy developers, and that causes community fragmentation.

This happened to C++, to Python, and to many other languages. Go seems slightly special in that regard in the sense that its core development team has an outstanding respect for simplicity, yet dares to solve the difficult problems at their root, while keeping these solutions orthogonal so that they support each other. Less is more, and is not always straightforward.

Read more
Gustavo Niemeyer

Back in 2009 I quickly talked about the obvious revolution in computing that was rolling in, in the form of the mobile phone as a computer, and mentioned as well the fact that touch-based interfaces were going to dominate the marketplace because of that.

Move forward a couple of years, and last week I got my first tablet, running Android (a Samsung Galaxy Tab 10.1, if you’re curious). I didn’t know exactly why I needed one, but being in the tech industry I always have that nice excuse for myself of buying things early on for learning about the experience of using them. Last night, I could clearly see this can be a real claim in some cases (in others it’s just an excuse for the wife).

After getting the tablet last week, I started by experimenting with the usual stuff any person would (email, browser, etc.), and then downloaded a few games to take on board a longish flight. Some of them were pretty good: a vertical-scrolling shooter, a puzzle solver, and so on. On all of them, though, it took just a few minutes before the novelty of holding the screen in my hands to interact with the game got old, and the interest went away with it.

Last night, though, I decided to try another game from the top list, named Cut the Rope, and this time I was immediately hooked on it. That was certainly one of the most enjoyable gaming experiences I’ve had in quite a while, and when going to bed I started to ponder what was different there.

The game is obviously well executed, with cute drawings and sounds, and also smooth, but I think there was something else as well. In retrospect, the other games felt a lot like ports of a desktop/laptop experience. The side scrolling game, for instance, was quite well suited for a joystick, and at least one other game had an actual joystick emulated on the screen, which is an enabler, but far from nice to be honest.

This one game, though, felt very well suited for a hands-based interaction: quickly drawing lines for cutting ropes, tapping on balloons to push air out, moving levers around, etc. In some more advanced levels, it was clear that my dexterity (or lack thereof) was playing a much more important role in accomplishing the tasks than the traditional button/joystick version of it. This felt like an entirely novel gaming experience that just hadn’t happened yet.

It’s funny and ironic that I had this experience within a week of Microsoft reportedly saying (again!) that a tablet is just another PC. It’s not, and if they tried it out with a minimum of attention they’d see why it’s so clearly not.

In that experience, the joystick felt familiar but at the same time quite awkward to use, while using my hands naturally in an environment where that was suitable felt very pleasing. We can generalize that a bit and note a common way of relating to innovation: we first try to reuse the knowledge we have when facing a new concept, but once we understand the concept better we can quite often come up with more effective and interesting ways to relate to it.

In the tablet vs. laptop/desktop thread, you probably won’t want to be typing long documents on a tablet, but would most likely prefer to shuffle items in an agenda with your fingers. Also, you likely wouldn’t want to do detailed CAD work with a fat finger on a screen, but would certainly be happy to review code or a document sitting in your backyard with the birds (no whales).

So, let’s please put that hammer away for a second while creating a most enjoyable touch-based experience.

Read more
niemeyer

Circular buffers are based on an algorithm well known by any developer who’s got past the “Hello world!” days. They offer a number of key characteristics with wide applicability such as constant and efficient memory use, efficient FIFO semantics, etc.

One feature which is not always desired, though, is the fact that circular buffers traditionally will either overwrite the oldest element or raise an overflow error, since they are generally implemented as a buffer of constant size. This is an unwanted property when one is attempting to consume items from the buffer and it is not an option to blindly drop items, for instance.

This post presents an efficient (and potentially novel) algorithm for implementing circular buffers which preserves most of the key aspects of the traditional version, while also supporting dynamic expansion when the buffer would otherwise have its oldest entry overwritten. It’s not clear if the described approach is novel or not (most of my novel ideas seem to have been written down 40 years ago), so I’ll publish it below and let you decide.

Traditional circular buffers

Before introducing the variant which can actually expand during use, let’s go through a quick review of traditional circular buffers, so that we can then reuse the nomenclature when extending the concept. All the snippets provided in this post are written in Python, as a better alternative to pseudo-code, but the concepts are naturally portable to any other language.

So, the most basic circular buffer needs the buffer itself, its total capacity, and a position where the next write should occur. The following snippet demonstrates the concept in practice:

buf = [None, None, None, None, None]
bufcap = len(buf)
pushi = 0   

for elem in range(7):
    buf[pushi] = elem
    pushi = (pushi + 1) % bufcap
    
print buf # => [5, 6, 2, 3, 4]

In the example above, the first two elements of the series (0 and 1) were overwritten once the pointer wrapped around. That’s the specific feature of circular buffers which the proposal in this post will offer an alternative for.

The snippet below provides a full implementation of the traditional approach, this time including both the pushing and popping logic, and raising an error when an overflow or underflow would occur. Please note that these snippets are not necessarily idiomatic Python. The intention is to highlight the algorithm itself.

class CircBuf(object):

    def __init__(self):
        self.buf = [None, None, None, None, None]
        self.buflen = self.pushi = self.popi = 0
        self.bufcap = len(self.buf)

    def push(self, x):
        assert self.buflen == 0 or self.pushi != self.popi, \
               "Buffer overflow!"
        self.buf[self.pushi] = x
        self.pushi = (self.pushi + 1) % self.bufcap
        self.buflen += 1

    def pop(self):
        assert self.buflen != 0, "Buffer underflow!"
        x = self.buf[self.popi]
        self.buf[self.popi] = None
        self.buflen -= 1
        self.popi = (self.popi + 1) % self.bufcap
        return x

With the basics covered, let’s look at how to extend this algorithm to support dynamic expansion in case of overflows.

Dynamically expanding a circular buffer

The approach consists in imagining that the same buffer can contain both a circular buffer area (referred to as the ring area from here on), and an overflow area, and that it is possible to transform a mixed buffer back into a pure circular buffer again. To clarify what this means, some examples are presented below. The full algorithm will be presented afterwards.

First, imagine that we have an empty buffer with a capacity of 5 elements as per the snippet above, and then the following operations take place:

for i in range(5):
    circbuf.push(i)

circbuf.pop() # => 0
circbuf.pop() # => 1

circbuf.push(5)
circbuf.push(6)

print circbuf.buf # => [5, 6, 2, 3, 4]

At this point we have a full buffer, and with the original implementation an additional push would raise an assertion error. To implement expansion, the algorithm will be changed so that those items will be appended at the end of the buffer. Following the example, pushing two additional elements would behave the following way:

circbuf.push(7)
circbuf.push(8)

print circbuf.buf # => [5, 6, 2, 3, 4, 7, 8]

In that example, elements 7 and 8 are part of the overflow area, and the ring area retains the same capacity and length as the original buffer. Let’s perform a few additional operations to see how it would behave when items are popped and pushed while the buffer is split:

circbuf.pop() # => 2
circbuf.pop() # => 3
circbuf.push(9)

print circbuf.buf # => [5, 6, None, None, 4, 7, 8, 9]

In this case, even though there are two free slots available in the ring area, the last item pushed was still appended at the overflow area. That’s necessary to preserve the FIFO semantics of the circular buffer, and means that the buffer may expand more than strictly necessary given the space available. In most cases this should be a reasonable trade off, and should stop happening once the circular buffer size stabilizes to reflect the production vs. consumption pressure (if you have a producer which constantly operates faster than a consumer, though, please look at the literature for plenty of advice on the problem).

The remaining interesting step in that sequence of events is the moment when the ring area capacity is expanded to cover the full allocated buffer again, with the previous overflow area being integrated into the ring area. This will happen when the content of the previous partial ring area is fully consumed, as shown below:

circbuf.pop() # => 4
circbuf.pop() # => 5
circbuf.pop() # => 6
circbuf.push(10)

print circbuf.buf # => [10, None, None, None, None, 7, 8, 9]

At this point, the whole buffer contains just a ring area and the overflow area is again empty, which means it becomes a traditional circular buffer.

Sample algorithm

With some simple modifications to the traditional implementation presented previously, the above semantics may be easily supported. Note how the additional properties do not introduce significant overhead. Of course, this version will incur additional memory allocation to support the buffer expansion, but that’s inherent to the problem being solved.

class ExpandingCircBuf(object):

    def __init__(self):
        self.buf = [None, None, None, None, None]
        self.buflen = self.ringlen = self.pushi = self.popi = 0
        self.bufcap = self.ringcap = len(self.buf)

    def push(self, x):
        if self.ringlen == self.ringcap or \
           self.ringcap != self.bufcap:
            self.buf.append(x)
            self.buflen += 1
            self.bufcap += 1
            if self.pushi == 0: # Optimization.
                self.ringlen = self.buflen
                self.ringcap = self.bufcap
        else:
            self.buf[self.pushi] = x
            self.pushi = (self.pushi + 1) % self.ringcap
            self.buflen += 1
            self.ringlen += 1

    def pop(self):
        assert self.buflen != 0, "Buffer underflow!"
        x = self.buf[self.popi]
        self.buf[self.popi] = None
        self.buflen -= 1
        self.ringlen -= 1
        if self.ringlen == 0 and self.buflen != 0:
            self.popi = self.ringcap
            self.pushi = 0
            self.ringlen = self.buflen
            self.ringcap = self.bufcap
        else:
            self.popi = (self.popi + 1) % self.ringcap
        return x

Note that the above algorithm will allocate each element in the list individually, but in practice it may be better to allocate additional space for the overflow area in advance, to avoid potentially frequent reallocation. In a situation where the rate of consumption is about the same as the rate of production, for instance, there are advantages to doubling the amount of allocated memory per expansion. Given the way in which the algorithm works, the previous ring area will be exhausted before the mixed buffer becomes circular again, so with a constant rate of production and an equivalent consumption it will effectively have its size doubled on each expansion.

UPDATE: Below is a version of the same algorithm which not only allows allocating more than one additional slot at a time during expansion, but also immediately incorporates the new space into the overflow area so that the allocated space is used optimally.

class ExpandingCircBuf2(object):

    def __init__(self):
        self.buf = []
        self.buflen = self.ringlen = self.pushi = self.popi = 0
        self.bufcap = self.ringcap = len(self.buf)

    def push(self, x):
        if self.ringcap != self.bufcap:
            expandbuf = (self.pushi == 0)
            expandring = False
        elif self.ringcap == self.ringlen:
            expandbuf = True
            expandring = (self.pushi == 0)
        else:
            expandbuf = False
            expandring = False

        if expandbuf:
            self.pushi = self.bufcap
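            # A fixed three-slot chunk is used here for illustration; a real
            # use might grow by a larger amount, or double the capacity.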
            expansion = [None, None, None]
            self.buf.extend(expansion)
            self.bufcap += len(expansion)
            if expandring:
                self.ringcap = self.bufcap

        self.buf[self.pushi] = x
        self.buflen += 1
        if self.pushi < self.ringcap:
            self.ringlen += 1
        self.pushi = (self.pushi + 1) % self.bufcap

    def pop(self):
        assert self.buflen != 0, "Buffer underflow!"
        x = self.buf[self.popi]
        self.buf[self.popi] = None
        self.buflen -= 1
        self.ringlen -= 1
        if self.ringlen == 0 and self.buflen != 0:
            self.popi = self.ringcap
            self.ringlen = self.buflen
            self.ringcap = self.bufcap
        else:
            self.popi = (self.popi + 1) % self.ringcap
        return x

Conclusion

This blog post presented an algorithm which supports the expansion of circular buffers while preserving most of their key characteristics. When not faced with an overflowing buffer, the algorithm should offer performance characteristics very similar to those of a normal circular buffer, with only a few additional instructions and constant extra space. When faced with an overflow, the algorithm maintains the FIFO property, uses contiguously allocated memory to hold both the original circular buffer and the additional elements, and then reuses the full area as part of a new circular buffer, in an attempt to find the proper size for the given use case.

Read more
Gustavo Niemeyer

ZooKeeper is a clever generic coordination server for distributed systems, and is one of the core pieces of software that facilitate the development of Ensemble (a project for automagic IaaS deployments which we push at Canonical), so it was a natural choice to experiment with.

Gozk is a complete binding for ZooKeeper which takes advantage of Go’s native features to facilitate interaction with a ZooKeeper server. To avoid reimplementing the well tested bits of the protocol in an unstable way, Gozk is built on top of the standard C ZooKeeper library.

The experience of integrating ZooKeeper with Go was certainly valuable in itself, and worked as a nice way to learn the details of integrating the Go language with a C library. If you’re interested in learning a bit about Go, ZooKeeper, or other details related to the creation of bindings and asynchronous programming, please fasten your seatbelt now.

Basics of C wrapping in Go

Creating the binding was in itself a pretty interesting experiment. I have worked on the creation of quite a few bindings and language bridges before, and must say I was pleasantly surprised with the experience of creating the Go binding. With Cgo, the name given to the “foreign function interface” mechanism for C integration, one basically declares a special import statement which causes a pre-processor to look at the comment preceding it. Something similar to this:

// #include <zookeeper.h>
import "C"

The comment doesn’t have to be restricted to a single line, or even to #include statements. The C code contained in the comment will be transparently inserted into a helper C file which is compiled and linked with the final object file, and the given snippet will also be parsed and inclusions processed. On the Go side, that “C” import is simulated as if it were a normal Go package so that the C functions, types, and values are all directly accessible.

As an example, a C function with this prototype:

int zoo_wexists(zhandle_t *zh, const char *path, watcher_fn watcher,
                void *context, struct Stat *stat);

may be used in Go as:

cstat := C.struct_Stat{}
rc, cerr := C.zoo_wexists(zk.handle, cpath, nil, nil, &cstat)

When the C function is used in a context where two result values are requested, as done above, Cgo will save the well known errno variable after the function has finished executing and will return it wrapped into an os.Errno value.

Also, note how the C struct is defined in a way that can be passed straight to the C function. Interestingly, the allocation of the memory backing the structure is performed and tracked by the Go runtime, and it will be garbage collected appropriately once no more references to it exist within the Go runtime. This fact has to be kept in mind, since the application will crash if a value allocated normally within Go is handed to a foreign C function and kept around after all the Go references to it are gone. The alternative in these cases is to call the usual C functions to get hold of memory for the involved values. That memory won’t be touched by the garbage collector and, of course, must be explicitly freed when no longer necessary. Here is a simple example showing explicit allocation:

cbuffer := (*C.char)(C.malloc(bufferSize))
defer C.free(unsafe.Pointer(cbuffer))

Note the use of the defer statement above. Even when dealing with foreign functionality, it comes in handy. The above call will ensure that the buffer is deallocated right before the current function returns, so it’s a nice way to ensure no leaks happen, even if in the future the function gets a new exit point which doesn’t take the allocated resources into account.
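
To make that point concrete, here is a small self-contained sketch (not part of the binding itself; the function name and logic are made up for illustration) where the single deferred free covers every return path, including one added later:

package main

// #include <stdlib.h>
import "C"

import (
    "fmt"
    "unsafe"
)

// fillBuffer allocates C memory and relies on the deferred free to
// release it on every exit path, even ones introduced later on.
func fillBuffer(size int, fail bool) error {
    cbuffer := (*C.char)(C.malloc(C.size_t(size)))
    defer C.free(unsafe.Pointer(cbuffer))

    if fail {
        // A later-added exit point: the deferred free still runs.
        return fmt.Errorf("refusing to fill the buffer")
    }
    // ... hand cbuffer to C functions here ...
    return nil
}

func main() {
    fmt.Println(fillBuffer(64, false))
    fmt.Println(fillBuffer(64, true))
}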

In terms of typing, Go is stricter than C, and Cgo-based logic will also ensure that the types returned by and passed into the foreign C functions are correctly typed, just as it does for native types. Note above, for instance, how the call to the free() function has to explicitly convert the value into an unsafe.Pointer, even though in C no casting would be necessary to pass a pointer into a void * parameter.

The unsafe.Pointer is in fact a very special type within Go. Using it, one can convert any pointer type into any other pointer type in an unsafe way (thus the package name), and also back and forth into a uintptr value with the address of the memory referenced by the pointer. For every other type conversion, Go will ensure at compilation time that doing the conversion at runtime is a safe operation.
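
As a quick standalone illustration (not taken from the binding), the snippet below reinterprets the memory of a float64 as a uint64 by converting through unsafe.Pointer, and also converts the pointer into a plain uintptr address:

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    f := 1.0
    // Convert *float64 => unsafe.Pointer => *uint64, reinterpreting the
    // same memory as a different type. The compiler won't check whether
    // this makes sense, which is why the package is named unsafe.
    bits := *(*uint64)(unsafe.Pointer(&f))
    fmt.Printf("%#x\n", bits) // => 0x3ff0000000000000

    // The same pointer may also be converted into a plain integer address.
    addr := uintptr(unsafe.Pointer(&f))
    fmt.Println(addr != 0) // => true
}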

With all of these resources, including the ability to use common Go syntax and functionality even when dealing with foreign types, values, and function calls, the integration task turns out to be quite a pleasant experience. That said, some of the things may still require some good thinking to get right, as we’ll see shortly.

Watch callbacks and channels

One of the most interesting (and slightly tricky) aspects of mapping the ZooKeeper concepts into Go was the “watch” functionality. ZooKeeper allows one to attach a “watch” to a node so that the server will report back when changes happen to the given node. In the C library, this functionality is exposed via a callback function which is executed once the monitored aspect of the node is modified.

It would certainly be possible to offer this functionality in Go using a similar mechanism, but Go channels provide a number of advantages for that kind of asynchronous notification: waiting for multiple events via the select statement, synchronous blocking until the event happens, testing if the event is already available, etc.
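
To give an idea of what that buys us, here is a small self-contained sketch of those usage patterns with plain Go channels; the Event type and the channel names below are made up for illustration and are not the actual gozk API:

package main

import "fmt"

// Event stands in for whatever event type the binding delivers.
type Event struct{ Path string }

func main() {
    nodeWatch := make(chan Event, 1)
    childrenWatch := make(chan Event, 1)
    nodeWatch <- Event{Path: "/some/path"}

    // Wait for whichever watch fires first.
    select {
    case e := <-nodeWatch:
        fmt.Println("node changed:", e.Path)
    case e := <-childrenWatch:
        fmt.Println("children changed:", e.Path)
    }

    // Check whether an event is already available, without blocking.
    select {
    case e := <-childrenWatch:
        fmt.Println("pending event:", e.Path)
    default:
        fmt.Println("no event yet")
    }
}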

The tricky bit, though, isn’t the use of channels. That part is quite simple. The tricky detail is that the C callback function execution happens in a C thread started by the ZooKeeper library, and happens asynchronously, while the Go application is doing its business elsewhere. Right now, there’s no straightforward way to transfer the execution of this asynchronous C function back into the Go land. The solution for this problem was found with some help from the folks at the golang-nuts mailing list, and luckily it’s not that hard to support or understand. That said, this is a good opportunity to get some coffee or your preferred focus-enhancing drink.

The solution works like this: when the ZooKeeper C library gets a watch notification, it executes a C callback function which lives inside a Gozk helper file. Rather than transferring control to Go right away, this C function simply appends data about the event onto a queue and signals a pthread condition variable to notify that an event is available. Then, on the Go side, once the first ZooKeeper connection is initialized, a new goroutine is fired and loops waiting for events to be available. The interesting detail about this loop is that it blocks within a foreign C function waiting for an event to be available, through the signaling of the shared pthread condition variable. On the Go side, this is how the call looks, just to give a more practical feeling:

// This will block until there's a watch available.
data := C.wait_for_watch()

Then, on the C side, here is the function definition:

watch_data *wait_for_watch() {
    watch_data *data = NULL;
    pthread_mutex_lock(&watch_mutex);
    while (first_watch == NULL)
        pthread_cond_wait(&watch_available, &watch_mutex);
    data = first_watch;
    first_watch = first_watch->next;
    pthread_mutex_unlock(&watch_mutex);
    return data;
}

As you can see, not really a big deal. When that kind of blocking occurs inside a foreign C function, the Go runtime will correctly continue the execution of other goroutines within other operating system threads.

The result of this mechanism is a nice-to-use interface based on channels, which may be explored in different ways depending on the application’s needs. Here is a simple example blocking on the event synchronously, for instance:

stat, watch, err := zk.ExistsW("/some/path")
if stat == nil && err == nil {
    event := <-watch
    // Use event ...
}

Concluding

Those were some of the interesting aspects of implementing the ZooKeeper binding. I would like to speak about some additional details, but this post is rather long already, so I'll keep that for a future opportunity. The code is available under the LGPL, so if you're curious about some other aspect, or would like to use ZooKeeper with Go, please move on and check it out!

Read more
Gustavo Niemeyer

A while ago Martin Pool made a very interesting post on the design of interfaces, inspired by a talk given by Rusty Russell in 2003.

Besides the interesting scale of interface quality explained there, this is a very insightful comment, often overlooked:


Once the code gets too big for one person, it’s all about damage control. Interfaces make damage control possible… except when the interfaces themselves are the problem.

Designing a system as a group of people requires splitting tasks up among team members, and/or the community, and perhaps even separate teams. Interfaces are the touch points of that splitting, and they are what represents the functionality offered by each module/library/file/command/service/whatever. Too often, people spend a long time working on the implementation details, thinking really deeply about how to obtain the desired behavior, and forget to clearly define what the interface to that behavior is.

Having good interfaces is a key aspect of software development, and getting them right offers a number of important benefits:

Good encapsulation

Having good encapsulation is pretty much a synonym of having good interfaces. Too often, though, people focus on the encapsulation of the small pieces (the functions, the classes, etc), and forget about the encapsulation of the larger blocks (the libraries, modules, packages, commands, etc).

Also, in my experience trying to encourage good architectures, I have found that stating “We need good encapsulation!” gives developers no tangible line of action. It reminds me of a parent telling the child “You should be responsible!”. Sure, encapsulation and responsibility both sound great, but.. what does that really mean?

When inviting developers to think about the interfaces of the system parts they are responsible for, encapsulation becomes a natural outcome. It’s clear that there must be a line drawn between that part of the system and the rest, and the shape of this line must be considered while (or even better, before) the behavior is implemented.

Given well designed interfaces, the additional requirement of only using other parts of the system through their public interfaces seals the achievement of good encapsulation. Ideally, this barrier would be a natural property of the language used to develop the system (see the interface quality scale in Martin’s post). In other cases, this must be achieved through conventions, agreements, and good documentation.

Improved scope and communication

By inviting developers to think about the interfaces of the parts they are responsible for, one is basically encouraging the consideration of the interaction between those pieces and the rest of the system. This process gives an interesting perspective, both in terms of the external expectations (what do I need to offer other people?), as well as the internal goals (what do I need to implement for satisfying what other people need?).

Besides helping people figure out the scope and goal of the piece being developed, this will also give a nice structure to some of the communication which must inevitably happen to correctly integrate the separate parts of the system being developed.

Improved testing and experimentation

If an interface is well designed and defined, and encapsulates well part of the functionality of the system, it significantly improves the testing and experimentation related to that part of the system. Again, this has an effect both internally and externally to the interface.

Internally in the sense that there’s a clear boundary between the part in development and the rest of the system, and thus it should be easier to verify that the bits which compose it are working according to plan without dragging the whole system together, and also to verify that the interface itself is behaving as intended (and hopefully as documented).

Externally in the sense that, given that there’s agreement regarding what the public interface to the part being considered is, one may easily provide a test double (a fake, or dummy, or mock) to simulate that part of the system, as sketched after the list below. This is well known to be useful in a number of ways:

  • Dependent work may be run in parallel by different people
  • Real implementation backing the given interface may be postponed, until the idea is proven useful, and the interface feels suitable
  • External systems which would be hard to run locally may be simulated so that tests run fast and cheap, even without network connections
  • Faults may be injected in the system via the test doubles to verify behavior in hostile conditions

and so on.
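
As a minimal sketch of that idea in Go, with entirely hypothetical names, the code below defines a small public interface and a fake implementation which may stand in for the real one while exercising dependent logic:

package main

import "fmt"

// Notifier is the agreed public interface to the notification subsystem.
type Notifier interface {
    Notify(user, message string) error
}

// fakeNotifier is a test double which records calls instead of talking
// to a real external service.
type fakeNotifier struct {
    sent []string
}

func (f *fakeNotifier) Notify(user, message string) error {
    f.sent = append(f.sent, user+": "+message)
    return nil
}

// remindUsers depends only on the interface, so it works unchanged with
// either the real implementation or the fake.
func remindUsers(n Notifier, users []string) error {
    for _, user := range users {
        if err := n.Notify(user, "meeting in 10 minutes"); err != nil {
            return err
        }
    }
    return nil
}

func main() {
    fake := &fakeNotifier{}
    if err := remindUsers(fake, []string{"joe", "ana"}); err != nil {
        fmt.Println("error:", err)
    }
    fmt.Println(fake.sent)
}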

Quality isolation

This point is also my understanding of what Rusty refers to as damage control in his talk. This property is very useful when designing a system, but even then it’s often missed when discussing interfaces and encapsulation.

If there’s a well defined interface to a piece of functionality in the system, and that interface was carefully considered to cover the needs of the system, the implementation of that interface may not start as the most beautiful, or most scalable, or even most reliable piece of software. As any developer responsible for a successful startup will happily point out, a half-baked implementation is often good enough to get things going, prove the concept, and extend the project runway.

Good interfaces play an important role in this kind of situation. They are, in this sense, a way to be better prepared for success (or, for failure, depending on the perspective). If the interface implementation suddenly becomes an issue for whatever reason, the implementation itself may be replaced by something which better suits the current reality, while preserving the interaction with the rest of the system.

Of course, it’s still very hard to predict future system behavior when facing a completely different reality. Changing the scale requirements for the system a few orders of magnitude, for instance, may easily break existing assumptions, and interfaces designed around these assumptions. Still, even if good interfaces won’t be enough to avoid modifications in the architecture and integration points in many cases, they will certainly help framing the conversations which will take place when this happens and new interfaces must be developed.

Conclusion

When developing non-trivial software products, there’s no other way but to split the problem solving into several layers and components. Looking at the points where these layers and components touch each other is a very useful and natural way to organize conversations and structure the work which must take place to push the product forward.

It’s quite revealing to look at the points above, and note that it’s not simply the existence of interfaces themselves which presents the advantages described, but the process which they encourage around them. Software architecture is essentially about people.

Read more
Gustavo Niemeyer

Recovering a bootable EBS image

Scott Moser has just announced this week that the new Ubuntu images which boot out of an EBS-based root filesystem in EC2, and thus will persist across reboots, are available for testing.

As usual with something that just left the oven and is explicitly labeled for testing purposes, there was a minor bug in the first iteration of images which was even mentioned in the announcement itself. The bug, if not worked around as specified in the announcement, will prevent the image from rebooting.

Having a bootable EBS image which can’t reboot is quite an interesting (and ironic) problem. You have an image which persists, but suddenly you have no way to see what is inside it anymore because you can’t boot it. Naturally, even if the said bug didn’t exist in the first place, it’s fairly easy to get into such a situation accidentally if you’re fiddling with the image configuration.

So, in this post we’ll see how to recover from a situation where a bootable EBS image can’t boot.

Getting started

To start this up, we’ll boot one of the EBS images which Scott mentioned in his announcement: ami-8bec03e2. As we see in the output of ec2-describe-images, this is an EBS-based image for i386:

% ec2-describe-images ami-8bec03e2
IMAGE ami-8bec03e2 099720109477/ebs/ubuntu-images-testing/ubuntu-lucid-daily-i386-server-20100305 099720109477 available public i386 machine aki-3fdb3756 ebs
BLOCKDEVICEMAPPING /dev/sda1 snap-f1efd098 15

Let’s run this image. Remember to replace the value passed in the -k command line option with your own key pair name.

% ec2-run-instances -k gsg-keypair ami-8bec03e2
RESERVATION r-9e4615f6 626886203892 default
INSTANCE i-e3e33a88 ami-8bec03e2 pending gsg-keypair 0 m1.small 2010-03-09T20:04:12+0000 us-east-1c aki-3fdb3756 monitoring-disabled ebs

There we go. We got an instance allocated in the availability zone us-east-1c. It’s important to keep track of this information, since EBS volumes are zone-specific.

As part of the above command, we must have been allocated an EBS volume automatically, and it should be attached to the instance we just started. We can investigate it with the ec2-describe-volumes command:

% ec2-describe-volumes
VOLUME vol-edca1684 15 snap-f1efd098 us-east-1c in-use 2010-03-09T20:04:20+0000
ATTACHMENT vol-edca1684 i-e3e33a88 /dev/sda1 attached 2010-03-09T20:04:24+0000

Now, we’ll get into the running instance and do some arbitrary modifications, just as a way to demonstrate that the data we don’t want to lose actually survives the recovering operation. Note that the domain name is obtained with the ec2-describe-instances command.

% ssh -i ~/.ssh/id_dsa_gsg-keypair ubuntu@ec2-184-73-51-147.compute-1.amazonaws.com
(…)

ubuntu@domU-12-31-39-0E-A0-03:~$ echo "Important data" > important-data
ubuntu@domU-12-31-39-0E-A0-03:~$ ls -l important-data
-rw-r--r-- 1 ubuntu ubuntu 15 Mar  9 20:15 important-data

ubuntu@domU-12-31-39-0E-A0-03:~$ sudo reboot
Broadcast message from ubuntu@domU-12-31-39-0E-A0-03
(/dev/pts/0) at 20:18 …
The system is going down for reboot NOW!

Note that we didn’t actually fix the problem reported by Scott, so our machine won’t really reboot. If we wait a while, we can even see that the problem is exactly what was reported in the announcement (note it really takes a bit for the output to be synced up):

% ec2-get-console-output i-e3e33a88 | tail -4
mount: special device ephemeral0 does not exist
mountall: mount /mnt [294] terminated with status 32
mountall: Filesystem could not be mounted: /mnt

Alright, now what? Machine is dead.. and can’t reboot. How do we get to our important data?

Fixing the problem

The first thing we do is stop the instance. Do not terminate it, or you’ll lose the EBS volume! After stopping it, we’ll detach the EBS volume that was being used as the root filesystem, so that we can attach it somewhere else.

% ec2-stop-instances i-e3e33a88
INSTANCE i-e3e33a88 running stopping

% ec2-detach-volume vol-edca1684
ATTACHMENT vol-edca1684 i-e3e33a88 /dev/sda1 detaching 2010-03-09T20:04:22+0000

Now, we need to attach this volume to an instance which actually boots, so that we can fix it. For this experiment, we’ll pick one of the daily Lucid images, but we could really use any other working image. Just remember that the instance must be running in the same availability zone as our previous one, since the EBS volume won’t be accessible otherwise.

% ec2-run-instances -k gsg-keypair -z us-east-1c ami-b5f619dc
RESERVATION r-967427fe 626886203892 default
INSTANCE i-fd08d196 ami-b5f619dc pending gsg-keypair 0 m1.small 2010-03-09T21:10:11+0000 us-east-1c aki-3fdb3756 monitoring-disabled instance-store

% ec2-attach-volume vol-edca1684 -i i-fd08d196 -d /dev/sdh1
ATTACHMENT vol-edca1684 i-fd08d196 /dev/sdh1 attaching 2010-03-09T21:10:51+0000

With the instance running and the EBS root device attached under an alternative device name, we can then log in to fix the original problem which prevented the image from booting correctly. In our case, we’ll simply do what Scott suggested in the announcement.

% ssh -i ~/.ssh/id_dsa_gsg-keypair ubuntu@ec2-204-236-194-196.compute-1.amazonaws.com
(…)
$ mkdir ebs-root
$ sudo mount /dev/sdh1 ebs-root
$ sudo sed -i 's/^ephemeral0/#ephemeral0/' ebs-root/etc/fstab
$ sudo umount ebs-root
$ logout
Connection to ec2-204-236-194-196.compute-1.amazonaws.com closed.

Done! Our EBS volume is now correct, and it should boot alright. We’ll detach the volume from the temporary instance we created, and will reattach it back to the old bootable EBS instance which is stopped. Note that we won’t yet terminate the temporary instance, because we may need it in case something else is still wrong, and we are already paying to use it for the hour anyway. We just have to remind ourselves to terminate it once we’re fully done.

% ec2-detach-volume vol-edca1684
ATTACHMENT vol-edca1684 i-fd08d196 /dev/sdh1 detaching 2010-03-09T21:10:51+0000

% ec2-attach-volume vol-edca1684 -i i-e3e33a88 -d /dev/sda1
ATTACHMENT vol-edca1684 i-e3e33a88 /dev/sda1 attaching 2010-03-09T21:24:55+0000

% ec2-describe-volumes vol-edca1684
VOLUME vol-edca1684 15 snap-f1efd098 us-east-1c in-use 2010-03-09T20:04:20+0000
ATTACHMENT vol-edca1684 i-e3e33a88 /dev/sda1 attached 2010-03-09T21:24:55+0000

Okay! It should all be good now. It’s time to restart our instance, and see if it is working. Note that since you stopped and started the instance, the public domain name most probably has changed, and thus we need to find it out again with ec2-describe-instances once the instance is running.

% ec2-start-instances i-e3e33a88
INSTANCE i-e3e33a88 stopped pending

% ec2-describe-instances i-e3e33a88
RESERVATION r-9e4615f6 626886203892 default
INSTANCE i-e3e33a88 ami-8bec03e2 ec2-184-73-72-214.compute-1.amazonaws.com domU-12-31-39-03-B8-21.compute-1.internal running gsg-keypair 0 m1.small 2010-03-09T21:28:43+0000 us-east-1c aki-3fdb3756 monitoring-disabled 184.73.72.214 10.249.187.207 ebs
BLOCKDEVICE /dev/sda1 vol-edca1684 2010-03-09T21:24:55.000Z

% ssh -i ~/.ssh/id_dsa_gsg-keypair ubuntu@ec2-184-73-72-214.compute-1.amazonaws.com
(…)
$ cat important-data
Important data

$ logout
Connection to ec2-184-73-72-214.compute-1.amazonaws.com closed.

It worked, and our important data is still there!

Don’t forget to kill the temporary instance you’ve used to fix it after you’re comfortable with the result:

% ec2-terminate-instances i-fd08d196
INSTANCE i-fd08d196 running shutting-down

Conclusion

Concluding, in this post we have seen how to fix a bootable EBS machine which can’t actually boot. The technique consists of detaching the volume from the stopped instance, attaching it to a temporary instance, fixing the image, and then reattaching it back to the original instance. This back and forth of EBS volumes is quite useful in many circumstances, so keep it in your tool belt.

Read more