Canonical Voices

niemeyer

This week I found some time to work on another small spin-off from the juju project at Canonical, and I’m happy to make it openly available today: the xmlpath package, which implements an efficient and strict subset of the XPath specification for the Go language.

This new package will be used in an upcoming (and long overdue) revision of the goamz package API, which is currently limited by the fact that once the XML result returned by Amazon is unmarshalled into a static structure, any data the package wasn’t prepared to deal with becomes hard for clients to access. This problem is being solved by parsing the tree into an intermediary form to which XPath expressions can then be conveniently and efficiently applied.

Path expressions currently supported by the package are in the following format, with all components being optional:

/axis-name::node-test[predicate]/axis-name::node-test[predicate]

Compatibility with the XPath specification goes to the following extent:

  • All axes are supported (“child”, “following-sibling”, etc)
  • All abbreviated forms are supported (“.”, “//”, etc)
  • All node types except for namespace are supported
  • Predicates are restricted to [N], [path], and [path=literal] forms
  • Only a single predicate is supported per path step
  • Richer expressions and namespaces are not supported
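
Attempting to compile an expression outside that subset fails upfront rather than misbehaving at evaluation time. As a quick sketch (assuming the comparison predicate below is indeed rejected as unsupported):

if _, err := xmlpath.Compile("//book[price > 35]/title"); err != nil {
        // The comparison predicate falls outside the supported forms.
        fmt.Println("unsupported expression:", err)
}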

For example, consider this simple document:

<library>
  <!-- Great book. -->
  <book id="b0836217462">
    <isbn>0836217462</isbn>
    <title>Being a Dog Is a Full-Time Job</title>
    <author id="CMS">
      <name>Charles M Schulz</name>
      <born>1922-11-26</born>
    </author>
    <character id="PP">
      <name>Peppermint Patty</name>
      <born>1966-08-22</born>
    </character>
    <character id="Snoopy">
      <name>Snoopy</name>
      <born>1950-10-04</born>
    </character>
  </book>
</library>

The following expressions can be applied to it, with the indicated result as first match:

/library/book/isbn “0836217462”
/library/*/isbn “0836217462”
/library/book/../book/./isbn “0836217462”
/library/book/character[2]/name “Snoopy”
/library/book/character[born='1950-10-04']/name “Snoopy”
/library/book//node()[@id='PP']/name “Peppermint Patty”
//*[author/@id='CMS']/name “Charles M Schulz”
/library/book/preceding::comment() “ Great book. ”

The API implemented allows compiled paths to be held and re-applied any number of times, concurrently or not. For example:

path := xmlpath.MustCompile("/library/book/isbn")
root, err := xmlpath.Parse(file)
if err != nil {
        log.Fatal(err)
}
if value, ok := path.String(root); ok {
        fmt.Println("Found:", value)
}

Result sets can also be optionally stepped over via an idiomatic iterator interface.
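
For instance, stepping over every character name in the document above reads along these lines (a minimal sketch; the iterator method names follow the package’s API, but treat the details as illustrative):

names := xmlpath.MustCompile("/library/book/character/name")
iter := names.Iter(root)
for iter.Next() {
        // Each match is a node; String returns its text content.
        fmt.Println("Character:", iter.Node().String())
}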

The performance of these operations is close to using the static unmarshaling currently implemented by Go’s encoding/xml package:

BenchmarkParse                 5000        613862 ns/op
BenchmarkSimplePathCompile     1000000     1983 ns/op
BenchmarkSimplePathString      1000000     1565 ns/op

As a reference, this is a similar encoding/xml operation, using a struct with a single nested field on the same document:

BenchmarkSimpleUnmarshal       5000        622519 ns/op

I’m hoping this will make our unavoidable XML interactions slightly less painful.

niemeyer

10gen, the company behind the MongoDB database, recently announced the availability of the MongoDB Backup Service. This is not a traditional backup service, though. Rather than simply sending scheduled snapshots of the data over to a remote system, the backup service has an agent sitting next to the database that monitors its operation log, and streams the individual operations over to the remote backup servers. This model enables the service to offer some non-conventional features, such as restoring the state of the database at any point in the last 24h, in addition to more traditional snapshots over longer periods.

There’s another interesting fact about how the system was developed: the backup agent is also the first software 10gen releases that is written in the Go language. Reportedly, the agent started as a Java project but, as the project matured, the team wanted to move to a language that compiled to native machine code to make it easier to install. After considering a few options, the team decided that Go was the best fit for its C-like syntax, strong standard library, first-class concurrency, and painless multi-platform support.

I’ve invited Daniel Gottlieb, the main 10gen engineer behind the service agent, to provide some high-level feedback about the use of Go and mgo, the MongoDB driver, and he kindly replied:

Programming the backup agent in Go and the mgo driver has been extremely satisfying. Between the lightweight syntax, the first-class concurrency and the well documented, idiomatic libraries such as mgo, Go has become my language of choice for writing small scripts up to large distributed applications.

The mgo driver is a real pleasure to use. The code is of high quality, the documentation is thorough, clear and detailed, and the API is a thoughtful, natural blend of idiomatic Go and Mongo.

Those are encouraging words, Daniel. It’s great to see not only 10gen making good use of the Go language for first-class services, but contributing to that community of developers by providing its support for the development of the Go driver in multiple ways. Good chance to say thanks!

niemeyer

Today ubuntufinder.com was updated with the latest image data for Ubuntu 13.04 and all the previous releases as well. Rather than simply hardcoding the values again, though, the JavaScript code was changed so that it imports the new JSON-based feeds that Canonical has been publishing for the official Ubuntu images available in EC2, thanks to recent work by Scott Moser. This means the site is always up-to-date, with no manual intervention.

Although the new feeds made that quite straightforward, there was a small detail to sort out: the Ubuntu Finder is visually dynamic, but it is actually a fully static web site served from S3, and the JSON feeds are served from the Canonical servers. This means the same-origin policy won’t allow that kind of cross-domain import to be easily done without further action.

The typical workaround for this kind of situation is to put a tiny proxy within the site server to load the JSON and dispatch it to the browser from the same origin. Unfortunately, this isn’t an option in this case because there’s no custom server backing the data. There’s a similar option that actually works, though: deploying that tiny proxy server in some other corner and forwarding the JSON payload as JSONP, or with cross-origin resource sharing enabled, so that browsers can bypass the same-origin restriction. That’s what was done.

Rather than writing yet another special-purpose tiny server for that one service, though, this time around a slightly more general tool emerged, and as an experiment it has been put live so anyone can use it. The server logic is pretty simple, and the idea is even simpler. Using the services from jsontest.com as an example, the following URL will serve a JSON document that can only be loaded from a page in a location allowed by the same-origin policy:

If one wanted to load that page from a different location, it might be transformed into a JSONP document by loading it from:

Alternatively, modern browsers that support cross-origin resource sharing can simply load plain JSON by omitting the jsonpeercb parameter. The jsonpeer server will emit the proper header to allow the browser to load it:

This service is backed by a tiny Go server that lives in App Engine so it’s fast, secure (hopefully), and maintenance-less.

Some further details about the service:

  • Results are JSON with cross-origin resource sharing by default
  • With a query parameter jsonpeercb=<callback name>, results are JSONP
  • The callback name must consist of characters in the set [_.a-zA-Z0-9]
  • Query parameters provided to jsonpeer are used when doing the upstream request
  • HTTP headers are discarded in both directions
  • Results are cached for 5 minutes on memcache before being re-fetched
  • Upstream results must be valid JSON
  • Upstream results must have Content-Type application/json or text/plain
  • Upstream results must be under 500kb
  • Both http and https work; just tweak the URL and the path accordingly
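
For the curious, the core of such a handler is genuinely small. The sketch below is hypothetical rather than the actual jsonpeer source, and it skips the callback-name validation, caching, and size and content-type limits listed above, but it shows the general shape of the idea:

package main

import (
        "fmt"
        "io"
        "log"
        "net/http"
        "strings"
)

// proxy fetches the upstream document named by the request path and returns
// it either as JSON with CORS enabled or, when jsonpeercb is present, as JSONP.
func proxy(w http.ResponseWriter, r *http.Request) {
        upstream := "http://" + strings.TrimPrefix(r.URL.Path, "/")
        resp, err := http.Get(upstream)
        if err != nil {
                http.Error(w, err.Error(), http.StatusBadGateway)
                return
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
                http.Error(w, err.Error(), http.StatusBadGateway)
                return
        }
        if cb := r.FormValue("jsonpeercb"); cb != "" {
                w.Header().Set("Content-Type", "application/javascript")
                fmt.Fprintf(w, "%s(%s);", cb, body)
                return
        }
        w.Header().Set("Content-Type", "application/json")
        w.Header().Set("Access-Control-Allow-Origin", "*")
        w.Write(body)
}

func main() {
        http.HandleFunc("/", proxy)
        log.Fatal(http.ListenAndServe(":8080", nil))
}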

Have fun if you need it, and please get in touch before abusing it.

UPDATE: The service and blog post were tweaked so that it defaults to returning plain JSON with CORS enabled, thanks to a suggestion by James Henstridge.

niemeyer

A few years ago, when I started pondering the possibility of porting juju to the Go language, one of the first pieces of the puzzle to be put in place was goyaml: a Go package to parse and serialize YAML documents. This was just an experiment and, as a sane route to get started, a Go layer that does all the language-specific handling was written on top of the libyaml C scanner, parser, and serializer library.

This was a good initial plan, but for a number of reasons the end goal was always to have a pure Go implementation. Having a C layer in a Go program slows down builds significantly due to the time taken to build the C code, makes compiling in other platforms and cross-compiling harder, has certain runtime penalties, and also forces the application to drop the memory safety guarantees offered by Go.

For these reasons, over the last couple of weeks I took a few hours a day to port the C backend to Go. The total time, in terms of full-time work days, would be equivalent to about a week’s worth of work.

The work started on the scanner and parser side of the library. This took most of the time, not only because it encompassed more than half of the code base, but also because the shared logic had to be ported too, and there was a need to understand which patterns were used in the old code and how they would be converted across in a reasonable way.

The whole scanner and parser plus header files, or around 5000 code lines of C, were ported over in a single shot without intermediate runs. To steer the process in a sane direction, gofmt was called often to reformat the converted code, and then the project was compiled every once in a while to make sure that the pieces were hanging together properly enough.

It’s worth highlighting how useful gofmt was in that process. The C code was converted in whatever way was most convenient to type, and then gofmt would quickly put it all together in a familiar form for analysis. Quite often, it would also point out trivial syntactic issues. A double win.

After the scanner and parser were finally converted completely, the pre-existing Go unmarshaling logic was shifted to the new pure implementation, and the reading side of the test suite could run as-is. Naturally, though, it didn’t work out of the box.

To quickly pick up the errors in the new implementation, the C logic and the Go port were put side-by-side to run the same tests, and tracing was introduced in strategic points of the scanner and parser. With that, it was easy to spot where they diverged and pinpoint the human errors.

It took about two hours to get the full suite to run successfully, with a handful of bugs uncovered. Out of curiosity, the issues were:

  • An improperly dropped parenthesis affected the precedence of an expression
  • A slice was being iterated with copying semantics where a reference was necessary
  • A pointer arithmetic conversion missed the base where there was base+offset addressing
  • An inner scoped variable improperly shadowed the outer scope

The same process of porting and test-fixing was then repeated on the serializing side of the project, in a much shorter time frame for the reasons cited.

The resulting code isn’t yet idiomatic Go. There are several signs in it that it was ported over from C: the naming conventions, the use of custom solutions for buffering and reader/writer abstractions, the excessive copying of data due to the need to track ownership so that the simple deallocating destructors don’t double-free, etc. It has also been deoptimized by changes such as the removal of macros (in many cases by expanding them inline by hand) and the direct expansion of large unions, which causes some core objects to grow significantly.

At this point, though, it’s easy to gradually move the code base towards the common idiom in small increments as time permits, cleaning up the artifacts that were left behind.

This code will be made public over the next few days via a new goyaml release. Meanwhile, some quick facts about the process and outcome follow.

Lines of code

According to cloc, there was a total of 7070 lines of C code in .c and .h files. Of those, 6727 were ported, and 342 belonged to 12 functions that were left unconverted as unnecessary right now. Those 6727 lines of C became 5039 lines of Go code in a mostly one-to-one dumb translation.

That difference comes mainly from garbage collection, lack of forward declarations, standard helpers such as append, range-based for loops, first class slice type with length and capacity, internal OOM handling, and so on.

Future work can easily increase the difference further by replacing some of the ported logic with more sensible options available in Go, such as the standard abstractions for readers and writers, the buffered writing support available in the standard library, etc.

Code clarity and safety

In the specific context of the work done (a scanner, parser, and serializer), the slice abstraction is responsible for noticeable clarity gains in the code when compared to the equivalent logic based on pointer arithmetic. It also gives a much more comforting guarantee that the written code is correct, thanks to bounds checking.
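
As a hedged illustration of the sort of translation involved (the names below are invented for this post rather than taken from the actual code):

// In C, advancing through the scanner's buffer means pointer arithmetic
// with manual length bookkeeping, trusting the programmer to keep both
// consistent:
//
//     if (parser->unread >= 3 && memcmp(parser->buffer, "---", 3) == 0) {
//             parser->buffer += 3;
//             parser->unread -= 3;
//     }
//
// In Go, the slice carries its own length, and an out-of-range access
// panics instead of silently corrupting memory:
if len(buf) >= 3 && bytes.Equal(buf[:3], []byte("---")) {
        buf = buf[3:]
}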

Performance

While curious, this shouldn’t be taken as a performance comparison between the two languages, as it is comparing a fine-tuned C implementation with something that is worse than a direct one-to-one port: not only has no time at all been spent on preventing waste, but the original logic was deoptimized by changes such as the removal of inlining macros and the expansion of large unions. There are many obvious changes to be made to improve performance.

With that out of the way, in a simple decoding benchmark the C-backed decoder runs in about 37% of the time taken by the out-of-the-box deoptimized Go port.

Output size

The previous goyaml.a Go package file was 1463kb. The new one is 1016kb. This difference includes glue code generated for the integration.

Considering only the .c and .h files involved in the port, the C object code generated with the standard flags used by the go build tool (-g -O2) sums up to 789kb. The equivalent Go code with the standard settings compiles to 664kb. The 12 functions not ported are also part of that difference, so the difference is pretty much negligible.

Build time

Building the 8 .c files alone takes 3.6 seconds with the standard flags used by the go build tool (-g -O2). After the port, building the entire Go project with the standard settings takes 0.3 seconds.

Mechanical changes

Many of the mechanical changes were done using regular expressions. Excluding the trivial ones, about a dozen regular expressions were used to swap variable and type names, drop parenthesis, place brackets in the right locations, convert function declarations, and so on.

niemeyer

Last week I was part of a rant with a couple of coworkers around the fact Go handles errors for expected scenarios by returning an error value instead of using exceptions or a similar mechanism. This is a rather controversial topic because people have grown used to having errors out of their way via exceptions, and Go brings back an improved version of a well known pattern previously adopted by a number of languages — including C — where errors are communicated via return values. This means that errors are in the programmer’s face and have to be dealt with all the time. In addition, the controversy extends towards the fact that, in languages with exceptions, every unadorned error comes with a full traceback of what happened and where, which in some cases is convenient.

All this convenience has a cost, though, which is rather simple to summarize:

Exceptions teach developers to not care about errors.

A sad corollary is that this is relevant even if you are a brilliant developer, as you’ll be affected by the world around you being lenient towards error handling. The problem will show up in the libraries that you import, in the applications sitting on your desktop, and in the servers that back your data as well.

Raymond Chen described the issue back in 2004 as:

Writing correct code in the exception-throwing model is in a sense harder than in an error-code model, since anything can fail, and you have to be ready for it. In an error-code model, it’s obvious when you have to check for errors: When you get an error code. In an exception model, you just have to know that errors can occur anywhere.

In other words, in an error-code model, it is obvious when somebody failed to handle an error: They didn’t check the error code. But in an exception-throwing model, it is not obvious from looking at the code whether somebody handled the error, since the error is not explicit.
(…)
When you’re writing code, do you think about what the consequences of an exception would be if it were raised by each line of code? You have to do this if you intend to write correct code.

That’s exactly right. Every line that may raise an exception holds a hidden “else” branch for the error scenario that is very easy to forget about. Even if it sounds like a pointless repetitive task to be entering that error handling code, the exercise of writing it down forces developers to keep the alternative scenario in mind, and pretty often it doesn’t end up empty.

It isn’t the first time I write about that, and given the controversy that surrounds these claims, I generally try to find one or two examples that bring the issue home. So here is the best example I could find today, within the pty module of Python’s 3.3 standard library:

def spawn(argv, master_read=_read, stdin_read=_read):
    """Create a spawned process."""
    if type(argv) == type(''):
        argv = (argv,)
    pid, master_fd = fork()
    if pid == CHILD:
        os.execlp(argv[0], *argv)
    (...)

Every time someone calls this logic with an improper executable in argv, there will be a new Python process lying around, uncollected and unknown to the application, because execlp will fail and the process just forked will be disregarded. It doesn’t matter whether a client of that module catches that exception or not. It’s too late. The local duty wasn’t done. Of course, the bug is trivial to fix by adding a try/except within the spawn function itself. The problem, though, is that this logic looked fine to everybody that ever looked at that function since 1994, when Guido van Rossum first committed it!
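
For contrast, here is a minimal sketch of how the same spawning step reads in Go’s error-return style, using the standard os/exec package (illustrative only, not a port of spawn). The failure path is impossible to overlook, because Start hands the error straight back to the caller:

cmd := exec.Command(argv[0], argv[1:]...)
if err := cmd.Start(); err != nil {
        // The hidden "else" branch is now explicit: decide right here
        // what a failed spawn means for the caller.
        return err
}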

Here is another interesting one:

$ make clean
Sorry, command-not-found has crashed! Please file a bug report at:

https://bugs.launchpad.net/command-not-found/+filebug

Please include the following information with the report:

command-not-found version: 0.3
Python version: 3.2.3 final 0
Distributor ID: Ubuntu
Description:    Ubuntu 13.04
Release:        13.04
Codename:       raring
Exception information:

unsupported locale setting
Traceback (most recent call last):
  File "/.../CommandNotFound/util.py", line 24, in crash_guard
    callback()
  File "/usr/lib/command-not-found", line 69, in main
    enable_i18n()
  File "/usr/lib/command-not-found", line 40, in enable_i18n
    locale.setlocale(locale.LC_ALL, '')
  File "/usr/lib/python3.2/locale.py", line 541, in setlocale
    return _setlocale(category, locale)
locale.Error: unsupported locale setting

That’s a pretty harsh crash for the lack of locale data in a system-level application that is, ironically, supposed to tell users what packages to install when commands are missing. Note that at the top of the stack there’s a reference to crash_guard. This function has the intent of catching all exceptions right at the edge of the call stack, and displaying a detailed system specification and traceback to aid in fixing the problem.

Such “parachute catching” is a fairly common pattern in exception-oriented programming and tends to give developers the false sense of having good error handling within the application. Rather than actually guarding the application, though, it’s just a useful way to crash. The proper thing to have done in the case above would be to print a warning, if at all, and then let the program run as usual. This would have been achieved by simply wrapping that one line as in:

try:
    locale.setlocale(locale.LC_ALL, '')
except Exception as e:
    print("Cannot change locale:", e)

Clearly, it was easy to handle that one. The problem, again, is that it was very natural to not do it in the first place. In fact, it’s more than natural: it actually feels good to not be looking at the error path. It’s less code, more linear, and what’s left is the most desired outcome.

The consequence, unfortunately, is that we’re immersing ourselves in a world of brittle software and pretty whales. Although more verbose, the error result style builds the correct mindset: does that function or method have a possible error outcome? How is it being handled? Is that system-interacting function not returning an error? What is being done with the problem that, of course, can happen?

A surprising number of crashes and plain misbehavior is a result of such unconscious negligence.

niemeyer

This weekend the right environment finally came together for sorting out a pet peeve that shows up every once in a while when coding: writing logic that interacts with other applications in the system via their stdin and stdout streams is often more involved than it should be, which seems pretty ironic when sitting in front of a Unix-like system.

Rather than going through the trouble of setting up pipes and hooking them up in a custom way, applications often end up just delegating the job to /bin/sh, which is not ideal for a number of reasons: argument formatting isn’t straightforward, injecting custom application-defined logic is hard (so even simple tasks that might easily be done in the language itself end up shelling out to further external applications), and so on.

In an attempt to address that, I’ve spent some time working on an experimental Go package that is being released today: pipe.
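
As a taste of what it enables, a pipeline that a shell would express with pipes and redirections reads along these lines (a sketch based on the package’s API; see its documentation for the authoritative details):

p := pipe.Line(
        pipe.ReadFile("test.png"),
        pipe.Exec("convert", "-", "-resize", "250x", "-"),
        pipe.WriteFile("thumb.png", 0644),
)
if err := pipe.Run(p); err != nil {
        log.Fatal(err)
}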

I hope you like it as well, and please drop me a note if you find any issues.

niemeyer

There are a number of common misconceptions in software development surrounding the idea of concurrency. These have been around for decades, and some of them have just been reinforced one more time in an otherwise interesting post on LinkedIn’s engineering blog that recommends their development framework.

Such issues may be observed throughout the post, but can be elucidated via this short paragraph:

As we saw with the Scala and JavaScript examples above, for very simple cases, the Evented (asynchronous) code is generally more complicated than Threaded (synchronous) code. However, in most real-world scenarios, you’ll have to make several I/O calls, and to make them fast, you’ll need to do them in parallel.

At a glance, this may look like a sane proposition. There’s agreement that an asynchronous API or framework is one that does not block the flow of execution when faced with a task that has a long or non-predictable deadline, and this coding style is harder for human beings to get right. For example, if you see code such as:

data = read(filename)

There’s less brain work to process and build on it than so called asynchronous logic such as:

read(filename, callback)

It’s also true that there are important interfaces that follow the asynchronous style to prevent resource waste. Some of these exist in the kernel I/O API.

So what’s the issue, then?

There are a few. The first one is the statement that to make I/O scale you have to do it in parallel. That’s clearly not true. Scalable I/O requires your program to not waste an irresponsible amount of memory and CPU per operation. This may be achieved with simple concurrent techniques, and concurrency is not parallelism.

This leads to the next point, which is the strong association between synchronous programming and threads. You can have synchronous programming, and its simplified mental model, without operating system threads. This can be done by having a compiler and runtime that is mindful about performance and resource consumption, building on the efficient interfaces to implement its abstractions.
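
As a minimal sketch of what that looks like in practice (Go shown here, anticipating the point below), each goroutine is written in the plain synchronous style, yet the reads proceed concurrently and the runtime multiplexes the blocked goroutines over a small number of OS threads:

// readAll reads every named file concurrently, using only the standard
// library (log, os, sync).
func readAll(filenames []string) [][]byte {
        var wg sync.WaitGroup
        results := make([][]byte, len(filenames))
        for i, name := range filenames {
                wg.Add(1)
                go func(i int, name string) {
                        defer wg.Done()
                        data, err := os.ReadFile(name) // blocks this goroutine only
                        if err != nil {
                                log.Println(name, err)
                                return
                        }
                        results[i] = data
                }(i, name)
        }
        wg.Wait()
        return results
}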

These ideas have also been covered in this paper from 2003, including benchmark results that debunk the performance myth. What seems most interesting about this paper is that it theorizes such a compiler and runtime that would allow “overcom[ing] limitations in current threads packages and improv[ing] safety, programmer productivity, and performance”, by using techniques such as dynamic stack growth, stack moving, cheaper synchronization, and compile-time data race detection.

That exact mix, including all of the properties described in the paper, is available today in the Go language. You can have synchronous programming, concurrency, parallelism, and performance. We live in the future.

niemeyer

12 years ago

These ancient entries were taken from my old Advogato diary, written in my early twenties, a year after I joined the development team of Conectiva Linux. I’m copying them for historic purposes, with the content untouched. It’s curious to look back and have such details of what was going on at the time, things that feel good, and things that feel awkward such as the “Dear Diary, …” style of writing, and the amount of exclamations!!


6 Feb 2002

Wow!! It has been a long time since my last diary entry.

I’ve left Linuxconf development team coordination in favor of the Conectiva Linux port to the S390 platform coordination (ok, I’m mostly coordinating myself now :-) . Most of the work is done. I have developed an acceptable installer (in Python!) and most of the packages are ported. We had some problems with IBM OCO modules (ick!), but we are already workarounding them (we gave up on some of our kernel patches, and I patched insmod to recognize OCO modules). Anyway, more information about this later (if I don’t disappear for another year.. ;-)

In the process of porting Conectiva Linux to S390 and PPC (Harald Welve started the PPC port, and I’m keeping it up to date and working on missing stuff) we are learning some lessons. We are trying to use those lessons to build a defacto package building system using the Python language. Unfortunately, we don’t have enough people here to develop it quickly, so we are trying a more realistic and evolutive approach this time. The first part of it is almost done. While devloping it, I’ve studied a little bit about process groups and extended python with a missing killpg() system call. I’ve also discovered that when python spawns a new thread, it blocks all signals. With this information in mind, I have also extended it with a new execv() syscall, that besides doing the usual work, unblocks every signal before the real call to execv(). I hope this project becomes real someday.

I’ve also been playing with Python optimization lately. There’s a big opportunity for somebody wanting to study and implement some concepts there. I’ve read some documentation about Stack Machine Optimization and made some tries (basically, optimizations around the inner loop and the Big Switch, stack caching, and other flavors of this joy). Today I found a paper from Skip Montanaro documenting some of the tries I’ve made (reading it first would save me a lot of time, but this knowledge will be useful anyway). You should have a look at his paper if you have any interest in the topic. Oh, don’t forget to get yourself a copy of Lemburg’s pybench to have a general idea of what you’re doing (don’t trust too much on it, it’s just a benchmark). I’ve written Skip a mail to discuss a little bit about what could be integrated into the interpreter. Let’s see where we get.

Oh… I must not forget to update the people I’ve certified in the past to reflect what they’ve been doing.

27 Jan 2001

I’ve just tested the patch floppy_cs on kernel 2.2.17 and it works just fine!!! I had no problems applying it. Now my Libretto 50CT has a floppy drive. Maybe Conectiva Linux can ship with this patch. The only drawback is that floppy support must be modular.

In the few last days, I’ve implemented support for Inputgrid into gnome-linuxconf. It allows one to define sensitive areas into a drawing area. Gurus are already working with it.

I’ve also created two new commands for Linuxconf’s gui protocol: Splash and Hidesplash. Linuxconf will send them while it is starting, and the graphic frontend is suposed to show a nice splash screen. Support in gnome-linuxconf has also been implemented with an image designed by Everaldo (thanks!!!).

Btw, yesterday I’ve fixed a bug in Python’s bsddb module. It was handling DB_RECNO databases with string keys. As the documentation says, these databases must use, in key’s “data” field, a pointer to a memory location holding a recno_t type.

My parents have arrived yesterday!! They’ll stay here until monday and then will go to Minas Gerais, visiting Raul and Lulude.

22 Jan 2001

Yesterday I’ve implemented a message signing module for Mailman. Darian (aka dmalloc), from openprojects.net has asked if I could do such module to use on lists.openprojects.net. (I said yes… ;-) . I’m going to ask the people here at Conectiva if they want to use this module in some of our lists.

Pybot has a few new features: CTCP handling/sending, timer module, unhandled messages hook, and something else I probably forgot.

About Linuxconf, I have spent the last days fixing a few simple bugs introduced in 1.24r2 and in the last modules developed by Conectiva. I hope to release a Linuxconf update to Conectiva 6.0 tomorrow.

gnome-linuxconf has won a home with screenshots and everything else at http://distro.conectiva.com/projetos/45. I’ve also published it at freshmeat and put files to download at SourceForge.

Oh, good news I forgot to tell: for those of you that are using wxxt-linuxconf, Jack (Jacques Gelinas) has implemented the Treemenu icons into this frontend as well.

16 Jan 2001

Today I’ve fixed a bug in the Pythonmod module of Linuxconf. Linuxconf has a default handler for the SIGCHLD signal that controls all of its child termination. This method has a few disadvantages. Before calling any external processes without using default Linuxconf methods, you must block this handler, otherwise Linuxconf will get on your way. Because of this, If a Pythonmod module tried to fork external processes, they were failing. Now Pythonmod is setting the SIGCHLD signal to SIG_DFL (POSIX doesn’t allow us to SIG_IGN it) before calling python code, and after returning from a few Linuxconf API functions that set the handler back. When the python code returns, popen_initsignal() is called, putting the Linuxconf handler back in place.

On the gnome-linuxconf side, I’ve implemented the drawing context command Defpen. Now we have colored lines and primitives!! (ok… not that good… ;-)

I’ve also spent a few hours in the last two days backing up and restoring data in my colocated machine. Now my personal emails are back online and the server has an updated kernel. I hope it doesn’t bother me for a long time.

Unfortunately, the server stuff didn’t let me work on Pybot, but I had time to implement dynamically loading, unloading and reloading of modules, before I started on the server. This will help a lot in the development, since I don’t have to reboot the bot everytime something is wrong. Anyway, now that the server is ok (I hope so), I’m planning to spend some of my spare time on the bot (yes, I still have some… ;-) .

11 Jan 2001

Today I’ve added the ability of using icons while in the Treemenu mode of Linuxconf. I have just changed some functions to pass the icon name around until it got into the treemenu module and then sent it to the GUI front-end. A little hack on gnome-linuxconf did the work at the front-end side. Following this line of improvements, I’m planning to add a splash screen or something like that soon. Icons would also be welcome in the web interface.

Besides that, I’m also playing with a Python IRC bot. It’s not meant to be a war or a channel control bot. I’m planning to implement useful modules to help making IRC even more useful as an information media (no it won’t be just another infobot clone). The core and a few modules are ready. I’ll post more information later… for now, I’ll just tell that it is a multi-channel, multi-server bot, and that I’m trying to make its commands with natural language (eg. forward messages from #blah on servername to #bloh on servername).

Happy birthday Diogo!!

16 Aug 2000

Created advogato account.

niemeyer

A small and fun experiment is out:

niemeyer

Ethics for code reviewers

In the previous post, I explored a bit how ephemeral most of the artifacts of software development processes are. One of these processes is code reviewing, which is arguably a major player in code quality, knowledge acquisition, and even team dynamics.

Even being so important, the outcome of the code review process — the review itself — tends to reach a very limited audience and have a short lifetime. It’ll be hard to change that picture given the nature of reviews: they are conversational, and address specific issues for the integration of a change in the project. At the same time, even if code reviews are not generally useful as permanent documentation, we can increase their value as reference material by improving the quality of those conversations. Having a good conversation has many other great side effects, of course.

As a small step in that direction, what follows are personal guidelines that I have been evolving empirically over the years as a software developer and code reviewer. They may not bring you fortune and fame, and are not always easy to apply, but hopefully they will help improve your experience as a member of your team and increase the value of those reviews.

Explain why

Unless the change is about an extremely obvious mistake, explain why you’re suggesting it. If the reasoning had been natural to the author, he’d have done it that way in the first place. Good explanations also help avoid the same mistake over and over again, and are much more rewarding to the listener. They also become a reference for the future.

If you don’t have enough time to justify it and would rather provide the review sooner rather than later, one approach is to just recommend the change and invite the author for a conversation later if that would be helpful. That said, try to have that conversation over a medium that can be shared with the rest of the team, or recalled whenever necessary.

Be respectful

Always keep in mind that there’s a person on the other side of the wire, not a machine, and that it’s hard to understand written words with little context. Avoid letting anger and frustration leak into the review, even if you feel it is justified. There’s no good outcome in those situations.

It doesn’t matter who broke it, or who coded that silly piece of code. If there is broken code, and the project has reviews, multiple people were in the pipeline for that result, and they were trying to get it right. Take shared ownership of the problem, and look for the solution and for how to avoid such issues in the future.

Praise the good work

Reviews carry a somewhat low-energy feel by their very nature. No matter how positive you are about them, and how much the whole team understands and agrees it is for the best, you are in fact looking for places to put your finger in someone else’s work. For that reason, it is very helpful to take every chance you get to praise logic, design, code organization, or whatever else you honestly felt was well done. It won’t ever balance things out, but it will at least remind the author that the contributions are welcome.

Suggestions are appreciated

Perhaps a longer variable name would be helpful, or that constant could have a more descriptive name? In many circumstances, the change is indeed subjective, and the gain is pretty marginal. In those cases, if you really can’t resist the urge to say something, a good approach is a suggestion that may be exercised or not at the author’s discretion. Ideally, suggest several options that would feel better to you, so that your point is better understood and agreement is easier. That said, read on.

Avoid trivialities

When reviewing that very simple point, think to yourself: all things considered, does it actually matter? Is the cost of the author’s time, and the potential debate, really worth it? You surely have your opinion about whether to spell “min <= count” or “count >= min”, but so does everybody else. When it’s purely a matter of preference, the author is entitled to have one after all.

Small branches win

Code reviews are useful for a number of secondary reasons, but the primary goal of the code review is to analyze a proposed change, to fix it for inclusion, or to reject it. It’s often tempting to recommend further changes to be bundled onto the same review, but it’s important to keep some focus. Are these additional changes tightly related to the original idea, or would they rather be more appropriate on a future branch?

Also keep an eye on large review submissions. It’s quite rare to see changes of a thousand lines or more that are really an indivisible unit. More often, they end up like that organically, as a result of the workflow followed by the author. These branches may be very frustrating, both for the author and for reviewers. For the reviewer, it’s hard to keep the necessary level of attention and enthusiasm for the problem over extended periods of time. For the author, going through a large review is equally problematic. In some extreme cases, it may be worth going back and breaking down the change into smaller change sets.

Overall, fast iterations on small branches are much more rewarding to work with.

Work with inline comments

This is about tooling, but doing anything else should be considered unethical, really. If you don’t have a system that allows the change diff to be seen within the rest of the content, and comments to be made inline right where you see the issue, implement one right now. Moving to such a system was the most dramatic change in productivity I’ve had as a reviewer in the past several years, and it makes the whole experience a lot more bearable for everyone.

Enjoy!

Make sure you’re enjoying what you do, and appreciate what your code reviews are achieving. There’s little point in playing the role of an intelligent computer over extended periods of time if you are unhappy about it. Get yourself your preferred slow-drinking beverage (chimarrão?), perhaps some snacks, a comfortable chair, and relax.

niemeyer

Lately I’ve been considering the amount of waste we produce during software development, and how to increase the amount of recycled content. I’m not talking about actual trash, though, but rather about software development artifacts.

Over the years, we’ve learned about and put in practice several means for improving the quality and success rate of projects we create or contribute to. We have practices such as sprints to get people together with high communication bandwidth; we have code reviews for sharing knowledge and improving project quality; we’ve got technical leadership roles to mentor developers and guide the progress of projects; we’ve created kanban boards and burndown charts to help people visualize what they’re going through; and so on.

While all of that seems to have helped tremendously, there’s a sad fact about where we stand: the artifacts of most of these processes are local to their context, and very sensitive to time. That burndown chart is meaningless after it’s burned, and a kanban has no relevant history. Our technical leads indeed guide their teams, but their wisdom stays with the few people that had the chance to interact with them, and subjectively so. That brilliant code review from our best developers has a very limited audience, and rarely carries any meaning just days after it has been accomplished.

That last one is especially interesting. The process of reviewing code is an intense and very expensive task that takes a significant portion of the life of an active developer, and even then very little is carried forward as the outcome of that process. We have no effective means or even culture of sharing the generated wisdom with other teams. In fact, we rarely share these details even within the team itself. Why was that line changed like this? Why is an interface like that a bad idea? Who will instruct the new guy next week, and where did we record a bit of the wisdom of the brilliant guy that left the company recently?

Unfortunately there’s probably no easy solution for this problem. At this point, I mainly recognize that most of the efforts I’ve led to improve software development over the past several years had a very limited scope. The software itself became immediately better as a result of my efforts, its design became more sensible, and hopefully I contributed a bit to the growth of the people around me, but at a company or even community-wide scope, all of these code reviews, sprints, and IRC conversations are buried and only rarely revisited.

I want to start doing something about this, though. There must be a way to shape these conversations in a more reusable format; in a way that knowledge and agreement can be more proactively preserved and scattered. Perhaps it’s more about how than it is about what. Perhaps we just need to write more posts like this, and cover more topics related to daily development findings. Not sure. I’ll be thinking…

niemeyer

Our son Otávio was born recently. Right in the first few days, we decided to keep tight control on the feeding times for a while, as it is an intense routine pretty unlike anything else, and obviously critical for the health of the baby. I imagined that it wouldn’t be hard to find an Android app that would do that in a reasonable way, and indeed there are quite a few. We went with Baby Care, as it has a polished interface and more features than we’ll ever use. The app also includes some basic statistics, but not enough for our needs. Luckily, though, it is able to export the data as a CSV file, and post-processing that file with the R language is easy, and allows extracting some fun facts about what the routine of a healthy baby can look like in the first month, as shown below.

Otávio

The first thing to do is to import the raw data from the CSV file. It is a one-liner in R:

> info = read.csv("baby-care.csv", header=TRUE)

Then, this file actually comes with other events that won’t be processed now, so we’ll slice it and grab only the rows and columns of interest:

> feeding <- info[info$Event.type == "Breast",
        c("Event.subType", "Start.Time", "End.Time", "Duration")]

This is what it looks like:

> feeding[100:103,]
    Event.subType       Start.Time         End.Time Duration
129          Left 2013/01/04 13:45 2013/01/04 14:01    00:16
132          Left 2013/01/04 16:21 2013/01/04 16:30    00:09
134         Right 2013/01/04 17:46 2013/01/04 17:54    00:08

Now things get more interesting. Let’s extract that duration column into a more useful vector, and do some basic analysis:

> duration <- as.difftime(as.vector(feeding$Duration), "%H:%M")

> length(duration)
[1] 365

> total = sum(duration)
> units(total) = "hours"
> total
Time difference of 63.71667 hours

> mean(duration)
Time difference of 10.47397 mins
> sd(duration)
[1] 5.937172

A total of 63 hours surprised me, but the mean time of around 10 minutes per feeding is within the recommendation, and the standard deviation looks reasonable. It may be more conveniently pictured as a histogram:

> hist(as.numeric(duration), breaks="FD",
    col="blue", main="", xlab="Minutes")

Duration histogram

Another point we were interested in is whether both sides are properly balanced:

> sides <- c("  Right", "  Left")
> tapply(duration, feeding$Event.subType, mean)[sides]
   Right     Left 
10.72283 10.22099

Looks good.

All of the analysis so far goes over the whole period, but how has the daily intake changed over time? We’ll need an additional vector to compute this and visualize in a chart:

> day <- format(strptime(feeding$Start.Time, "%Y/%m/%d %H:%M"),
                "%Y/%m/%d")
> perday <- tapply(duration, day, sum)
> mean(perday)
[1] 136.5357
> sd(perday)
[1] 53.72735
> sd(perday[8:length(perday)])
[1] 17.49735

> plot(perday, type="h", col="blue", xlab="Day", ylab="Minutes")

Daily duration

The mean looks good, with about two hours every day. The standard deviation looks high at first, but it’s actually not that bad if we leave out the first few days. Looking at the graph shows why: the slope on the left-hand side, which is expected as there’s less milk and the baby has more trouble right after birth.

The chart shows a red flag, though: one day seems well below the mean. This is something to be careful about, as babies can get into a loop where they sleep too much and miss being hungry, the lack of feeding causes hypoglycemia, which causes more sleep, and it doesn’t end up well. A rule of thumb is to wake the baby up every two hours in the first few days, and at most every four hours once he stabilizes for the following weeks.

So, this was another point of interest: what are the intervals between feedings?

> start = strptime(feeding$Start.Time, "%Y/%m/%d %H:%M")
> end = strptime(feeding$End.Time, "%Y/%m/%d %H:%M")
> interval <- start[-1]-end[-length(end)]

> hist(as.numeric(interval), breaks="FD", col="blue",
       main="", xlab="Minutes")

Interval histogram

Seems great, with most feedings well under two hours. There’s a worrying outlier, though, of more than 6 hours. Unsurprisingly, it happened overnight:

> feeding$End.Time[interval > 300]
[1] 2013/01/07 00:52

It wasn’t a significant issue, but we don’t want that happening often while his body isn’t yet ready to hold enough energy for a full night of sleep. That’s the kind of reason we’ve been monitoring him, and it is important because our own bodies are eager to get full nights of sleep, which opens the door to unintended slack. As a reward for that kind of control, we’ve got the chance to enjoy not only his health, but also an admirable mood.

Love, Dad.

niemeyer

I’m glad to announce experimental support for multi-document transactions in the mgo driver that integrates MongoDB with the Go language. The support is done via a driver extension, so it works with any MongoDB release supported by the driver (>= 1.8).

Features

Here is a quick highlight list to get your brain ticking before the details:

  • Supports sharding
  • Operations may span multiple collections
  • Handles changes, inserts and removes
  • Supports pre-conditions
  • Self-healing
  • No additional locks or leases
  • Works with existing data

Let’s see what these actually mean and how the goodness is done.


The problem being addressed

The typical example is a bank transaction: imagine you have two documents representing accounts for different people, and you want to transfer 100 bucks from Aram to Ben. Despite the apparent simplicity in that description, there are a number of edge cases that turn it into a non-trivial change.

Imagine an agent processing the change following these steps:

  1. Is Ben’s account valid?
  2. Take 100 bucks out of Aram’s account if its balance is above 100
  3. Insert 100 bucks into Ben’s account

Note that this description already assumes the availability of some single-document atomic operations as supported by MongoDB. Even then, how many race conditions and crash-related problems can you count? Here are some spoilers that hint at the problem complexity:

  • What if Ben cancels his account after (1)?
  • What if the agent crashes after (2)?

How it works

Thanks to the availability of single-document atomic operations, it is possible to craft a sequence of changes that manipulate documents in a way that supports multi-document transactional behavior. This works as long as the clients agree to use the same conventions.

This isn’t exactly news, though, and there’s even documentation describing how one can explore these ideas. The challenge is in crafting a generic mechanism that not only does the basics but goes beyond by supporting inserts and removes, being workload agnostic, behaving correctly on crashes (!), and yet remaining pleasant to use. That’s the territory being explored.

The implemented semantics offers an isolation level that allows non-repeatable reads to occur (a partially committed transaction is visible), but the changes are guaranteed to only be visible in the order specified in the transaction, and once any change is done the transaction is guaranteed to be applied completely without intervening changes in the affected documents (no dirty reads). Among other things, this means one can use any existing mechanism at read time.

When writing documents that are affected by the transaction mechanism, one must necessarily use the API of the new mgo/txn package, which ended up surprisingly thin and straightforward. In other words for emphasis: if you modify fields that are affected by the transaction mechanism both with and without mgo/txn, it will misbehave arbitrarily. Fields that are read or written by mgo/txn must only be changed using mgo/txn.

Using the example described above, the bank account transfer might be done as:

runner := txn.NewRunner(tcollection)
ops := []txn.Op{{
        C:      "accounts", 
        Id:     "aram",
        Assert: M{"balance": M{"$gte": 100}},
        Update: M{"$inc": M{"balance": -100}},
}, {
        C:      "accounts",
        Id:     "ben",
        Assert: M{"valid": true},
        Update: M{"$inc": M{"balance": 100}},
}}
id := bson.NewObjectId() // Optional
err := runner.Run(ops, id, nil)

The assert and update values are ordinary MongoDB query and update documents. The tcollection is a MongoDB collection that is used to atomically insert the transaction details into the database. As long as that document makes it into the database, the transaction is guaranteed to be eventually entirely applied or entirely aborted. The exact moment when this happens is defined by whether there are other transactions in progress and whether a communication problem occurs and when it occurs, as described below.

Concurrency and crash-proofness

Perhaps the most interesting piece of the puzzle when coming up with a nice transaction mechanism is defining what happens when an agent misbehaves, even more in a world where there are multiple distributed transaction runners. If there are locks, someone must unlock when a runner crashes, and must know the difference between running slowly and crashing. If there are leases, the lease boundary becomes an issue. In both cases, the speed of the overall system would become bounded by the speed of the slowest runner.

Instead of falling onto those issues, the implemented mechanism observes the transactions being attempted on the affected documents, orders them in a globally agreed way, and pushes all of their operations concurrently.

To illustrate the behavior, imagine again the described scenario of bank transfers:

Suppose there are two transactions being attempted, T1 and T2. The first is a transfer from Aram to Ben, and the second is a transfer from Ben to Carl. If a runner starts executing T2 while T1 is still being applied by a different runner, the runner of T2 will pick T1 up and complete it before starting to work on T2, which is its real goal. This works even if the original runner of T1 died while T1 was in progress. In reality, there’s little difference between the original runner of T1 and another runner that observes T1 on its way.

There’s a chance that T1’s runner died too soon, though, before it even had a chance to start the transaction by tagging Ben’s account document as participating in it. In that case, T2 will be pushed forward by its own runner independently, since there’s nothing in its way. T1 isn’t lost, though, and it may be resumed at any point by calling the runner’s Resume or ResumeAll methods.
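
Resuming those pending transactions is trivial (a quick sketch; see the mgo/txn documentation for details):

// Flush any transactions left half-applied by runners that went away.
if err := runner.ResumeAll(); err != nil {
        log.Fatal(err)
}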

The whole logic is implemented without introducing any new globally shared point of coordination. It works if documents are in different collections, different shards, and it works even if the transaction collection itself is sharded across multiple backends for scalability purposes.

The testing approach

While a lot of thinking was put onto the way the mechanism works, this is of course non-trivial and bug-inviting logic. In an attempt to nail down bugs early on, a testing environment was put in place to simulate multiple runners in a conflicting workload. To make matters more realistic, this simulation happens in a harsh scenario with faults and artificial slowdowns being randomly injected into the system. At the end, the result is evaluated to see if the changes performed respected the invariants established.

While hundreds of thousands of transactions have been successfully run in this fashion, the package should still be considered experimental at this point, and its API is still prone to change.

There’s one race

There’s one known race that’s worth mentioning, and it was consciously left there for the moment as a tradeoff. The race shows itself when inserting a new document, at the point in time when the decision has been made that the insert is genuinely good. At this exact moment, if that runner is frozen for long enough to allow a different runner to insert the document and remove it again, and the original runner is then unfrozen without any errors or timeouts, it will naturally go on and insert the new document.

There are multiple solutions for this problem, but they present their own disadvantages. One solution would be to manipulate the document instead of removing it, but that would leave the collection with ghost content that has to be cared for, and that’s an unwanted side effect. A second solution would be to use the internal applyOps machinery that MongoDB uses in its sharding implementation, but that would mean that collections affected by transactions couldn’t be sharded, which is another unwanted side effect (please vote for SERVER-1439 so we can use it).

Have fun!

I hope the package serves you well, and if you would like to talk further about it, please join the mgo-users mailing list and drop a message.

niemeyer

Rob Pike just wrote an article/talk that is the best background on the origins of Go yet.

It surprises me how much his considerations match my world view pre-Go, and in a sense give me a fulfilling explanation of why I got hooked on the language. I still recall sitting in a hotel years ago with Jamu Kakar while we went through the upcoming C++0x standard (now C++11) and got perplexed about how someone could think that having details such as rvalue references and move constructors in the language specification was something reasonable.

Rob also expressed again the initial surprise that developers using languages such as Python and Ruby were more often the ones willing to migrate towards Go, rather than those using C++, with some reasonable explanations about why that is so. While I agree with his considerations, I see Python going through the same kind of issue that caused C++ to be what it is today.

Consider this excerpt from PEP 0380 as evidence:

If yielding of values is the only concern, this can be performed without much difficulty using a loop such as

for v in g:
    yield v

However, if the subgenerator is to interact properly with the caller in the case of calls to send(), throw() and close(), things become considerably more difficult. As will be seen later, the necessary code is very complicated, and it is tricky to handle all the corner cases correctly.

A new syntax will be proposed to address this issue. In the simplest use cases, it will be equivalent to the above for-loop, but it will also handle the full range of generator behaviour, and allow generator code to be refactored in a simple and straightforward way.

This description has the same DNA that creates the C++ problem Rob talks about. Don’t get me wrong, I’m sure yield from will make a lot of people very happy, and that’s exactly the tricky part. It’s easy and satisfying to please a selection of users, but often that leads to isolated solutions that create new cognitive load and new corner cases that in turn lead to new requirements.

The history of generators in Python is especially telling:

  • PEP 0234 [30-Jan-2001] – Iterators – Accepted
  • PEP 0255 [18-May-2001] – Simple Generators – Accepted
  • PEP 0288 [21-Mar-2002] – Generators Attributes and Exceptions – Withdrawn
  • PEP 0289 [30-Jan-2002] – Generator Expressions – Accepted
  • PEP 0325 [25-Aug-2003] – Resource-Release Support for Generators – Rejected
  • PEP 0342 [10-May-2005] – Coroutines via Enhanced Generators – Accepted
  • PEP 0380 [13-Feb-2009] – Syntax for Delegating to a Subgenerator – Accepted

You see the rabbit hole getting deeper? I’ll clarify it further by rephrasing the previous quote from PEP 0380:

If [feature from PEP 0255] is the only concern, this can be performed without much difficulty using a loop [...] However, if the subgenerator is to interact properly with [changes from PEP 0342] things become considerably more difficult. [So we need feature from PEP 0380.]

Yet, while the language grows by handling self-inflicted micro-problems, the real issue remains unsolved. All of these features are simplistic forms of concurrency and communication that don't truly satisfy developers, causing community fragmentation.

This happened to C++, to Python, and to many other languages. Go seems slightly special in that regard: its core development team has an outstanding respect for simplicity, yet dares to solve the difficult problems at their root, while keeping the solutions orthogonal so that they support each other. Less is more, and it's not always straightforward.
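
To make the contrast concrete, here is a minimal sketch of how the pattern that yield from addresses maps onto plain goroutines and channels (the names below are purely illustrative): delegating to a "subgenerator" is just a function call that writes to the same channel, and the send and close semantics come for free.

package main

import "fmt"

// sub plays the role of a subgenerator: it simply writes to the
// channel it is handed, so no special delegation syntax is needed.
func sub(out chan<- int) {
        for i := 0; i < 3; i++ {
                out <- i
        }
}

// gen delegates to sub with a plain function call, yields one more
// value, and closes the channel to signal termination.
func gen(out chan<- int) {
        sub(out)
        out <- 42
        close(out)
}

func main() {
        out := make(chan int)
        go gen(out)
        for v := range out {
                fmt.Println(v)
        }
}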

Read more
Gustavo Niemeyer

The Easter mgo release

It wasn’t just the bunny that was active over the holidays. The r2012.04.08 release of the mgo MongoDB driver for Go has just been tagged. This release is supposed to be entirely compatible with the last release, and there are some nice improvements and a few important bug fixes, so upgrading is recommended.

For the impatient, here is a quick summary of the changes performed:

  • Bug in Limit method fixed
  • Overflow in marshaling of time.Time fixed
  • omitempty support for time.Time fields
  • Better slave selection
  • Hard per-server connection limit
  • Improved performance of query error checking
  • Sort method arguments simplified
  • Added Session.Fsync, FsyncLock, and FsyncUnlock methods
  • Added Query.Snapshot method

If you want more details about any of these, keep reading.

Bug in Limit method fixed

The Limit method was fixed. It was improperly causing the data to be returned in a single chunk from the server, which would mean fewer documents being processed if the data retrieved went over 4MB. Thanks to Jeff Wendling for reporting the issue.

Overflow in marshaling of time.Time fixed

The marshaling of time.Time was overflowing due to the use of UnixNano, which meant certain times would be marshaled as an arbitrarily wrong value. The zero time is an important case that was mishandled, for example. If there's a chance your application may be affected, you can look for this date in the database to confirm:

1754-08-30 22:43:41.128654848 +0000 UTC

If you find it, that’s actually a zero time.
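
As a rough way to check, a sketch along these lines could count documents whose time value falls on that date, with people and birthday standing in for your own collection and field names (the usual time, log, fmt, and mgo/bson imports are assumed):

// The mis-marshaled zero time shows up on 1754-08-30, so a range
// query covering that day is enough to spot affected documents.
start, err := time.Parse("2006-01-02", "1754-08-30")
if err != nil {
        log.Fatal(err)
}
end := start.AddDate(0, 0, 1)
n, err := people.Find(bson.M{"birthday": bson.M{"$gte": start, "$lt": end}}).Count()
if err != nil {
        log.Fatal(err)
}
fmt.Println("documents carrying the mis-marshaled zero time:", n)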

The problem was fixed and such times will now be marshaled correctly. Thanks to Mikael for reporting the issue on the mgo-users mailing list.

omitempty support for time.Time fields

The omitempty bson tag flag is now supported for time.Time values as well. If used, the time is only marshaled into the bson document if IsZero returns false. For example:

type Person struct {
        Birthday time.Time `bson:",omitempty"`
}

Better slave selection

The slave selection algorithm was changed so that mgo will now pick the server for which it has the least number of connections open at the moment, rather than being a random selection. More improvements in this area will come soon.

Hard per-server connection limit

There’s now a hard limit for the number of concurrent connections to a single server. If that limit is reached, the client will block waiting for connections to be released.

The default limit is currently 4096 connections per server for each mgo client, but this logic will be improved in the near future as well to take into account the number of connections currently available at the server side.

Note that this is a fire protection mechanism. This limit is not supposed to be touched under normal operation. If you have higher needs right now, please get in touch.

The development of this feature was sponsored by Iron.io.

Improved performance of query error checking

The logic that verified query results for errors was on the expensive side for large documents, as pointed out by Nils Hasenbanck in the mailing list. This has been significantly improved.

Sort method arguments simplified

The Sort method now takes a list of field names as arguments rather than a document. Field names may be prefixed by - (minus) to sort in descending order.

For example, what would previously be written as:

query := c.Find(q).Sort(bson.M{"a": 1, "b": -1})

Is now written as:

query := c.Find(q).Sort("a", "-b")

The previous format is still supported right now for compatibility reasons, but it is deprecated and will eventually be dropped entirely.

More details at:

http://godoc.labix.org/mgo#Query.Sort

Added Session.Fsync, FsyncLock, and FsyncUnlock methods

The new Session.Fsync method requests a synchronous or asynchronous flush of in-memory changes to disk, while the FsyncLock and FsyncUnlock methods do the opposite: they handle a lock at the server side that blocks any follow-up write requests (and read requests too; see the documentation). This is useful, for example, to perform backups reliably.

See the documentation of these methods for more details:

http://godoc.labix.org/mgo#Session.Fsync
http://godoc.labix.org/mgo#Session.FsyncLock
http://godoc.labix.org/mgo#Session.FsyncUnlock
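
As a rough sketch of the backup use case, with the server address being illustrative and the actual backup step left out:

session, err := mgo.Dial("localhost")
if err != nil {
        log.Fatal(err)
}
defer session.Close()

// Block further writes at the server side while the backup runs.
if err := session.FsyncLock(); err != nil {
        log.Fatal(err)
}
// ... take the filesystem snapshot or copy the data files here ...
if err := session.FsyncUnlock(); err != nil {
        log.Fatal(err)
}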

Added Query.Snapshot method

The new helper method wraps the snapshot command. More details in the documentation:

http://godoc.labix.org/mgo#Query.Snapshot
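
As a quick illustrative sketch, with the collection c and the query contents made up, Snapshot flags the query with $snapshot so that a document is not returned more than once even if it is updated and moved while the results are being traversed:

// c and the selector are hypothetical; the results can then be
// iterated or fetched as usual.
query := c.Find(bson.M{"qty": bson.M{"$gt": 10}}).Snapshot()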

Read more
Gustavo Niemeyer

Back at the Ubuntu Platform Rally last week, I pestered some of the Bazaar team with questions about co-locating branches in the same directory with Bazaar. The great news is that this really seems to be coming in the next release, with first-class integration of the feature in the command set. Unfortunately, though, it's not quite ready yet for prime time, or even for I'm-crazy-and-want-this-feature time.

Some background on why this feature has become quite important right now may be interesting, since life with Bazaar over the past years hasn't really brought it up as a blocker. The cause for the new interest lies in some recent changes in the toolset of the Go language. The new go tool not only makes building and interacting with Go packages a breeze, but it also solves a class of problems that previously existed. For the go tool to work, though, it requires $GOPATH to be used consistently, and this means that a package has to live in a well-defined directory. The traditional way Bazaar manages branches, each in its own directory, then becomes a deal breaker.

So, last week I had the chance to exchange some ideas with Jelmer Vernooij and Vincent Ladeuil (both Bazaar hackers) on these problems, and they introduced me to the approach of using lightweight checkouts to work around some of the limitations. Lightweight checkouts in Bazaar make the working tree resemble the old-style VCS tools a little, with the working tree being bound to another location that actually holds the core content. The idea is great, and given how well lightweight checkouts work with Bazaar, building a full-fledged solution shouldn't really be a lot of work.

After that conversation, I put a trivial hack together that would make bzr look like git from the outside, by wrapping the command line, and did a lightning talk demo. This got a few more people interested in the concept, which was enough motivation for me to move the idea forward into a working implementation. Now I just needed the time to do it, and it wasn't too hard to find either.

I happen to be part of the unlucky group that too often takes more than 24 hours to get back home from these events. This is not entirely bad, though. I also happen to be part of the lucky group that can code while flying and riding buses as a means to relieve the boredom (reading helps too). This time around, cobzr became the implementation of choice, and after ~10 hours of coding, we have a very neat and over-engineered wrapper for the bzr command.

The core of the implementation is the same as the original hack: wrap bzr and call it from the outside to restructure the tree. That said, rather than doing lazy and hackish line parsing, it actually parses bzr's --help output to build a base of supported options, and parses the command line exactly like Bazaar itself would, validating options as it goes and distinguishing flags that take arguments from positional parameters. That enables the proxying to do much more interesting work on the intercepted arguments.

Here is a quick session that shows a branch being created with the tool. It should look fairly familiar for someone used to git:


[~]% bzr branch lp:juju
Branched 443 revisions.

[~]% cd juju
[~/juju]% bzr branch
* master

[~/juju]% bzr checkout -b new-feature
Shared repository with trees (format: 2a)
Location:
shared repository: .bzr/cobzr
Branched 443 revisions.
Branched 443 revisions.
Tree is up to date at revision 443.
Switched to branch: /home/niemeyer/juju/.bzr/cobzr/new-feature/

[~/juju]% bzr branch other-feature
Branched 443 revisions.

[~/juju]% bzr branch
  master
* new-feature
  other-feature

Note that cobzr will not reorganize the tree layout until multiple-branch support is actually needed.

Even though the wrapping is taking place and bzr's --help output is parsed, there's pretty much no noticeable overhead, given the use of Go for the implementation and the fact that the processed --help output is cached (I said it was over-engineered).

As an example, the first is the real bzr, while the second is a link to cobzr:


[~/juju]% time /usr/bin/bzr status
/usr/bin/bzr status 0.24s user 0.03s system 88% cpu 0.304 total

[~/juju]% time bzr status
bzr status 0.19s user 0.08s system 88% cpu 0.307 total

This should be more than enough for surviving comfortably until bzr itself comes along with first class support for co-located branches in the next release.

In case you’re interested in using it or are just curious about the command set or other details, please check out the web page for the project:

Read more
Gustavo Niemeyer

A long time before I seriously got into using distributed version control systems (DVCS) such as Bazaar and Git for developing software, it was already well known to me how the mechanics of these systems worked, and why people benefited from them. That said, it wasn’t until I indeed started to use DVCS tools that I understood how much my daily workflow around code bases would be changed and improved.

This weekend, while flying home from MongoSV, I could experience that same feeling in relation to first-class concurrency support in programming languages. Everybody knows how the feature may be used, but I have the feeling that until one actually experiences it in practice, it's very hard to really understand how much it can improve one's relationship with ordering while developing software.

I was having some fun working on improvements to Goetveld. This package allows Go programs to communicate with Rietveld servers to manipulate code review entries. The Rietveld API is a bit rough in a few places, and as a result some features of the package actually parse an HTML form to extract some data before sending it back. You may have done something similar before while attempting to script a web site that wasn't originally intended to be scripted.

The interesting fact here is that this is an intrinsically serial procedure: load a form, change it, and send it back, right? Well, not really. As one might intuitively expect, establishing an SSL session and its underlying TCP connection are not instantaneous operations.

To give an idea, here is part of a dump of an SSL connection being initiated (that is, no HTTP data was sent yet) to codereview.appspot.com, originated from my home location:

# tcpdump -ttttt -i wlan0 'host codereview.appspot.com and port 443'
(...)
00:00:00.000000 IP (...)
00:00:00.000063 IP (...)
00:00:00.000562 IP (...)
00:00:00.341627 IP (...)
00:00:00.357009 IP (...)
00:00:00.357118 IP (...)
00:00:00.360362 IP (...)
00:00:00.360550 IP (...)
00:00:00.366011 IP (...)
00:00:00.689446 IP (...)
00:00:00.727693 IP (...)

That's more than half a second before the application layer was even touched. So, it turns out that to save that roundtrip time, we can start both the form loading and the form sending requests at the same time. By the time the form loading ends, processing the data locally is extremely fast, and we can complete the sending side by just providing the request body.

At this time you may be thinking something like “Ugh, that’s too much trouble.. why bother?”, and that highlights precisely the point I’d like to make: it is too much trouble because most people are used to languages that turn it into too much trouble, but the issue is not inherently complex. In fact, this is the entire implementation of this logic in Go:

func (r *Rietveld) UpdateIssue(issue *Issue) error {
        op := &opInfo{r: r, issue: issue}
        errs := make(chan error)
        ch := make(chan map[string]string, 1)
        go func() {
                errs <- r.do(&editLoadHandler{op: op, form: ch})
                close(ch)
        }()
        go func() {
                errs <- r.do(&editHandler{op: op, form: ch})
        }()
        return firstError(2, errs)
}

I'm not cheating. The procedure was being done serially before, with very similar logic. Previously it had to take the form variable itself from the first request and manually provide it to the next one. Now, instead of providing the form, it's providing a channel that will be used to send the form across. One might even argue that the channel makes the algorithm more natural, curiously.
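
The firstError helper isn't shown in the excerpt above; a minimal version of it, not necessarily the exact one in goetveld, simply drains the expected number of results and keeps the first non-nil error:

// firstError waits for n results on errs and returns the first
// non-nil error seen, so both goroutines are always accounted for.
func firstError(n int, errs <-chan error) error {
        var first error
        for i := 0; i < n; i++ {
                if err := <-errs; err != nil && first == nil {
                        first = err
                }
        }
        return first
}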

This is the kind of procedure that becomes fun and natural to write, after having first class concurrency at hand for some time. But, as in the case of DVCS, it takes a while to get used to the idea that concurrency and simplicity are not necessarily at opposing ends.

Read more
niemeyer

In the past week, I’ve finally stopped to fix something that I’ve been wishing for years: inline code reviews in Launchpad. Well, I haven’t exactly managed fix it in Launchpad, but the integration with Rietveld feels nice enough to be relatively painless.

The integration is done using the lbox tool, that was developed in Go using the lpad package for the communication with Launchpad, and a newly written rietveld package for communication with Rietveld.

If you want to join me in my happiness, here are the few steps to get this working for you as well.

First, install lbox from the Launchpad PPA. Since it’s written in Go, it has no dependencies.

$ sudo add-apt-repository ppa:gophers/go
$ sudo apt-get update
$ sudo apt-get install lbox

Now, as an example of using it, let’s suppose we want to perform a change in the lbox code itself. First, we take the branch out of Launchpad.

$ mkdir hacking
$ cd hacking
$ bzr branch lp:lbox
Branched 9 revision(s).

Then, let’s create a feature branch based on the original trunk, and perform a change.

$ bzr branch lbox my-nice-feature
Branched 9 revision(s).

$ cd my-nice-feature
$ echo "# Yo" >> Makefile
$ bzr commit -m "Yo-ified makefile"
Committing to: /home/user/hacking/my-nice-feature/
modified Makefile
Committed revision 10.

Ok, we’re ready for the magic step, which is actually pushing that branch and proposing the merge on the original branch on both Launchpad and Rietveld. It’s harder to explain than to do it:

$ lbox propose -cr
2011/11/17 23:29:49 Looking up branch information for "."...
2011/11/17 23:29:49 Looking up branch information for "hacking/lbox"...
2011/11/17 23:29:49 Found landing target: bzr+ssh://bazaar.../lbox/
(...)

This command will ask you for a few details interactively, like your authentication details in Launchpad and in Rietveld (your Google Account, details sent over SSL to Google itself; you may have to visit Rietveld first for that to work), and also the change description.

In case something fails, feel free to simply execute the command again, as many times as you want. The command is smart enough to figure out that a merge proposal and a Rietveld change already exist, and will update them with the new details you provide rather than duplicating work.

Once the command finishes, you can visit the URL for the merge proposal in Launchpad that was printed, and you should see something like this:

Note that the change description already includes a link onto the Rietveld issue at codereview.appspot.com. The issue on Rietveld will look something like this:

Observe how the issue has the same description as the merge proposal, but it links back onto the merge proposal. At the left-hand side, there’s also an interesting detail: the original merge proposal email has been added as the reviewer of this change. This means that any changes performed in Rietveld will be mailed back onto the merge proposal for its record.

In the center you can find the meat of the whole work: the actual change set that is being reviewed. Rietveld works with patch sets, so that you can not only see a given change, but you can also review the history of proposals that the proponent has made, and any inline comments performed in them.

Click on the side-by-side link next to Makefile to get an overview of the actual change, and to make comments on it just click on the desired line:

Your comments won't be sent immediately. Once you're done making comments and want to deliver the review, click on the “Publish+Mail Comments” link at the top-right, which will take you to a page where you can add any heading details you wish before delivering the review.

Since the merge proposal is registered as the reviewer of the issue in Rietveld, publishing the review will deliver a message back onto the merge proposal itself, including context links that enable anyone to be taken to the precise review point back in Rietveld:

Then, once you do make the suggested changes and want to publish a new version of the branch, simply repeat the original command: “lbox propose -cr”. This will push the new diff onto Rietveld and create a new patch set. You’ll also be given the chance to edit the previous description, and any changes there will take place both in the merge proposal and in the Rietveld issue.

lbox also has other useful command line options, such as -bug and -new-bug to associate Launchpad bugs with the merge proposal and put them in progress, or -bp to associate a blueprint with the branch and the bug (if provided) being handled.

This should turn your code reviews in Launchpad into significantly more pleasant tasks, and maybe even save some of your precious life time for more interesting activities.

Happy reviewing!

Read more
niemeyer

Certainly one of the reasons why many people are attracted to the Go language is its first-class concurrency aspects. Features like communication channels, lightweight processes (goroutines), and proper scheduling of these are not only native to the language but are integrated in a tasteful manner.

If you stay around listening to community conversations for a few days there’s a good chance you’ll hear someone proudly mentioning the tenet:

Do not communicate by sharing memory; instead, share memory by communicating.

There is a blog post on the topic, and also a code walk covering it.

That model is very sensible, and being able to approach problems this way makes a significant difference when designing algorithms, but that’s not exactly news. What I address in this post is an open aspect we have today in Go related to this design: the termination of background activity.

As an example, let’s build a purposefully simplistic goroutine that sends lines across a channel:

type LineReader struct {
        Ch chan string
        r  *bufio.Reader
}

func NewLineReader(r io.Reader) *LineReader {
        lr := &LineReader{
                Ch: make(chan string),
                r:  bufio.NewReader(r),
        }
        go lr.loop()
        return lr
}

The type has a channel from which the client can consume lines, and an internal buffer used to produce the lines efficiently. Then, we have a function that creates an initialized reader, fires the reading loop, and returns. Nothing surprising there.

Now, let’s look at the loop itself:

func (lr *LineReader) loop() {
        for {
                line, err := lr.r.ReadSlice('\n')
                if err != nil {
                        close(lr.Ch)
                        return
                }
                lr.Ch <- string(line)
        }
}

In the loop we'll grab a line from the buffer, close the channel in case of errors and stop, or otherwise send the line to the other side, perhaps blocking while the other side is busy with other activities. Should sound sane and familiar to Go developers.

There are two details related to the termination of this logic, though: first, the error information is being dropped, and second, there's no way to cleanly interrupt the procedure from outside. The error might easily be logged, of course, but what if we wanted to store it in a database, send it over the wire, or even handle it taking its nature into account? Stopping cleanly is also a valuable feature in many circumstances, such as when one is driving the logic from a test runner.

I'm not claiming this is something difficult to do, by any means. What I'm saying is that there isn't today an idiom for handling these aspects in a simple and consistent way. Or maybe there wasn't. The tomb package for Go is an experiment I'm releasing today in an attempt to address this problem.

The model is simple: a Tomb tracks whether the goroutine is alive, dying, or dead, and the death reason.

To understand that model, let's see the concept being applied to the LineReader example. As a first step, creation is tweaked to introduce Tomb support:

type LineReader struct {
        Ch chan string
        r  *bufio.Reader
        t  tomb.Tomb
}

func NewLineReader(r io.Reader) *LineReader {
        lr := &LineReader{
                Ch: make(chan string),
                r:  bufio.NewReader(r),
        }
        go lr.loop()
        return lr
}

Looks very similar. Just a new field in the struct, and the function that creates it hasn't even been touched.

Next, the loop function is modified to support tracking of errors and interruptions:

func (lr *LineReader) loop() {
        defer lr.t.Done()
        for {
                line, err := lr.r.ReadSlice('\n')
                if err != nil {
                        close(lr.Ch)
                        lr.t.Kill(err)
                        return
                }
                select {
                case lr.Ch <- string(line):
                case <-lr.t.Dying():
                        close(lr.Ch)
                        return
                }
        }
}

Note a few interesting points here: first, Done is called to track the goroutine termination right before the loop function returns. Then, the previously dropped error now goes into the Tomb's Kill method, flagging the goroutine as dying. Finally, the channel send was tweaked so that it doesn't block in case the goroutine is dying for whatever reason.

A Tomb has both Dying and Dead channels returned by the respective methods, which are closed when the Tomb state changes accordingly. These channels enable explicit blocking until the state changes, and also to selectively unblock select statements in those cases, as done above.

With the loop modified as above, a Stop method can trivially be introduced to request the clean termination of the goroutine synchronously from outside:

func (lr *LineReader) Stop() error {
        lr.t.Kill(nil)
        return lr.t.Wait()
}

In this case the Kill method will put the tomb in a dying state from outside the running goroutine, and Wait will block until the goroutine terminates itself and notifies via the Done method as seen before. This procedure behaves correctly even if the goroutine was already dead or in a dying state due to internal errors, because only the first call to Kill with an actual error is recorded as the cause for the goroutine death. The nil value provided to t.Kill is used as a reason when terminating cleanly without an actual error, and it causes Wait to return nil once the goroutine terminates, flagging a clean stop per common Go idioms.
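
To tie it together, here is a small hypothetical usage sketch for the type above, consuming lines until the channel closes and then collecting the termination reason:

lr := NewLineReader(os.Stdin)
for line := range lr.Ch {
        fmt.Print(line)
}
// The channel was closed: either the reader hit EOF or an error, or
// someone else requested a stop. Stop reports the recorded reason.
if err := lr.Stop(); err != nil && err != io.EOF {
        log.Fatal(err)
}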

This is pretty much all that there is to it. When I started developing in Go I wondered if coming up with a good convention for this sort of problem would require more support from the language, such as some kind of goroutine state tracking in a similar way to what Erlang does with its lightweight processes, but it turns out this is mostly a matter of organizing the workflow with existing building blocks.

The tomb package and its Tomb type are a tangible representation of a good convention for goroutine termination, with familiar method names inspired by existing idioms. If you want to make use of it, go get the package with:

$ go get launchpad.net/tomb

The API documentation with details is available at:

http://gopkgdoc.appspot.com/pkg/launchpad.net/tomb

Have fun!

UPDATE 1: there was a minor simplification in the API since this post was originally written, and the post was changed accordingly.

UPDATE 2: there was a second simplification in the API since this post was originally written, and the post was changed accordingly once again to serve as reference.

Read more
Gustavo Niemeyer

About 1 year after development started in Ensemble, today the stars finally aligned just the right way (review queue mostly empty, no other pressing needs, etc) for me to start writing the specification about the repository system we’ve been jointly planning for a long time. This is the system that the Ensemble client will communicate with for discovering which formulas are available, for publishing new formulas, for obtaining formula files for deployment, and so on.

We would of course have liked this part of the project to have been specified and written a while ago, but unfortunately that wasn't possible for several reasons. That said, there are also good sides to having an important piece flying around in minds and conversations for such a long time: sitting down to specify the system and describe the inner-working details has been a breeze. Even details such as the namespacing of formulas, which hadn't been entirely clear in my mind, just streamed into the document as the ideas we've been evolving finally came together in written form.

One curious detail: this is the first long term project at Canonical that will be developed in Go, rather than Python or C/C++, which are the most used languages for projects within Canonical. Not only that, but we’ll also be using MongoDB for a change, rather than the traditional PostgreSQL, and will also use (you guessed) the mgo driver which I’ve been pushing entirely as a personal project for about 8 months now.

Naturally, with so many moving parts that are new to the company culture, this is still being seen as a closely watched experiment. Still, this makes me highly excited, because when I started developing mgo, the MongoDB driver for Go, my hopes that the Go, MongoDB, and mgo trio would eventually be used at Canonical were very low, precisely because they were all alien to the culture. We only got here after quite a lot of internal debate, experiments, and trust too.

All of that means these are happy times. Important feature in Ensemble being specified and written, very exciting tools, home grown software being useful..

Awesomeness.

Read more