Canonical Voices

Posts tagged with 'design'

niemeyer

This weekend I finally had the proper environment for sorting out a pet peeve that shows up every once in a while when coding: writing logic that interacts with other applications in the system via their stdin and stdout streams is often more involved than it should be, which seems pretty ironic when sitting in front of a Unix-like system.

Rather than going through the trouble of setting up pipes and hooking them up in a custom way, applications often end up just delegating the job to /bin/sh, which is not ideal for a number of reasons: argument formatting isn’t straightforward, injecting custom application-defined logic is hard, which means even simple tasks that might easily be achieved within the language end up shelling out to further external applications, and so on.

In an attempt to address that, I’ve spent some time working on an experimental Go package that is being released today: pipe.
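To give a taste of it, here is a rough sketch of the kind of composition the package enables. The helper names used below (Line, Exec, Output) and the import path reflect my understanding of the package as released, so treat the details as assumptions rather than a definitive reference:

package main

import (
        "fmt"

        "launchpad.net/pipe" // Import path assumed from the initial release.
)

func main() {
        // The equivalent of the shell line: echo hello world | tr a-z A-Z
        p := pipe.Line(
                pipe.Exec("echo", "hello", "world"),
                pipe.Exec("tr", "a-z", "A-Z"),
        )
        output, err := pipe.Output(p)
        if err != nil {
                fmt.Println("error:", err)
                return
        }
        fmt.Printf("%s", output) // Prints: HELLO WORLD
}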

I hope you like it as well, and please drop me a note if you find any issues.

niemeyer

A small and fun experiment is out:

niemeyer

Lately I’ve been considering the amount of waste we produce during software development, and how to increase the amount of recycled content. I’m not talking about actual trash, though, but rather about software development artifacts.

Over the years, we’ve learned about and put in practice several means for improving the quality and success rate of projects we create or contribute to. We have practices such as sprints to get people together with high communication bandwidth; we have code reviews for sharing knowledge and improving project quality; we’ve got technical leadership roles to mentor developers and guide the progress of projects; we’ve created kanban boards and burndown charts to help people visualize what they’re going through; and so on.

While all of that seems to have helped tremendously, there’s a sad fact about where we stand: the artifacts of most of these processes are local to their context, and very sensitive to time. That burndown chart is meaningless after it’s burned, and a kanban has no relevant history. Our technical leads indeed guide their teams, but their wisdom stays with the few people that had the chance to interact with them, and subjectively so. That brilliant code review from our best developers has a very limited audience, and rarely carries any meaning just days after it has been accomplished.

That last one is especially interesting. Reviewing code is an intense and very expensive task that takes up a significant portion of the life of an active developer, and even then very little is carried forward as the outcome of that process. We have no effective means, or even a culture, of sharing the generated wisdom with other teams. In fact, we rarely share these details even within the team itself. Why was that line changed like this? Why is an interface like that a bad idea? Who will instruct the new hire next week, and where did we record a bit of the wisdom of the brilliant developer who left the company recently?

Unfortunately there’s probably no easy solution to this problem. At this point, I mainly recognize that most of the efforts I’ve led to improve software development over the past several years had a very limited scope. The software itself became immediately better as a result of my efforts, its design became more sensible, and hopefully I contributed a bit to the growth of the people around me, but at a company-wide or even community-wide scope, all of these code reviews, sprints, and IRC conversations are buried, and only very rarely revived.

I want to start doing something about this, though. There must be a way to shape these conversations in a more reusable format; in a way that knowledge and agreement can be more proactively preserved and scattered. Perhaps it’s more about how than it is about what. Perhaps we just need to write more posts like this, and cover more topics related to daily development findings. Not sure. I’ll be thinking…

niemeyer

I’m glad to announce experimental support for multi-document transactions in the mgo driver that integrates MongoDB with the Go language. The support is done via a driver extension, so it works with any MongoDB release supported by the driver (>= 1.8).

Features

Here is a quick highlight list to get your brain ticking before the details:

  • Supports sharding
  • Operations may span multiple collections
  • Handles changes, inserts and removes
  • Supports pre-conditions
  • Self-healing
  • No additional locks or leases
  • Works with existing data

Let’s see what these actually mean and how the goodness is done.


The problem being addressed

The typical example is a bank transaction: imagine you have two documents representing accounts for different people, and you want to transfer 100 bucks from Aram to Ben. Despite the apparent simplicity in that description, there are a number of edge cases that turn it into a non-trivial change.

Imagine an agent processing the change following these steps:

  1. Is Ben’s account valid?
  2. Take 100 bucks out of Aram’s account if its balance is above 100
  3. Insert 100 bucks into Ben’s account

Note that this description already assumes the availability of some single-document atomic operations as supported by MongoDB. Even then, how many race conditions and crash-related problems can you count? Here are some spoilers that hint at the problem complexity:

  • What if Ben cancels his account after (1)?
  • What if the agent crashes after (2)?

How it works

Thanks to the availability of single-document atomic operations, it is possible to craft a sequence of changes that manipulate documents in a way that supports multi-document transactional behavior. This works as long as the clients agree to use the same conventions.

This isn’t exactly news, though, and there’s even documentation describing how one can explore these ideas. The challenge is in crafting a generic mechanism that not only does the basics but goes beyond by supporting inserts and removes, being workload agnostic, behaving correctly on crashes (!), and yet remaining pleasant to use. That’s the territory being explored.

The implemented semantics offer an isolation level that allows non-repeatable reads to occur (a partially committed transaction is visible), but changes are guaranteed to become visible only in the order specified in the transaction, and once any change is applied the transaction is guaranteed to be applied completely, without intervening changes in the affected documents (no dirty reads). Among other things, this means one can use any existing mechanism at read time.
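For instance, a document managed by the transaction mechanism may still be read with the plain mgo API. A minimal sketch, assuming an accounts variable holding the *mgo.Collection from the bank example below:

// Reading documents affected by mgo/txn requires nothing special.
var account struct {
        Balance int
}
err := accounts.FindId("aram").One(&account)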

When writing documents that are affected by the transaction mechanism, one must necessarily use the API of the new mgo/txn package, which ended up surprisingly thin and straightforward. In other words for emphasis: if you modify fields that are affected by the transaction mechanism both with and without mgo/txn, it will misbehave arbitrarily. Fields that are read or written by mgo/txn must only be changed using mgo/txn.

Using the example described above, the bank account transfer might be done as:

// runner applies transactions, using tcollection to store their state.
runner := txn.NewRunner(tcollection)
// M stands for a plain document map, such as mgo's bson.M.
ops := []txn.Op{{
        C:      "accounts",
        Id:     "aram",
        Assert: M{"balance": M{"$gte": 100}}, // Pre-condition.
        Update: M{"$inc": M{"balance": -100}},
}, {
        C:      "accounts",
        Id:     "ben",
        Assert: M{"valid": true}, // Pre-condition.
        Update: M{"$inc": M{"balance": 100}},
}}
id := bson.NewObjectId() // Optional transaction id.
err := runner.Run(ops, id, nil)

The assert and update values are ordinary MongoDB query and update documents. The tcollection is a MongoDB collection that is used to atomically insert the transaction details into the database. As long as that document makes it into the database, the transaction is guaranteed to eventually be either entirely applied or entirely aborted. The exact moment when this happens depends on whether there are other transactions in progress and on whether, and when, a communication problem occurs, as described below.
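If any of the pre-conditions fail, the transaction is aborted without touching the documents. Here is a sketch of checking for that, assuming the package exposes the aborted condition as txn.ErrAborted:

err := runner.Run(ops, id, nil)
if err == txn.ErrAborted {
        // An Assert failed: the transaction was aborted and
        // none of the changes were applied.
}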

Concurrency and crash-proofness

Perhaps the most interesting piece of the puzzle when coming up with a nice transaction mechanism is defining what happens when an agent misbehaves, even more so in a world where there are multiple distributed transaction runners. If there are locks, someone must unlock them when a runner crashes, and must be able to tell the difference between a runner that is slow and one that has crashed. If there are leases, the lease boundary becomes an issue. In both cases, the speed of the overall system would become bounded by the speed of the slowest runner.

Instead of running into those issues, the implemented mechanism observes the transactions being attempted on the affected documents, orders them in a globally agreed way, and pushes all of their operations forward concurrently.

To illustrate the behavior, imagine again the described scenario of bank transfers:

Suppose there are two transactions being attempted, T1 and T2. The first is a transfer from Aram to Ben, and the second is a transfer from Ben to Carl. If a runner starts executing T2 while T1 is still being applied by a different runner, the first runner will pick T1 up and complete it before starting to work on T2, which is its real goal. This works even if the original runner of T1 died while the transaction was in progress. In reality, there’s little difference between the original runner of T1 and another runner that observes T1 in its way.

There’s a chance that T1’s runner died too soon, though, and hasn’t had a chance to even start the transaction by tagging Ben’s account document as participating in it. In that case, T2 will be pushed forward by its own runner independently, since there’s nothing in its way. T1 isn’t lost, though, and it may be resumed at any point by calling the runner’s Resume or ResumeAll methods.
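Resuming is a single call on the same runner. A brief sketch, reusing the runner value from the example above:

// ResumeAll finds and completes (or aborts) any transactions
// left in an unfinished state by other runners:
err := runner.ResumeAll()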

The whole logic is implemented without introducing any new globally shared point of coordination. It works if documents are in different collections or different shards, and it works even if the transaction collection itself is sharded across multiple backends for scalability purposes.

The testing approach

While a lot of thought was put into the way the mechanism works, this is of course non-trivial and bug-inviting logic. In an attempt to nail down bugs early on, a testing environment was put in place to simulate multiple runners under a conflicting workload. To make matters more realistic, this simulation happens in a harsh scenario, with faults and artificial slowdowns being randomly injected into the system. At the end, the result is evaluated to see whether the changes performed respected the established invariants.

While hundreds of thousands of transactions have been successfully run in this fashion, the package should still be considered experimental at this point, and its API is still prone to change.

There’s one race

There’s one known race worth mentioning, and it was consciously left in place for the moment as a tradeoff. The race shows itself when inserting a new document, at the point in time when the decision has been made that the insert was genuinely good. At that exact moment, if the runner is frozen for long enough to allow a different runner to insert the document and remove it again, and the original runner is then unfrozen without any errors or timeouts, it will naturally go on and insert the new document.

There are multiple solutions for this problem, but they present their own disadvantages. One solution would be to manipulate the document instead of removing it, but that would leave the collection with ghost content that has to be cared for, and that’s an unwanted side effect. A second solution would be to use the internal applyOps machinery that MongoDB uses in its sharding implementation, but that would mean that collections affected by transactions couldn’t be sharded, which is another unwanted side effect (please vote for SERVER-1439 so we can use it).

Have fun!

I hope the package serves you well, and if you would like to talk further about it, please join the mgo-users mailing list and drop a message.

niemeyer

Rob Pike just wrote an article/talk that is the best background yet on the origins of Go.

It surprises me how much his considerations match my world view pre-Go, and in a sense they give me a fulfilling explanation of why I got hooked on the language. I still recall sitting in a hotel years ago with Jamu Kakar while we went through the upcoming C++0x standard (now C++11) and got perplexed about how someone could think that having details such as rvalue references and move constructors in the language specification was something reasonable.

Rob also expressed again his initial surprise that developers using languages such as Python and Ruby were more often the ones willing to migrate towards Go, rather than those using C++, with some reasonable explanations of why that is so. While I agree with his considerations, I see Python going through the same kind of issue that caused C++ to be what it is today.

Consider this excerpt from PEP 0380 as evidence:

If yielding of values is the only concern, this can be performed without much difficulty using a loop such as

for v in g:
    yield v

However, if the subgenerator is to interact properly with the caller in the case of calls to send(), throw() and close(), things become considerably more difficult. As will be seen later, the necessary code is very complicated, and it is tricky to handle all the corner cases correctly.

A new syntax will be proposed to address this issue. In the simplest use cases, it will be equivalent to the above for-loop, but it will also handle the full range of generator behaviour, and allow generator code to be refactored in a simple and straightforward way.

This description has the same DNA that creates the C++ problem Rob talks about. Don’t get me wrong, I’m sure yield from will make a lot of people very happy, and that’s exactly the tricky part. It’s easy and satisfying to please a selection of users, but often that leads to isolated solutions that create new cognitive load and new corner cases that in turn lead to new requirements.

The history of generators in Python is especially telling:

  • PEP 0234 [30-Jan-2001] – Iterators – Accepted
  • PEP 0255 [18-May-2001] – Simple Generators – Accepted
  • PEP 0288 [21-Mar-2002] – Generators Attributes and Exceptions – Withdrawn
  • PEP 0289 [30-Jan-2002] – Generator Expressions – Accepted
  • PEP 0325 [25-Aug-2003] – Resource-Release Support for Generators – Rejected
  • PEP 0342 [10-May-2005] – Coroutines via Enhanced Generators – Accepted
  • PEP 0380 [13-Feb-2009] – Syntax for Delegating to a Subgenerator – Accepted

You see the rabbit hole getting deeper? I’ll clarify it further by rephrasing the previous quote from PEP 0380:

If [feature from PEP 0255] is the only concern, this can be performed without much difficulty using a loop [...] However, if the subgenerator is to interact properly with [changes from PEP 0342] things become considerably more difficult. [So we need feature from PEP 0380.]

Yet, while the language grows by handling self-inflicted micro-problems, the real issue remains unsolved. All of these features are simplistic forms of concurrency and communication that don’t satisfy developers, causing community fragmentation.

This happened to C++, to Python, and to many other languages. Go seems slightly special in that regard in the sense that its core development team has an outstanding respect for simplicity, yet dares to solve the difficult problems at their root, while keeping these solutions orthogonal so that they support each other. Less is more, and is not always straightforward.
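To make that contrast concrete, here is a rough sketch of mine (not from Rob’s article) of why the delegation problem that PEP 0380 addresses doesn’t even arise when concurrency and communication are first-class: a “subgenerator” is plain function composition over a shared channel.

package main

import "fmt"

// produce plays the role of a subgenerator: it simply writes to
// the channel it was handed, with no dedicated syntax involved.
func produce(ch chan<- int, n int) {
        for i := 0; i < n; i++ {
                ch <- i
        }
}

// produceAll delegates to produce twice; composing "generators"
// is ordinary function calling.
func produceAll(ch chan<- int) {
        produce(ch, 3)
        produce(ch, 5)
        close(ch)
}

func main() {
        ch := make(chan int)
        go produceAll(ch)
        for v := range ch {
                fmt.Println(v)
        }
}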

Gustavo Niemeyer

Back at the Ubuntu Platform Rally last week, I’ve pestered some of the Bazaar team with questions about co-location of branches in the same directory with Bazaar. The great news is that this seems to be really coming for the next release, with first-class integration of the feature in the command set. Unfortunately, though, it’s not quite yet ready for prime time, or even for I’m-crazy-and-want-this-feature time.

Some background on why this feature turns out to be quite important right now may be interesting, since life with Bazaar in the past years hasn’t really brought it up as a blocker. The cause for the new interest lies in some recent changes in the toolset of the Go language. The new go tool not only makes building and interacting with Go packages a breeze, but it also solves a class of problems that previously existed. For the go tool to work, though, it requires the use of $GOPATH consistently, and this means that the package has to live in a well defined directory. The traditional way Bazaar manages branches in their own directories then becomes a deal breaker.

So, last week I had the chance to exchange some ideas with Jelmer Vernooij and Vincent Ladeuil (both Bazaar hackers) on these problems, and they introduced me to the approach of using lightweight checkouts to work around some of the limitations. Lightweight checkouts in Bazaar make the working tree resemble the old-style VCS tools a little bit, with the working tree being bound to another location that actually holds the core content. The idea is great, and given how well lightweight checkouts work with Bazaar, building a full-fledged solution shouldn’t really be a lot of work.

After that conversation, I put a trivial hack together that would make bzr look like git from the outside, by wrapping the command line, and did a lightning talk demo. This got a few more people interested in the concept, which was enough motivation for me to move the idea forward into a working implementation. Now I just needed the time to do it, but it wasn’t too hard to find either.

I happen to be part of the unlucky group that too often takes more than 24 hours to get back home from these events. This is not entirely bad, though… I also happen to be part of the lucky group that can code while flying and riding buses as a means to relieve the boredom (reading helps too). This time around, cobzr became the implementation of choice, and given ~10 hours of coding, we have a very neat and over-engineered wrapper for the bzr command.

The core of the implementation is the same as the original hack: wrap bzr and call it from outside to restructure the tree. That said, rather than doing entirely lazy and hackish line parsing, it actually parses bzr’s --help output for commands to build a base of supported options, and parses the command line exactly like Bazaar itself would, validating options as it goes and distinguishing between flags with arguments and positional parameters. That enables the proxying to do much more interesting work on the intercepted arguments.

Here is a quick session that shows a branch being created with the tool. It should look fairly familiar to someone used to git:


[~]% bzr branch lp:juju
Branched 443 revisions.

[~]% cd juju
[~/juju]% bzr branch
* master

[~/juju]% bzr checkout -b new-feature
Shared repository with trees (format: 2a)
Location:
shared repository: .bzr/cobzr
Branched 443 revisions.
Branched 443 revisions.
Tree is up to date at revision 443.
Switched to branch: /home/niemeyer/juju/.bzr/cobzr/new-feature/

[~/juju]% bzr branch other-feature
Branched 443 revisions.

[~/juju]% bzr branch
  master
* new-feature
  other-feature

Note that cobzr will not reorganize the tree layout until multiple branch support is actually required.

Even though the wrapping takes place and bzr’s --help output is parsed, there’s pretty much no noticeable overhead, given the use of Go for the implementation and the fact that the processed output of --help is cached (I said it was over-engineered).

As an example, the first run below uses the real bzr, while the second goes through the link to cobzr:


[~/juju]% time /usr/bin/bzr status
/usr/bin/bzr status 0.24s user 0.03s system 88% cpu 0.304 total

[~/juju]% time bzr status
bzr status 0.19s user 0.08s system 88% cpu 0.307 total

This should be more than enough for surviving comfortably until bzr itself comes along with first class support for co-located branches in the next release.

In case you’re interested in using it or are just curious about the command set or other details, please check out the web page for the project:

Gustavo Niemeyer

A long time before I seriously got into using distributed version control systems (DVCS) such as Bazaar and Git for developing software, it was already well known to me how the mechanics of these systems worked, and why people benefited from them. That said, it wasn’t until I indeed started to use DVCS tools that I understood how much my daily workflow around code bases would be changed and improved.

This weekend, while flying home from MongoSV, I experienced that same feeling in relation to first-class concurrency support in programming languages. Everybody knows how the feature may be used, but I have the feeling that until one actually experiences it in practice, it’s very hard to really understand how much one’s relationship with the ordering of operations while developing software may be improved.

I was having some fun working on improvements to Goetveld. This package allows Go programs to communicate with Rietveld servers to manipulate code review entries. The Rietveld API is a bit rough in a few places, and as a result some features of the package actually parse an HTML form to extract some data before sending it back. You may have done something similar before while attempting to script a web site that wasn’t originally intended to be scripted.

The interesting fact here is that this is an intrinsically serial procedure: load a form, change it, and send it back, right? Well, not really. As one might intuitively expect, establishing an SSL session and its underlying TCP connection are not instantaneous operations.

To give an idea, here is part of a dump of an SSL connection being initiated (that is, no HTTP data was sent yet) to codereview.appspot.com, originated from my home location:

# tcpdump -ttttt -i wlan0 'host codereview.appspot.com and port 443'
(...)
00:00:00.000000 IP (...)
00:00:00.000063 IP (...)
00:00:00.000562 IP (...)
00:00:00.341627 IP (...)
00:00:00.357009 IP (...)
00:00:00.357118 IP (...)
00:00:00.360362 IP (...)
00:00:00.360550 IP (...)
00:00:00.366011 IP (...)
00:00:00.689446 IP (...)
00:00:00.727693 IP (...)

That’s more than half a second before the application layer was even touched. So, it turns out that to save that roundtrip time, we can start both the form-loading and the form-sending requests at the same time. By the time the form loading ends, processing the data locally is extremely fast, and we can complete the sending side by just providing the request body.

At this time you may be thinking something like “Ugh, that’s too much trouble.. why bother?”, and that highlights precisely the point I’d like to make: it is too much trouble because most people are used to languages that turn it into too much trouble, but the issue is not inherently complex. In fact, this is the entire implementation of this logic in Go:

func (r *Rietveld) UpdateIssue(issue *Issue) error {
        op := &opInfo{r: r, issue: issue}
        errs := make(chan error)
        ch := make(chan map[string]string, 1)
        go func() {
                // Load the form and hand it over through the channel.
                errs <- r.do(&editLoadHandler{op: op, form: ch})
                close(ch)
        }()
        go func() {
                // Send the edit, taking the form from the channel as
                // soon as the loading side delivers it.
                errs <- r.do(&editHandler{op: op, form: ch})
        }()
        return firstError(2, errs)
}
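
The firstError helper isn’t shown here. A minimal sketch of what it might look like, as an assumption rather than goetveld’s actual code:

// firstError waits for n results on errs and returns the first
// non-nil error observed, or nil if every operation succeeded.
func firstError(n int, errs <-chan error) error {
        var first error
        for i := 0; i < n; i++ {
                if err := <-errs; err != nil && first == nil {
                        first = err
                }
        }
        return first
}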

I'm not cheating. The procedure was being done serially before, with very similar logic. Previously it had to take the form variable itself from the first request and manually provide it to the next one. Now, instead of providing the form, it's providing a channel that will be used to send the form across. One might even argue that the channel makes the algorithm more natural, curiously.

This is the kind of procedure that becomes fun and natural to write, after having first class concurrency at hand for some time. But, as in the case of DVCS, it takes a while to get used to the idea that concurrency and simplicity are not necessarily at opposing ends.

niemeyer

Certainly one of the reasons why many people are attracted to the Go language is its first-class concurrency aspects. Features like communication channels, lightweight processes (goroutines), and proper scheduling of these are not only native to the language but are integrated in a tasteful manner.

If you stay around listening to community conversations for a few days there’s a good chance you’ll hear someone proudly mentioning the tenet:

Do not communicate by sharing memory; instead, share memory by communicating.

There is a blog post on the topic, and also a code walk covering it.

That model is very sensible, and being able to approach problems this way makes a significant difference when designing algorithms, but that’s not exactly news. What I address in this post is an open aspect we have today in Go related to this design: the termination of background activity.

As an example, let’s build a purposefully simplistic goroutine that sends lines across a channel:

type LineReader struct {
        Ch chan string
        r  *bufio.Reader
}

func NewLineReader(r io.Reader) *LineReader {
        lr := &LineReader{
                Ch: make(chan string),
                r:  bufio.NewReader(r),
        }
        go lr.loop()
        return lr
}

The type has a channel where the client can consume lines from, and an internal buffer used to produce the lines efficiently. Then, we have a function that creates an initialized reader, fires the reading loop, and returns. Nothing surprising there.

Now, let’s look at the loop itself:

func (lr *LineReader) loop() {
        for {
                line, err := lr.r.ReadSlice('\n')
                if err != nil {
                        close(lr.Ch)
                        return
                }
                lr.Ch <- string(line)
        }
}

In the loop we'll grab a line from the buffer and, in case of errors, close the channel and stop; otherwise we send the line to the other side, perhaps blocking while the other side is busy with other activities. This should sound sane and familiar to Go developers.

There are two details related to the termination of this logic, though: first, the error information is being dropped, and second, there's no way to interrupt the procedure from outside in a clean way. The error might easily be logged, of course, but what if we wanted to store it in a database, send it over the wire, or even handle it taking into account its nature? Stopping cleanly is also a valuable feature in many circumstances, like when one is driving the logic from a test runner.

I'm not claiming this is something difficult to do, by any means. What I'm saying is that there isn't today an idiom for handling these aspects in a simple and consistent way. Or maybe there wasn't. The tomb package for Go is an experiment I'm releasing today in an attempt to address this problem.

The model is simple: a Tomb tracks whether the goroutine is alive, dying, or dead, and the death reason.

To understand that model, let's see the concept being applied to the LineReader example. As a first step, creation is tweaked to introduce Tomb support:

type LineReader struct {
        Ch chan string
        r  *bufio.Reader
        t  tomb.Tomb
}

func NewLineReader(r io.Reader) *LineReader {
        lr := &LineReader{
                Ch: make(chan string),
                r:  bufio.NewReader(r),
        }
        go lr.loop()
        return lr
}

Looks very similar. Just a new field in the struct, and the function that creates it hasn't even been touched.

Next, the loop function is modified to support tracking of errors and interruptions:

func (lr *LineReader) loop() {
        defer lr.t.Done()
        for {
                line, err := lr.r.ReadSlice('\n')
                if err != nil {
                        close(lr.Ch)
                        lr.t.Kill(err)
                        return
                }
                select {
                case lr.Ch <- string(line):
                case <-lr.t.Dying():
                        close(lr.Ch)
                        return
                }
        }
}

Note a few interesting points here: first, Done is called to track the goroutine termination right before the loop function returns. Then, the previously loose error now goes into the Kill Tomb method, flagging the goroutine as dying. Finally, the channel send was tweaked so that it doesn't block in case the goroutine is dying for whatever reason.

A Tomb has both Dying and Dead channels returned by the respective methods, which are closed when the Tomb state changes accordingly. These channels enable explicit blocking until the state changes, and also to selectively unblock select statements in those cases, as done above.

With the loop modified as above, a Stop method can trivially be introduced to request the clean termination of the goroutine synchronously from outside:

func (lr *LineReader) Stop() error {
        lr.t.Kill(nil)
        return lr.t.Wait()
}

In this case the Kill method will put the tomb in a dying state from outside the running goroutine, and Wait will block until the goroutine terminates itself and notifies via the Done method as seen before. This procedure behaves correctly even if the goroutine was already dead or in a dying state due to internal errors, because only the first call to Kill with an actual error is recorded as the cause for the goroutine death. The nil value provided to t.Kill is used as a reason when terminating cleanly without an actual error, and it causes Wait to return nil once the goroutine terminates, flagging a clean stop per common Go idioms.
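
Putting it all together, client code might drive the LineReader like this. This is a hypothetical usage sketch building on the example above; it assumes the surrounding imports (strings, fmt, io, log) and that the end of input surfaces as io.EOF:

lr := NewLineReader(strings.NewReader("hello\nworld\n"))
for line := range lr.Ch {
        fmt.Print(line)
}
// The channel was closed, either due to a read error (io.EOF
// at the end of input) or a clean stop; Stop reports which.
if err := lr.Stop(); err != nil && err != io.EOF {
        log.Fatal(err)
}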

This is pretty much all that there is to it. When I started developing in Go I wondered if coming up with a good convention for this sort of problem would require more support from the language, such as some kind of goroutine state tracking in a similar way to what Erlang does with its lightweight processes, but it turns out this is mostly a matter of organizing the workflow with existing building blocks.

The tomb package and its Tomb type are a tangible representation of a good convention for goroutine termination, with familiar method names inspired by existing idioms. If you want to make use of it, go get the package with:

$ go get launchpad.net/tomb

The API documentation with details is available at:

http://gopkgdoc.appspot.com/pkg/launchpad.net/tomb

Have fun!

UPDATE 1: there was a minor simplification in the API since this post was originally written, and the post was changed accordingly.

UPDATE 2: there was a second simplification in the API since this post was originally written, and the post was changed accordingly once again to serve as reference.

Gustavo Niemeyer

About a year after development started on Ensemble, today the stars finally aligned just the right way (review queue mostly empty, no other pressing needs, etc.) for me to start writing the specification for the repository system we’ve been jointly planning for a long time. This is the system that the Ensemble client will communicate with for discovering which formulas are available, for publishing new formulas, for obtaining formula files for deployment, and so on.

We of course would have liked this part of the project to have been specified and written a while ago, but unfortunately that wasn’t possible for several reasons. That said, there’s also a good side to having an important piece flying around in minds and conversations for such a long time: sitting down to specify the system and describe the inner-working details has been a breeze. Even details such as the namespacing of formulas, which hadn’t been entirely clear in my mind, just streamed into the document as the ideas we’ve been evolving finally came together in written form.

One curious detail: this is the first long term project at Canonical that will be developed in Go, rather than Python or C/C++, which are the most used languages for projects within Canonical. Not only that, but we’ll also be using MongoDB for a change, rather than the traditional PostgreSQL, and will also use (you guessed) the mgo driver which I’ve been pushing entirely as a personal project for about 8 months now.

Naturally, with so many moving parts that are new to the company culture, this is still being seen as a closely watched experiment. Still, it makes me highly excited, because when I started developing mgo, the MongoDB driver for Go, my hopes that the Go, MongoDB, and mgo trio would eventually be used at Canonical were very low, precisely because they were all alien to the culture. We only got here after quite a lot of internal debate, experimentation, and trust.

All of that means these are happy times. An important feature in Ensemble being specified and written, very exciting tools, home-grown software being useful…

Awesomeness.

Gustavo Niemeyer

Back in 2009 I briefly talked about the obvious revolution in computing that was rolling in, in the form of the mobile phone as a computer, and mentioned as well that touch-based interfaces were going to dominate the marketplace because of it.

Move forward a couple of years, and last week I got my first tablet, running Android (a Samsung Galaxy Tab 10.1, if you’re curious). I didn’t know exactly why I needed one, but being in the tech industry I always have that nice excuse of buying things early on to learn about the experience of using them. Last night, I could clearly see that this can be a real claim in some cases (in others it’s just an excuse for the wife).

After getting the tablet last week, I started by experimenting with the usual stuff any person would (email, browser, etc.), and then downloaded a few games to take on board a longish flight. Some of them were pretty good: a vertical-scrolling shooter, a puzzle solver, and so on. With all of them, though, it took just a few minutes before the novelty of holding the screen in my hands to interact with the game got old, and the interest went away with it.

Last night, though, I decided to try another game from the top list, named Cut the Rope, and this time I was immediately hooked. That was certainly one of the most enjoyable gaming experiences I’ve had in quite a while, and when going to bed I started to ponder what was different there.

The game is obviously well executed, with cute drawings and sounds, and it runs smoothly, but I think there was something else as well. In retrospect, the other games felt a lot like ports of a desktop/laptop experience. The side-scrolling game, for instance, was quite well suited to a joystick, and at least one other game had an actual joystick emulated on the screen, which is an enabler, but far from nice, to be honest.

This one game, though, felt very well suited for a hands-based interaction: quickly drawing lines for cutting ropes, tapping on balloons to push air out, moving levers around, etc. In some more advanced levels, it was clear that my dexterity (or lack thereof) was playing a much more important role in accomplishing the tasks than the traditional button/joystick version of it. This felt like an entirely novel gaming experience that just hadn’t happened yet.

It’s funny and ironic that I had this experience within a week of Microsoft reportedly saying (again!) that a tablet is just another PC. It’s not, and if they tried one out with a minimum of attention they’d see why it so clearly isn’t.

In that experience, the joystick felt familiar but at the same time quite awkward to use, while using my hands naturally in an environment where that was suitable felt very pleasing. We can generalize that a bit and note a common way of relating to innovation: when facing a new concept, we first try to reuse the knowledge we already have, but once we understand the concept better we’re quite often able to come up with more effective and interesting ways to relate to it.

In the tablet vs. laptop/desktop thread, you probably won’t want to type long documents on a tablet, but would most likely prefer to shuffle items in an agenda with your fingers. Likewise, you wouldn’t want to do detailed CAD work with a fat finger on a screen, but would certainly be happy to review code or a document while sitting in your backyard with the birds (no whales).

So, let’s please put that hammer away for a second while creating a most enjoyable touch-based experience.
