Canonical Voices

Posts tagged with 'architecture'

niemeyer

Over the last several months there has been noticeable and growing pain associated with the evolving integration tests around snapd, and given the project goal of being a cross-distribution platform, we are very keen on solving this problem appropriately so that stability is guaranteed everywhere.

With that mindset a more focused effort was made over the last few weeks to produce a tool that can get the project out of those problems, and onto a runway of more pleasant stability. Despite the short amount of time, I’m very happy about the Spread project which resulted from this effort.

Spread is not Jenkins or Travis, and is not a language or library either. Spread is a tool that will very conveniently ship your code to one or more systems, in parallel, and then offer the right set of options so you can run whatever you need to run to make sure the logic is working, and drive it all from the local system. That implies you can run Spread inside Travis, Jenkins, or your terminal, in a similar way to how your unit tests work.

Here is a short list of interesting facts about Spread:

  • Full-system tests with on-demand machine allocation.
  • Multi-backend with Linode and LXD (for local runs) out of the box for now.
  • Multi-language since it can run arbitrary remote code.
  • Agent-less and driven via embedded ssh (kudos to Go team).
  • Convenient harness with project+backend+suite+test prepare and restore scripts.
  • Variants feature for test duplication without copy & paste.
  • Great debugging support – add -debug and stop with a shell inside every failure.
  • Reuse of servers – server allocation is fast, but not allocating is faster.
  • Reasonable test outputs with the shell’s -x trace mode on failures.
  • … and so forth.

This is all well documented, so I’ll just provide one example here to offer a real taste of what the system feels like.

This is spread.yaml, put in the project root to define the basics:

project: spread

backends:
    lxd:
        systems:
            - ubuntu-16.04
            - ubuntu-14.04

path: /home/test

prepare: |
    echo Entering project...
restore: |
    echo Leaving project...

suites:
    tests/: 
        summary: Integration tests
        prepare: |
            echo Entering suite...
        restore: |
            echo Leaving suite...

The suite name is also the path under which the tests are found.

Then, this is tests/hello/task.yaml:

summary: Greet the world
prepare: |
    echo "Entering task..."
restore: |
    echo "Leaving task..."
environment:
    FOO/a: one
    FOO/b: two
execute: |
    echo "Hello world!"
    [ $FOO = one ] || exit 1

The outcome should be almost obvious (intended feature :-). The one curious detail here is the FOO/a and FOO/b environment variables. This is how to introduce variants, which means this one test will in fact become two: first with FOO=one, and then with FOO=two. Now consider that such environment variables can be defined at any level – project, backend, suite, and task – and imagine how easy it is to test small variations without any copy & paste. After cascading takes place (project→backend→suite→task) all environment variables using a given variant key will be present at once on the same execution.
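
For instance, a suite-level variable keyed on the same variants combines with the task-level ones above. The sketch below is hypothetical (there is no DIR variable in the real example):

suites:
    tests/:
        summary: Integration tests
        environment:
            DIR/a: /tmp/dir-a
            DIR/b: /tmp/dir-b

With that in place, variant a of the task runs with FOO=one and DIR=/tmp/dir-a at once, while variant b runs with FOO=two and DIR=/tmp/dir-b.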

Now let’s try to run this configuration, including the -debug flag so we get a shell on the failures. Note how with a single test we get four different jobs, two variants over two systems, with the variant b failing as instructed:

$ spread -debug

2016/06/11 19:09:27 Allocating lxd:ubuntu-14.04...
2016/06/11 19:09:27 Allocating lxd:ubuntu-16.04...
2016/06/11 19:09:41 Waiting for LXD container to have an address...
2016/06/11 19:09:43 Waiting for LXD container to have an address...
2016/06/11 19:09:44 Allocated lxd:ubuntu-14.04.
2016/06/11 19:09:44 Connecting to lxd:ubuntu-14.04...
2016/06/11 19:09:48 Allocated lxd:ubuntu-16.04.
2016/06/11 19:09:48 Connecting to lxd:ubuntu-16.04...
2016/06/11 19:09:52 Connected to lxd:ubuntu-14.04.
2016/06/11 19:09:52 Sending project data to lxd:ubuntu-14.04...
2016/06/11 19:09:53 Connected to lxd:ubuntu-16.04.
2016/06/11 19:09:53 Sending project data to lxd:ubuntu-16.04...

2016/06/11 19:09:54 Error executing lxd:ubuntu-14.04:tests/hello:b :
-----
+ echo Hello world!
Hello world!
+ [ two = one ]
+ exit 1
-----

2016/06/11 19:09:54 Starting shell to debug...

lxd:ubuntu-14.04 ~/tests/hello# echo $FOO
two
lxd:ubuntu-14.04 ~/tests/hello# cat /etc/os-release | grep ^PRETTY
PRETTY_NAME="Ubuntu 14.04.4 LTS"
lxd:ubuntu-14.04 ~/tests/hello# exit
exit

2016/06/11 19:09:55 Error executing lxd:ubuntu-16.04:tests/hello:b :
-----
+ echo Hello world!
Hello world!
+ [ two = one ]
+ exit 1
-----

2016/06/11 19:09:55 Starting shell to debug...

lxd:ubuntu-16.04 ~/tests/hello# echo $FOO
two
lxd:ubuntu-16.04 ~/tests/hello# cat /etc/os-release | grep ^PRETTY
PRETTY_NAME="Ubuntu 16.04 LTS"
lxd:ubuntu-16.04 ~/tests/hello# exit
exit


2016/06/11 19:10:33 Discarding lxd:ubuntu-14.04 (spread-129)...
2016/06/11 19:11:04 Discarding lxd:ubuntu-16.04 (spread-130)...
2016/06/11 19:11:05 Successful tasks
2016/06/11 19:11:05 Aborted tasks: 0
2016/06/11 19:11:05 Failed tasks: 2
    - lxd:ubuntu-14.04:tests/hello:b
    - lxd:ubuntu-16.04:tests/hello:b
error: unsuccessful run

This demonstrates many of the stated goals (parallelism, clarity, convenience, debugging, …) while running on a local system. Running on a remote system is just as easy by using an appropriate backend. The snapd project on GitHub, for example, is hooked up on Travis to run Spread and then ship its tests over to Linode. Here is a real run output with the initial tests being ported, and a basic smoke test.

If you like what you see, by all means please go ahead and make good use of it.

We’re all for more stability and sanity everywhere.

@gniemeyer

niemeyer

As widely anticipated, this week Ubuntu 16.04 LTS was released with integrated support for snaps on classic Ubuntu.

Snappy 2.0 is a modern software platform that includes the ability to define rich interfaces between snaps that control their security and confinement, comprehensive observation and control of system changes, completion and undoing of partial system changes across restarts/reboots/crashes, macaroon-based authentication for local access and store access, a preliminary development mode, a polished filesystem layout and CLI experience, modern sequencing of revisions, and so forth.

The previous post in this series described the reassuring details behind how snappy does system changes. This post will now cover Snappy interfaces, the mechanism that controls the confinement and integration of snaps with other snaps and with the system itself.

A snap interface gives one snap the ability to use resources provided by another snap, including the operating system snap (ubuntu-core is itself a snap!). That’s quite vague, and intentionally so. Software interacts with other software for many reasons and in diverse ways, and Snappy is a platform that has to mediate all of that according to user needs.

In practice, though, the mechanism is straightforward and pleasant to deal with. Without any snaps in the system, there are no interfaces available:

% sudo snap interfaces
error: no interfaces found

If we install the ubuntu-core snap alone (done implicitly when the first snap is installed), we can already see some interface slots being provided by it, but no plugs connected to them:

% sudo snap install ubuntu-core
75.88 MB / 75.88 MB [=====================] 100.00 % 355.56 KB/s 

% snap interfaces
Slot                 Plug
:firewall-control    -
:home                -
:locale-control      -
(...)
:opengl              -
:timeserver-control  -
:timezone-control    -
:unity7              -
:x11                 -

The syntax is <snap>:<slot> and <snap>:<plug>. The lack of a snap name is a shorthand notation for slots and plugs on the operating system snap.

Now let’s install an application:

% sudo snap install ubuntu-calculator-app
120.01 MB / 120.01 MB [=====================] 100.00 % 328.88 KB/s 

% snap interfaces
Slot                 Plug
:firewall-control    -
:home                -
:locale-control      -
(...)
:opengl              ubuntu-calculator-app
:timeserver-control  -
:timezone-control    -
:unity7              ubuntu-calculator-app
:x11                 -

At this point the application should work fine. But let’s instead see what happens if we take away one of these interfaces:

% sudo snap disconnect \
             ubuntu-calculator-app:unity7 ubuntu-core:unity7 

% /snap/bin/ubuntu-calculator-app.calculator
QXcbConnection: Could not connect to display :0

The installed application depends on unity7, which is itself based on X11, to display itself properly. When we disconnected the interface that gave it permission to access those resources, the application became unable to touch them.
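
To undo the experiment, the plug may be wired back into the slot with the inverse command:

% sudo snap connect \
             ubuntu-calculator-app:unity7 ubuntu-core:unity7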

The security minded will observe that X11 is not in fact a secure protocol. A number of system abuses are possible when we hand an application this permission. Other interfaces such as home would give the snap access to every non-hidden file in the user’s $HOME directory (those that do not start with a dot), which means a malicious application might steal personal information and send it over the network (assuming it also defines a network plug).

Some might be surprised that this is the case, but this is a misunderstanding about the role of snaps and Snappy as a software platform. When you install software from the Ubuntu archive, that’s a statement of trust in the Ubuntu and Debian developers. When you install Google’s Chrome or MongoDB binaries from their respective archives, that’s a statement of trust in those developers (these have root on your system!). Snappy is not eliminating the need for that trust, as once you give a piece of software access to your personal files, web camera, microphone, etc, you need to believe that it won’t be using those allowances maliciously.

The point of Snappy’s confinement in that picture is to enable a software ecosystem that can control exactly what is allowed and to whom in a clear and observable way, in addition to the same procedural care that we’ve all learned to appreciate in the Linux world, not instead of it. Preventing people from using all relevant resources in the system would simply force them to use that same software over less secure mechanisms instead of fixing the problem.

And what we have today is just the beginning. These interfaces will soon become much richer and more fine grained, including resource selection (e.g. which serial port?), and some of them will disappear completely in favor of more secure choices (Unity 8, for instance).

These are exciting times for Ubuntu and the software world.

@gniemeyer

niemeyer

As announced last Saturday, Snappy Ubuntu Core 2.0 has just been tagged and made its way into the archives of Ubuntu 16.04, which is due for final release in the coming days. So this is a nice time to start covering interesting aspects of what is being made available in this release.

A good choice for the first post in this series is talking about how snappy performs changes in the system, as that knowledge will be useful in observing and understanding what is going on in your snappy platform. Let’s start with the first operation you will likely do when first interacting with the snappy platform — install:

% sudo snap install ubuntu-calculator-app
120.01 MB / 120.01 MB [===============================] 100.00 % 1.45 MB/s

This operation is traditionally done on analogous systems in an ephemeral way. That is, the software has either a local or a remote database of options to install, and once the change is requested the platform of choice will start acting on it with all state for the modification kept in memory. If something doesn’t go so well, such as a reboot or even a crash, the modification is lost… in the best case. Besides being completely lost, it might also be partially applied to the system, with some files spread through the filesystem, and perhaps some of the involved hooks run. After the restart, the partial state remains until some manual action is taken.

Snappy instead has an engine that tracks and controls such changes in a persistent manner. All the recent changes, pending or not, may be observed via the API and the command line:

% snap changes
ID   Status  ...  Summary
1    Done    ...  Install "ubuntu-calculator-app" snap

(the spawn and ready date/time columns have been hidden for space)

The output gives an overview of what happened recently in the system, whether pending or not. If one of these changes is unexpectedly interrupted for whatever reason, the daemon will attempt to continue the requested change at the next opportunity.

Continuing is not always possible, though, because there are external factors that such a change will generally depend upon (the snap being available, the system state remaining similar, etc). In those cases, the change will fail, and any relevant modifications performed on the system while attempting to accomplish the defined goal will be undone.

Because such partial states are possible and need to be handled properly by the system, changes are in fact broken down into finer-grained tasks which are also tracked and observable while in progress or after completion. Using the change ID obtained in the former command, we can get a better picture of what that change involved:

% snap changes 1
Status ...  Summary
Done   ...  Download snap "ubuntu-core" from channel "stable"
Done   ...  Mount snap "ubuntu-core"
Done   ...  Copy snap "ubuntu-core" data
Done   ...  Setup snap "ubuntu-core" security profiles
Done   ...  Make snap "ubuntu-core" available
Done   ...  Download snap "ubuntu-calculator-app"
Done   ...  Mount snap "ubuntu-calculator-app"
Done   ...  Copy snap "ubuntu-calculator-app" data
Done   ...  Setup snap "ubuntu-calculator-app" security profiles
Done   ...  Make snap "ubuntu-calculator-app" available

(the spawn and ready date/time columns have been hidden for space)

Here we can observe an interesting implementation detail of the snappy integration into Ubuntu: the ubuntu-core snap is at the moment ~80MB, and contains the software bundled with the snappy platform itself. Instead of having it pre-installed, it’s only pulled in when the first snap is installed.

Another interesting implementation detail that surfaces here is that snaps are mounted rather than copied into the system, as traditional packaging systems do, and they’re mounted read-only. That means the operation of having the content of a snap in the filesystem is instantaneous and atomic, and so is removing it. There are no partial states for that specific aspect, and the content cannot be modified.

Coming back to the task list, we can see above that all the tasks the change involved are ready and succeeded, as expected from the earlier output we had seen for the change itself. Being more of an introspection view, though, this task view will often also show logs and error messages for the individual tasks, whether in progress or not.

The following view presents a similar change but with an error due to an intentionally corrupted system state that snappy could not recover from (path got a busy mountpoint hacked in):

% sudo snap install xkcd-webserver
[\] Make snap "xkcd-webserver" available to the system
error: cannot perform the following tasks:
- Make snap "xkcd-webserver" available to the system
  (symlink 13 /snap/xkcd-webserver/current: file exists)

% sudo snap changes 2
Status  ...  Summary
Undone  ...  Download snap "xkcd-webserver" from channel "stable"
Undone  ...  Mount snap "xkcd-webserver"
Undone  ...  Copy snap "xkcd-webserver" data
Undone  ...  Setup snap "xkcd-webserver" security profiles
Error   ...  Make snap "xkcd-webserver" available to the system

.................................................................
Make snap "xkcd-webserver" available to the system

2016-04-20T14:14:30-03:00 ERROR symlink 13
    /snap/xkcd-webserver/current: file exists

Note how reassuring that report looks. It says exactly what went wrong, at which stage of the process, and it also points out that all the prior tasks that previously succeeded had their modifications undone. The security profiles were removed, the mount point was taken down, and so on.

This sort of behavior is to be expected of modern operating systems, and is fundamental when considering systems that should work unattended. Whether in a single execution or across restarts and reboots, changes either succeed or they don’t, and the system remains consistent, reliable, observable, and secure.

In the next blog post we’ll see details about the interfaces feature in snappy, which controls aspects of confinement and integration between snaps.

@gniemeyer

niemeyer

Ubuntu and Snappy community, it’s time to celebrate!

After another intense week and a long Saturday focused on observing and fine-tuning the user experience, the development team is proud to announce that Snappy 2.0 has been tagged. As recently announced, this release of Snappy Ubuntu Core will be available inside Ubuntu proper, extending it with new capabilities in a seamless manner.

This is an important moment for the project, as it materializes most of the agreements that were made over the past year, and does so with the promise of stability. So you may trust that the important external APIs of the project (filesystem layout, snap format, REST API, etc) will not change from now on.

The features that went into this release are way too rich for me to describe in this post, but you may expect us to be covering the many interesting aspects of Snappy 2.0 in the coming weeks. Rich interfaces between snaps that control security and confinement, comprehensive observation and control of system changes, completion and undoing of partial system changes across restarts/reboots/crashes, macaroon-based authentication for local access and store access, preliminary development mode, a polished filesystem layout and CLI experience, modern sequencing of revisions, and so forth.

Still, the most remarkable aspect about this release to me is that it is a solid foundation. This release exports APIs and is constructed in a way to be proud of, and together with this team I will be delighted to spend the foreseeable future building a platform the world has never seen.

As a final note, I can’t thank the development team enough for the dedication they have put into the project over the past year, and especially over these last two weeks. You were the make it or break it of this project, and you made it.

Thank you!

@gniemeyer

niemeyer

The qml package is right now one of the best choices for creating graphic applications under the Go language. Part of the reason why this is true comes from the convenience of QML, a high-level domain-specific language that allows describing visual components, events, animations, and content in general in a succinct and pleasing way. The integration of such a language with Go means having both a good mechanism for describing visual content and a good platform for general development, which can range from simple data manipulation to involved OpenGL content rendering.

On the practical side, one of the implications of using such a language partnership is that every Go qml application will have some sort of resource content to deal with, carrying the QML logic. Such content may be loaded either from files on disk, or from strings in memory. Loading from a file means the content may be organized in multiple files that directly reference each other without changing the Go application, and may be updated and tested without rebuilding. Loading from a string in memory means the content needs to be self-contained, but results in a standalone binary (linking aside – still depends on Qt libraries).

There’s a well-known trick to have both benefits at once, though, and the basic solution has already been made available in multiple packages: keep the content on disk, and use an external tool to pack it up into a Go file that is built into the binary whenever the content is updated. Unfortunately, this trick alone is not enough with the qml package, because the QML engine needs to know what resources are available and where, so that the right thing happens when it sees a directory being imported or an image path being referenced.

To solve that problem, the qml package has been enhanced with functionality that leverages the existing Qt resource system to pack content into the binary itself. Rather than using the upstream C++ and XML-based resource compiler, though, a new resource packer was implemented inside the qml package and made available both under a friendly Go API, and as a tool that follows common Go idioms.

The help text for the genqrc tool describes it in detail:

Usage: genqrc [options] <subdir1> [<subdir2> ...]

The genqrc tool packs all resource files under the provided subdirectories into
a single qrc.go file that may be built into the generated binary. Bundled files
may then be loaded by Go or QML code under the URL "qrc:///some/path", where
"some/path" matches the original path for the resource file locally.

Starting with Go 1.4, this tool may be conveniently run by the "go generate"
subcommand by adding a line similar to the following one to any existent .go
file in the project (assuming the subdirectories ./code/ and ./images/ exist):

    //go:generate genqrc code images

Then, just run "go generate" to update the qrc.go file.

During development, the generated qrc.go can repack the filesystem content at
runtime to avoid the process of regenerating the qrc.go file and rebuilding the
application to test every minor change made. Runtime repacking is enabled by
setting the QRC_REPACK environment variable to 1:

    export QRC_REPACK=1

This does not update the static content in the qrc.go file, though, so after
the changes are performed, genqrc must be run again to update the content that
will ship with built binaries.

The tool may be installed via go get as usual:

go get gopkg.in/qml.v1/cmd/genqrc

and once the qrc.go file has been generated, the main qml file may be
loaded with logic equivalent to:

component, err := engine.LoadFile("qrc:///path/to/file.qml")

The loaded file can in turn reference any other content that was bundled
into the Go binary.
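
To tie these pieces together, a minimal main function might look as follows. This is only a sketch based on the usual qml.v1 entry points, and the qrc path is hypothetical:

package main

import (
        "fmt"
        "os"

        "gopkg.in/qml.v1"
)

func main() {
        if err := qml.Run(run); err != nil {
                fmt.Fprintf(os.Stderr, "error: %v\n", err)
                os.Exit(1)
        }
}

func run() error {
        engine := qml.NewEngine()
        // The qrc:/// URL resolves into the content bundled by genqrc.
        component, err := engine.LoadFile("qrc:///code/main.qml")
        if err != nil {
                return err
        }
        win := component.CreateWindow(nil)
        win.Show()
        win.Wait()
        return nil
}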

For a better picture, this example demonstrates the use of the tool.

niemeyer

There were a few long-standing issues in the yaml.v1 package which were being postponed so that they could be done at once in a single incompatible change, and the time has come: yaml.v2 is now available.

Besides these incompatible changes, other compatible fixes and improvements were performed in that push, and those were also applied to the existing yaml.v1 package so that dependent applications benefit immediately and without modifications.

The subtopics below outline exactly what changed, and how to adapt existing code when necessary.

Type errors

With yaml.v1, decoding a YAML value that is not compatible with the Go value being unmarshaled into will silently drop the offending value. In many cases continuing with degraded behavior by ignoring the problem is intended, but this was the one and only option.

In yaml.v2, these problems will cause a *yaml.TypeError to be returned, containing helpful information about what happened. For example:

yaml: unmarshal errors:
  line 3: cannot unmarshal !!str `foo` into int

On such errors the decoding process still continues until the end of the YAML document, so ignoring the TypeError will produce logic equivalent to the old yaml.v1 behavior.
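
In practice, callers may choose between strict and lenient behavior with a simple type check. A minimal sketch (the config type is illustrative):

var config struct{ Port int }
err := yaml.Unmarshal(data, &config)
if typeErr, ok := err.(*yaml.TypeError); ok {
        // Fields that decoded properly are set; typeErr.Errors
        // describes the values that were dropped.
        log.Println("ignoring type errors:", typeErr.Errors)
} else if err != nil {
        return err
}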

New Marshaler and Unmarshaler interfaces

The way that yaml.v1 allowed custom types to implement marshaling and unmarshaling of YAML content was slightly confusing and troublesome. For example, considering a CustomType with a keys field:

type CustomType struct {
        keys map[string]int
}

and supposing the goal is to unmarshal this YAML map into it:

custom:
    a: 1
    b: 2
    c: 3

With yaml.v1, one would need to implement logic similar to the following for that:

func (v *CustomType) SetYAML(tag string, value interface{}) bool {
        if tag == "!!map" {
                m := value.(map[interface{}]interface{})
                // ... iterate/validate/convert key/value pairs 
        }
        return goodValue
}

This is too much trouble when the package can easily do those conversions internally already. To fix that, in yaml.v2 the Getter and Setter interfaces are both gone and were replaced by the Marshaler and Unmarshaler interfaces.

Using the new mechanism, the example above would be implemented as follows:

func (v *CustomType) UnmarshalYAML(unmarshal func(interface{}) error) error {
        return unmarshal(&v.keys)
}
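
The marshaling counterpart is just as direct. For example, to serialize the same field back out:

func (v *CustomType) MarshalYAML() (interface{}, error) {
        return v.keys, nil
}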

Custom-ordered maps

By default both yaml.v1 and yaml.v2 will marshal keys in a stable order which is increasing within the same type and arbitrarily defined across types. So marshaling is already performed in a sensible order, but it cannot be customized in yaml.v1, and there’s also no way to tell which order the map was originally in, as some applications require.

To fix that, there is a new pair of types that support preserving the order of map keys both when marshaling and unmarshaling: MapSlice and MapItem.

Such an ordered map literal would look like:

m := yaml.MapSlice{{"c", 3}, {"b", 2}, {"a", 1}}

The MapSlice type may be used for variables going in and out of the yaml package, or in struct fields, map values, or anywhere else sensible.
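
Marshaling such a value preserves the order given, so this sketch would print the keys in exactly the c, b, a sequence defined above:

data, err := yaml.Marshal(m)
if err != nil {
        panic(err)
}
fmt.Print(string(data))
// Output:
// c: 3
// b: 2
// a: 1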

Binary values

Strings in YAML must be valid UTF-8 or UTF-16 (with a byte order mark for the latter), and for binary data the specification defines a standard !!binary tag which represents the raw data encoded as base64. This is now supported both in yaml.v1 and yaml.v2, transparently. That is, any string value that is not valid UTF-8 will be base64-encoded and appropriately tagged so that it roundtrips as the same string. Short strings are inlined, while long ones are automatically broken into several lines and represented in a proper style. For example:

one: !!binary gICA
two: !!binary |
  gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgI
  CAgICAgICAgICAgICAgICAgICAgICAgICAgICA

Multi-line strings

Any string that contains new-line characters (‘\n’) will now be encoded using the literal style. For example, a value that would before be encoded as:

key: "line 1\nline 2\nline 3\n"

is now encoded by both yaml.v1 and yaml.v2 as:

key: |
  line 1
  line 2
  line 3

Other improvements

Besides these major changes, some assorted minor improvements were also performed:

  • Better handling of top-level “null”s (issue #35)
  • Marshal base 60 floats quoted for YAML 1.1 compatibility (issue #34)
  • Better error on invalid map keys (issue #25)
  • Allow non-ASCII characters in plain strings (issue #11)
  • Do not catch unrelated panics by mistake (commit a6dc653f)

For obtaining the yaml.v1 improvements:

go get -u gopkg.in/yaml.v1

For updating to yaml.v2, adapt the code as necessary given the points above, replace the import path, and run:

go get -u gopkg.in/yaml.v2

niemeyer

As detailed in the preliminary release of qml.v1 for Go a couple of weeks ago, my next task was to finish the improvements in its OpenGL API. Good progress has happened since then, and the new API is mostly done and available for experimentation. At the same time, there’s still work to do on polishing edges and on documenting the extensive API. This blog post aims to present the improvements made, their internal design, and also to invite help for finishing the pending details.

Before diving into the new, let’s first have a quick look at how a Go application using OpenGL might look with qml.v0. This is an excerpt from the old painting example:

import (
        "gopkg.in/qml.v0"
        "gopkg.in/qml.v0/gl"
)

func (r *GoRect) Paint(p *qml.Painter) {
        width := gl.Float(r.Int("width"))
        height := gl.Float(r.Int("height"))
        gl.Enable(gl.BLEND)
        // ...
}

The example imports both the qml and the gl packages, and then defines a Paint method that makes use of the GL types, functions, and constants from the gl package. It looks quite reasonable, but there are a few relevant shortcomings.

One major issue in the current API is that it offers no means to tell even at a basic level what version of the OpenGL API is being coded against, because the available functions and constants are the complete set extracted from the gl.h header. For example, OpenGL 2.0 has GL_ALPHA and GL_ALPHA4/8/12/16 constants, but OpenGL ES 2.0 has only GL_ALPHA. This simplistic choice was a good start, but comes with a number of undesired side effects:

  • Many trivial errors that should be compile errors fail at runtime instead
  • When the code does work, the developer is not sure about which API version it is targeting
  • Symbols for unsupported API versions may not be available for linking, even if unused

That last point also provides a hint of another general issue: portability. Every system has particularities for how to load the proper OpenGL API entry points. For example, which libraries should be linked with, where they are in the local system, which entry points they support, etc.

So this is the stage for the improvements that are happening. Before detailing the solution, let’s have a look at the new painting example in qml.v1, that makes use of the improved API:

import (
        "gopkg.in/qml.v1"
        "gopkg.in/qml.v1/gl/2.0"
)

func (r *GoRect) Paint(p *qml.Painter) {
        gl := GL.API(p)
        width := float32(r.Int("width"))
        height := float32(r.Int("height"))
        gl.Enable(GL.BLEND)
        // ...
}

With the new API, rather than importing a generic gl package, a version-specific gl/2.0 package is imported under the name GL. That choice of package name allows preserving familiar OpenGL terms for both the functions and the constants (gl.Enable and GL.BLEND, for example). Inside the new Paint method, the gl value obtained from GL.API holds only the functions that are defined for the specific OpenGL API version imported, and the constants in the GL package are also constrained to those available in the given version. Any improper references become build time errors.

To support all the various OpenGL versions and profiles, there are 23 independent packages right now. These packages are of course not being hand-built. Instead, they are generated all at once by a tool that gathers information from various sources. The process can be tersely described as:

  1. A ragel-based parser processes Qt’s qopenglfunctions_*.h header files to collect version-specific functions
  2. The Khronos OpenGL Registry XML is parsed to collect version-specific constants
  3. A number of tweaks defined in the tool’s code are applied to the state
  4. Packages are generated by feeding the state to text templates

Version-specific functions might also be extracted from the Khronos Registry, but there’s a good reason to use information from the Qt headers instead: Qt already solved the portability issue. It works in several platforms, and if somebody is using QML successfully, it means Qt is already using that system’s OpenGL capabilities. So rather than designing a new mechanism to solve the same problem, the qml package now leverages Qt for resolving all the GL function entry points and the linking against available libraries.

Going back to the example, it also demonstrates another improvement that comes with the new API: plain types that do not carry further meaning such as gl.Float and gl.Int were replaced by their native counterparts, float32 and int32. Richer types such as Enum were preserved, and as suggested by David Crawshaw some new types were also introduced to represent entities such as programs, shaders, and buffers. The custom types are all available under the common gl/glbase package that all version-specific packages make use of.
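
In practice those richer types read naturally at the call sites. Here is a sketch of how a program might be assembled, assuming a vertexShaderSource string, the usual function names in the version-specific package, and the tweaked ShaderSource signature described below:

gl := GL.API(p)
program := gl.CreateProgram()               // glbase.Program, not a bare uint32
shader := gl.CreateShader(GL.VERTEX_SHADER) // glbase.Shader
gl.ShaderSource(shader, vertexShaderSource)
gl.CompileShader(shader)
gl.AttachShader(program, shader)
gl.LinkProgram(program)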

So this is all working and available for experimentation right now. What is left to do is almost exclusively improving the list of function tweaks with two goals in mind, which will be highlighted below as those are areas where help would be appreciated, mainly due to the footprint of the API.

Documentation importing

There are a few hundred functions to document, but a large number of these are variations of the same function. The previous approach was to simply link to the upstream documentation, but it would be much better to have polished documentation attached to the functions themselves. This is the new documentation for MultMatrixd, for example. For now the documentation is being imported manually, but the final process will likely consist of some automation and some manual polishing.

Function polishing

The standard C OpenGL API can often be translated automatically (see BindBuffer or BlendColor), but in other cases the function prototype has to be tweaked to become friendly to Go. The translation tool already has good support for defining most of these tweaks independently from the rest of the machinery. For example, the following logic changes the ShaderSource function from its standard form into something convenient in Go:

name: "ShaderSource",
params: paramTweaks{
        "glstring": {rename: "source", retype: "...string"},
        "length":   {remove: true},
        "count":    {remove: true},
},
before: `
        count := len(source)
        length := make([]int32, count)
        glstring := make([]unsafe.Pointer, count)
        for i, src := range source {
                length[i] = int32(len(src))
                if len(src) > 0 {
                        glstring[i] = *(*unsafe.Pointer)(unsafe.Pointer(&src))
                } else {
                        glstring[i] = unsafe.Pointer(uintptr(0))
                }
        }
`,

Other cases may be much simpler. The MultMatrixd tweak, for instance, simply ensures that the parameter has the proper length, and injects the documentation:

name: "MultMatrixd",
before: `
        if len(m) != 16 {
                panic("parameter m must have length 16 for the 4x4 matrix")
        }
`,
doc: `
        multiplies the current matrix with the provided matrix.
        ...
`,

and as an even simpler example, CreateProgram is tweaked so that it returns a glbase.Program instead of the default uint32.

name:   "CreateProgram",
result: "glbase.Program",

That kind of polishing is where contributions would be most appreciated right now. One valid way of doing this is picking a range of functions and importing and polishing their documentation manually, and while doing that keeping an eye on required tweaks that should be performed on the function based on its documentation and prototype.

If you’d like to help somehow, or just ask questions or report your experience with the new API, please join us in the project mailing list.

niemeyer

As part of the ongoing work on Ubuntu Touch phones, I was invited to contribute a Go package to interface with ubuntuoneauth, a C++ and Qt library that authenticates against Ubuntu One using the system account made available by the phone owner. The details of that library and its use case are not interesting for most people right now, but the work to interface with it is a good example to talk about because, besides the result (uoneauth) being an external and independent Go package that extends the qml package, ubuntuoneauth is not a QML library, but rather a plain Qt library. Some of the callbacks even use traditional C++ types that do not inherit from QObject and have no Qt metadata, so offering that functionality from Go nicely takes a bit more work.

What follows are some of the highlights of that integration logic, to serve as a reference for similar extensions in the future. Note that if your interest is in creating QML applications with Go, none of this is necessary and the documentation is a better place to start.

As an initial step, the following examples demonstrate how the original C++ library and the Go package being designed are meant to be used. The first snippet contains the relevant logic taken out of the examples/signing-main.cpp file, tweaked for clarity:

int main(int argc, char *argv[]) {
    (...)
    UbuntuOne::SigningExample *example = new UbuntuOne::SigningExample(&a);
    QTimer::singleShot(0, example, SLOT(doExample()));
    (...)
}

SigningExample::SigningExample(QObject *parent) : QObject(parent) {
    QObject::connect(&service, SIGNAL(credentialsFound(const Token&)),
                     this, SLOT(handleCredentialsFound(Token)));
    QObject::connect(&service, SIGNAL(credentialsNotFound()),
                     this, SLOT(handleCredentialsNotFound()));
    (...)
}

void SigningExample::doExample() {
    service.getCredentials();
}

void SigningExample::handleCredentialsFound(Token token) {
    QString authHeader = token.signUrl(url, QStringLiteral("GET"));
    (...)
}
 
void SigningExample::handleCredentialsNotFound() {
    qDebug() << "No credentials were found.";
}

The example hooks into various signals in the service, one for each possible outcome, and then calls the service’s getCredentials method to initiate the process. If successful, the credentialsFound signal is emitted with a Token value that is able to sign URLs, returning an HTTP header that can authenticate a request.

That same process is more straightforward when using the library from Go:

service := uoneauth.NewService(engine)
token, err := service.Token()
if err != nil {
        return err
}
signature := token.HeaderSignature("GET", url)

Again, this gets a service, a token from it, and signs a URL, in a “forward” way.

So the goal is turning the initial C++ workflow into this simpler Go API. A good next step is looking into how the NewService function is implemented:

func NewService(engine *qml.Engine) *Service {
        s := &Service{reply: make(chan reply, 1)}

        qml.RunMain(func() {
                s.obj = *qml.CommonOf(C.newSSOService(), engine)
        })

        runtime.SetFinalizer(s, (*Service).finalize)

        s.obj.On("credentialsFound", s.credentialsFound)
        s.obj.On("credentialsNotFound", s.credentialsNotFound)
        s.obj.On("twoFactorAuthRequired", s.twoFactorAuthRequired)
        s.obj.On("requestFailed", s.requestFailed)
        return s
}

NewService creates the service instance, and then asks the qml package to run some logic in the main Qt thread via RunMain. This is necessary because a number of operations in Qt, including the creation of objects, are associated with the currently running thread. Using RunMain in this case ensures that the creation of the C++ object performed by newSSOService happens in the main Qt thread (the “GUI thread”).

Then, the address of the C++ UbuntuOne::SSOService type is handed to CommonOf to obtain a Common value that implements all the common logic supported by C++ types that inherit from QObject. This is an unsafe operation as there’s no way for CommonOf to guarantee that the provided address indeed points to a C++ value with a type that inherits from QObject, so the call site must necessarily import the unsafe package to provide the unsafe.Pointer parameter. That’s not a problem in this context, though, since such extension packages are necessarily dealing with unsafe logic in either case.

The obtained Common value is then assigned to the service’s obj field. In most cases, that value is instead assigned to an anonymous Common field, as done in qml.Window for example. Doing so means qml.Window values implement the qml.Object interface, and may be manipulated as a generic object. For the new Service type, though, the fact that this is a generic object won’t be disclosed for the moment, and instead a simpler API will be offered.

Following the function logic further, a finalizer is then registered to ensure the C++ value gets deallocated if the developer forgets to Close the service explicitly. When doing that, it’s important to ensure the Close method drops the finalizer when called, not only to facilitate the garbage collection of the object, but also to avoid deallocating the same value twice.
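
A minimal sketch of that arrangement follows, with ssoServiceDelete standing in as a hypothetical cgo helper analogous to newSSOService:

func (s *Service) finalize() {
        s.Close()
}

func (s *Service) Close() {
        runtime.SetFinalizer(s, nil) // prevent a second deallocation
        qml.RunMain(func() {
                C.ssoServiceDelete(unsafe.Pointer(s.obj.Addr()))
        })
}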

The next four lines in the function should be straightforward: they register methods of the service to be called when the relevant signals are emitted. Here is the implementation of two of these methods:

func (s *Service) credentialsFound(token *Token) {
        s.sendReply(reply{token: token})
}

func (s *Service) credentialsNotFound() {
        s.sendReply(reply{err: ErrNoCreds})
}

func (s *Service) sendReply(r reply) {
        select {
        case s.reply <- r:
        default:
                panic("internal error: multiple results received")
        }
}

Handling the signals consists of just sending the reply over the channel to whoever initiated the request. The select statement in sendReply just ensures that the invariant of having a reply per request is not broken without being noticed, as that would require a slightly different design.

There’s one more point worth observing in this logic: the token value received as a parameter in credentialsFound was already converted into the local Token type. In most cases, this is unnecessary as the parameter is directly useful as a qml.Object or as another native type (int, etc), but in this case UbuntuOne::Token is a plain C++ type that does not inherit from QObject, so the default signal parameter that would arrive in the Go method has only a type name and the value address.

Instead of taking the plain value, it is turned into a more useful one by registering a converter with the qml package:

func convertToken(engine *qml.Engine, obj qml.Object) interface{} {
        // Copy as the one held by obj is passed by reference.
        addr := unsafe.Pointer(obj.Property("plainAddr").(uintptr))
        token := &Token{C.tokenCopy(addr)}
        runtime.SetFinalizer(token, (*Token).finalize)
        return token
}

func init() {
        qml.RegisterConverter("Token", convertToken)
}

Given that setup, the Service.Token method may simply call the getCredentials method on the underlying UbuntuOne::SSOService value to fire a request, and block waiting for a reply:

func (s *Service) Token() (*Token, error) {
        s.mu.Lock()
        qml.RunMain(func() {
                C.ssoServiceGetCredentials(unsafe.Pointer(s.obj.Addr()))
        })
        reply := <-s.reply
        s.mu.Unlock()
        return reply.token, reply.err
}

The lock ensures that a second request won’t take place before the first one is done, forcing the correct sequencing of replies. Given the current logic, this isn’t strictly necessary since all requests are equivalent, but this will remain correct even if other methods from SSOService are added to this interface.

The returned Token value may then be used to sign URLs by simply calling the respective underlying method:

func (t *Token) HeaderSignature(method, url string) string {
        cmethod, curl := C.CString(method), C.CString(url)
        cheader := C.tokenSignURL(t.addr, cmethod, curl, 0)
        defer freeCStrings(cmethod, curl, cheader)
        return C.GoString(cheader)
}

No care needs to be taken with qml.RunMain in this case, because UbuntuOne::Token is a simple C++ type that does not interact with the Qt machinery.

This completes the journey of creating a package that provides access to the ubuntuoneauth library from Go code. In many cases it’s a better idea to simply rewrite the logic in Go, but there will be situations similar to this library, where either rewriting would take more time than reasonable, or perhaps delegating the maintenance and stabilization of the underlying logic to a different team is the best thing to do. In those cases, an approach such as the one exposed here can easily solve the problem.

niemeyer

As originally shared on Google+, and as a follow-up to the previous post covering OpenGL on Go QML, a new screencast was published to demonstrate the latest features introduced around OpenGL support in Go QML.

niemeyer

As recently announced, my latest endeavor at Canonical is to enable graphical client-side development with the Go language via a new qml Go package that integrates the language with Qt’s QML framework.

The QML framework solves the problem of designing graphic applications via a language that offers a convenient mix of declarative and procedural features. As a very simple example, if the following QML content is loaded by itself, it will draw a square centered inside a window:

import QtQuick 2.0

Rectangle {
        width: 640
        height: 280
        color: "black"

        Rectangle {
                width: 250; height: 250
                anchors.centerIn: parent
                color: "red"

                MouseArea {
                        anchors.fill: parent
                        onClicked: console.log("Stop poking me!")
                }
        }
}

As expected, it would look similar to:

Centralized Rectangle

The red rectangle will remain centered as the window is resized, and will print a message every time it is clicked. The clicked signal was hooked into logic implemented in JavaScript in this case, but it might as well have been hooked into custom Go logic with the same ease using the qml package. With very minor changes, it could also be something much more interesting than a red rectangle, such as a modern web browser:

import QtQuick 2.0
import QtWebKit 3.0

Rectangle {
        width: 640
        height: 280
        color: "black"

        WebView {
                width: 250; height: 250
                anchors.centerIn: parent
                url: "http://golang.org"
        }
}

The result would be:

Web Page Rectangle

It’s worth noting that this wasn’t just a screenshot of a web page, but rather a tiny fully functional web browser accessing a web page, easily embedded into the QML application to satisfy whatever browsey needs the author might have.

Being able to leverage all these existing building blocks so comfortably is what makes a graphics platform attractive to develop in, and is what inspired the ongoing effort to have that platform working under the Go language. As we can observe in some of the work published, that side of things seems to be going well.

Now, the next level up is to enable people to create such building blocks without leaving the Go language. The initial step towards that is already committed to the code repository. It enables Go types to be created and seamlessly integrated into the QML language. For example, consider this simple Go type:

type GoType struct {
        Text string
}

func (v *GoType) OnTextChanged() {
        fmt.Println("Text changed...")
}

If this type is made available to QML content via the qml.RegisterTypes function, it may then be used as any other native type. This would work as a QML file, for instance:

import GoExtensions 1.0

GoType {
        text: "Have you signed up for GopherCon yet?"
}
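
The registration itself takes a single call on the Go side. As a sketch, assuming the TypeSpec-based form of qml.RegisterTypes:

func init() {
        qml.RegisterTypes("GoExtensions", 1, 0, []qml.TypeSpec{{
                Init: func(v *GoType, obj qml.Object) {},
        }})
}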

There’s a relevant detail, though: this type has no visible content by itself. This means that displaying something must be done via interactions with other QML items, or via external systems (dbus, for example).

Solving that problem has been one of my objectives in the last month, and although it’s not yet publicly available, the work is in a good-enough state that I feel comfortable talking about it.

As described, the main goal is enabling these custom types to paint. This will be achieved by exposing a Go package that offers an OpenGL API, and defining methods that the custom types have to implement for being able to render at appropriate times. Although the details aren’t finalized, the current draft can run familiar OpenGL code similar to the following:

func (v *GoType) Paint(p *qml.Painter) {
        obj := p.Object()
        width := gl.Float(obj.Int("width"))
        height := gl.Float(obj.Int("height"))

        gl.Enable(gl.BLEND)
        gl.BlendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
        gl.Color4f(1.0, 1.0, 1.0, 0.8)
        gl.Begin(gl.QUADS)
        gl.Vertex2f(0, 0)
        gl.Vertex2f(width, 0)
        gl.Vertex2f(width, height)
        gl.Vertex2f(0, height)
        gl.End()

        gl.LineWidth(2.5)
        gl.Color4f(0.0, 0.0, 0.0, 1.0)
        gl.Begin(gl.LINES)
        gl.Vertex2f(0, 0)
        gl.Vertex2f(width, height)
        gl.Vertex2f(width, 0)
        gl.Vertex2f(0, height)
        gl.End()
}

One interesting point to realize is that, again, this isn’t just rendering into a lonely OpenGL context. The rendered content goes into a framebuffer that lives within the overall QML scene graph, and has the same integration capabilities as every other QML item.

In the following animated image, for example, the white square was generated with the above GL code, and was rendered next to other native items with this QML source:

Go + OpenGL Example

Note the smooth blending and proper overlapping established in the scene graph based on the ordering of elements in the QML source. The animation was also driven by QML without special handling of the content rendered from Go.

Another relevant point in terms of the integration with Go is that the GL code demonstrated above is considered old-school these days. The modern version uses fewer calls, making better use of the graphics card memory by transferring, tracking, and handling data such as vertexes in contiguous arrays. This reduces the impact of the cross-context calls that depart and rejoin the Go runtime, and can unlock interesting use cases that the overhead might otherwise prevent.

In terms of availability of these features, we’re about to enter a holiday season, so they should be polished enough for a first public review at some point in January. Looking forward to it.

UPDATE (2014-01-27): These features are now publicly available. The painting example demonstrates the exact scenario pictured above.

UPDATE (2014-02-18): New screencast demonstrates some of the OpenGL features of Go QML.

niemeyer

About a year ago I ordered a pack of 10 atmega328p processors from China to play with. They took a while to get here, and it took even longer for me to get back to them, but a few days ago the motivation to start doing something finally appeared.

I’ve never actually played with AVRs before, and felt a bit like I was skipping a step in my electronics enthusiast progress by not diving into their architecture a bit more deeply. Also, despite the obvious advantages of ARM-based chips these days, the platform is still interesting from some perspectives, such as its widespread availability, low price in small quantities, and the ability to plug the chips into a breadboard and do things without pretty much any circuitry.

To get acquainted with the architecture and to depart from things I work on more frequently, the project is so far taking the shape of an assembly library of functionality relevant for developing small projects, built mainly around binutils for the AVR. I did end up cheating a bit and compiling the assembly code via avr-gcc, just to get the __do_copy_data initialization routine injected, so that I don’t have to pull up the .data section from program memory into RAM manually.

I started running the test programs with the chip itself, with the help of a Bus Pirate, to see if the whole setup was sound. Once it worked a few times, I moved on to the simulavr simulator to make the process of running and debugging more comfortable. In addition to being able to attach gdb and trace execution, one of the nice features of simulavr is being able to map a port from the emulated CPU and have bytes written to it sent to an arbitrary file in the outer world. That means we can easily implement a trivial println-like function in assembly:

.set    STDOUT, 0x20            ; port mapped by simulavr to an output file

loop:   ld  r17, Z+             ; load the next byte of the string
        cpi r17, 0              ; reached the NUL terminator?
        breq done

        sts STDOUT, r17         ; write the byte to the mapped port
        rjmp loop

done:   ldi r17, '\n            ; finish the line with a newline
        sts STDOUT, r17

Printing strings is only helpful if we do have strings, though, and with such a skeleton system there are no interesting ones yet. What we do have are registers, lots of them (32 in total). A good candidate for the next function would then be an itoa-like function that would put the proper bytes in memory for printing.

So, after going down that road for a bit longer, the lack of a proper way to run tests on the created code became an evident showstopper. There’s no way the created code will be sane without being able to exercise it and write tests that can be rerun at will. Fortunately, it’s easy enough to apply traditional testing practices to such an environment, given the simulator features mentioned.

To drive those tests, a small tool named avrtest was written in Go. It takes an avrtest.list file that looks like this:

devices: atmega328

[div8u]

main:
        ldi     r24, 128 ; dividend
        ldi     r22, 10  ; divisor
        call    div8u
        prnt8u  r24      ; result
        prnt8u  r22      ; divisor
        prnt8u  r20      ; remainder

want:
        12
        10
        8

[itoa8u]

cycle-limit: 400

main:
        ldi     r24, 128
        call    itoa8u
        prntz

want:
        128

and runs it, showing the typical test runner output:

% ./avrtest
div8u   ok  (784 cycles)
itoa8u  ok  (356 cycles)

or the typical failure, when appropriate:

div8u   failed: unexpected output
        want:
                13
                10
                8
        
        got:
                12
                10
                8

If the failure feels a bit cryptic, all of the intermediary files are kept under the ./_avrtest directory, including a detailed trace file. Here is a snippet of such a trace:

div8u.elf 0x0194: itoa8u      LDI R30, 0x0a 
div8u.elf 0x0196: itoa8u+0x1  LDI R31, 0x01 
div8u.elf 0x0198: itoa8u+0x2  PUSH R17 SP=0x8f6 0x1 
div8u.elf 0x0198: itoa8u+0x2  CPU-waitstate
div8u.elf 0x019a: itoa8u+0x3  LDI R17, 0x30 
div8u.elf 0x019c: itoa8u+0x4  LDI R22, 0x0a 
div8u.elf 0x019e: itoa8u_loop CALL 0x178 SP=0x8f5 0xd1 SP=0x8f4 0x0

Besides that, we should be able to attach gdb to any given test by running the command avrtest gdb <name>. That’s not yet there, but should be pretty soon, after the next cryptic breakage. :-)

That tooling is not organized for a proper release, but I’ll certainly push it up to a public repository as soon as I get a chance to clean up the sandbox.

niemeyer

As part of one of the projects we’ve been pushing at Canonical, I spent a few days researching about the possibility of extending a compiled Go application with a tiny language that would allow expressing simple procedural logic in a controlled environment. Although we’re not yet sure of the direction we’ll take, the result of this short experiment is being released as the twik language for open fiddling.

The implementation is straightforward, with under 400 lines for the parser and evaluator, and under 350 lines in the default functions provided for the language skeleton: var, func, do, if, and, or, etc.

It also comes with an interactive interpreter to play with. You can install it with:

$ go get launchpad.net/twik/cmd/twik

This is a short sample session:

> (var x 1)
> x
1
> (set x 2)
> x
2
> (set x (func (n) (+ n 1)))
> x
#func
> (x 1)
2
> (func inc (n) (+ n 1))
#func
> (inc 42)
43

Another one demonstrating the lexical scoping:

> (var add
.      (do
.          (var n 0)
.          (func (m) (set n (+ n m)) n)
.      )
. )
> (add 5)
5
> (add -1)
4
> n
twik source:1:1: undefined symbol: n

New functionality may be plugged in by providing Go functions. For example, here is a simple printf function:

func printf(args []interface{}) (interface{}, error) {
        if len(args) > 0 {
                if format, ok := args[0].(string); ok {
                        _, err := fmt.Printf(format, args[1:]...)
                        return nil, err
                }
        }
        return nil, fmt.Errorf("printf takes a format string")
}

func main() {
        ...
        err = scope.Create("printf", printf)
        ...
}

It can now greet the world:

$ cat test.twik

(func hello (name)
      (printf "Hello %s!\n" name)
)

(hello "world")

$ time ./twik test.twik
Hello world!
./twik test.twik  0.00s user 0.00s system 74% cpu 0.005 total

niemeyer

Note: This is a candidate version of the specification. This note will be removed once v1 is closed, and any changes will be described at the end. Please get in touch if you’re implementing it.

Introduction

This specification defines strepr, a stable representation that enables computing hashes and cryptographic signatures out of a defined set of composite values that is commonly found across a number of languages and applications.

Although the defined representation is a serialization format, it isn’t meant to be used as a traditional one. It may not be seen entirely in memory at once, or written to disk, or sent across the network. Its role is specifically in aiding the generation of hashes and signatures for values that are serialized via other means (JSON, BSON, YAML, HTTP headers or query parameters, configuration files, etc).

The format is designed with the following principles in mind:

Understandable — The representation must be easy to understand to increase the chances of it being implemented correctly.

Portable — The defined logic works properly when the data is being transferred across different platforms and implementations, independently from the choice of protocol and serialization implementation.

Unambiguous — As a natural requirement for producing stable hashes, there is a single way to process any supported value being held in the native form of the host language.

Meaning-oriented — The stable representation holds the meaning of the data being transferred, not its type. For example, the number 7 must be represented in the same way whether it’s being held in a float64 or in a uint16.


Supported values

The following values are supported:

  • nil: the nil/null/none singleton
  • bool: the true and false singletons
  • string: raw sequence of bytes
  • integers: positive, zero, and negative integer numbers
  • floats: IEEE754 binary floating point numbers
  • list: sequence of values
  • map: associative value→value pairs


Representation

nil = 'z'

The nil/null/none singleton is represented by the single byte 'z' (0x7a).

bool = 't' / 'f'

The true and false singletons are represented by the bytes 't' (0x74) and 'f' (0x66), respectively.

unsigned integer = 'p' <value>

Positive and zero integers are represented by the byte 'p' (0x70) followed by the variable-length encoding of the number.

For example, the number 131 is always represented as {0x70, 0x81, 0x03}, independently from the type that holds it in the host language.

negative integer = 'n' <absolute value>

Negative integers are represented by the byte 'n' (0x6e) followed by the variable-length encoding of the absolute value of the number.

For example, the number -131 is always represented as {0x6e, 0x81, 0x03}, independently from the type that holds it in the host language.

string = 's' <num bytes> <bytes>

Strings are represented by the byte 's' (0x73) followed by the variable-length encoding of the number of bytes in the string, followed by the specified number of raw bytes. If the string holds a list of Unicode code points, the raw bytes must contain their UTF-8 encoding.

For example, the string hi is represented as {0x73, 0x02, 'h', 'i'}.

Due to the complexity involved, Unicode normalization is not required by this specification. Consequently, Unicode strings that would be equal if normalized may have different stable representations.
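
As an illustration only (not part of the specification), here is a minimal Go sketch of that rule; encodeUint is the variable-length encoder sketched later, under the Variable-length encoding section:

// encodeString emits 's', then the variable-length byte count, then
// the raw bytes. Go strings are raw byte sequences, already UTF-8
// when they hold text.
func encodeString(buf []byte, s string) []byte {
        buf = append(buf, 's')
        buf = encodeUint(buf, uint64(len(s)))
        return append(buf, s...)
}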

binary float = 'd' <binary64>

32-bit or 64-bit IEEE754 binary floating point numbers that are not holding integers are represented by the byte 'd' (0x64) followed by the big-endian 64-bit IEEE754 binary floating point encoding of the number.

There are two exceptions to that rule:

1. If the floating point value is holding a NaN, it must necessarily be encoded by the following sequence of bytes: {0x64, 0x7f, 0xf8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}. This ensures all NaN values have a single representation.

2. If the floating point value is holding an integer number it must instead be encoded as an unsigned or negative integer, as appropriate. Floating point values that hold integer numbers are defined as those where floor(v) == v && abs(v) != ∞.

For example, the value 1.1 is represented as {0x64, 0x3f, 0xf1, 0x99, 0x99, 0x99, 0x99, 0x99, 0x9a}, but the value 1.0 is represented as {0x70, 0x01}, and -0.0 is represented as {0x70, 0x00}.

This distinction means all supported numbers have a single representation, independently from the data type used by the host language and serialization format.
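
A Go sketch of those rules follows, again as an illustration only; encodeUint is the variable-length encoder sketched under the Variable-length encoding section, and the standard math package is assumed:

// encodeFloat applies the float rules: NaN gets the canonical bit
// pattern, integral values collapse to the integer representation,
// and everything else is 'd' plus the big-endian binary64 bits.
// (imports: math)
func encodeFloat(buf []byte, v float64) []byte {
        if math.IsNaN(v) {
                // Single canonical representation for all NaN values.
                return append(buf, 0x64, 0x7f, 0xf8, 0, 0, 0, 0, 0, 0)
        }
        if math.Floor(v) == v && !math.IsInf(v, 0) {
                // Integral values use the integer representation.
                // (This sketch ignores magnitudes beyond the uint64 range.)
                if v >= 0 {
                        return encodeUint(append(buf, 'p'), uint64(v))
                }
                return encodeUint(append(buf, 'n'), uint64(-v))
        }
        bits := math.Float64bits(v)
        buf = append(buf, 'd')
        for shift := 56; shift >= 0; shift -= 8 {
                buf = append(buf, byte(bits>>uint(shift)))
        }
        return buf
}

Note how encodeFloat(nil, -0.0) takes the integer path and yields {0x70, 0x00}, matching the example above.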

list = 'l' <num items> [<item> ...]

Lists of values are represented by the byte 'l' (0x6c), followed by the variable-length encoding of the number of items in the list, followed by the stable representation of each item in the list in the original order.

For example, the value [131, -131] is represented as {0x6c, 0x02, 0x70, 0x81, 0x03, 0x6e, 0x81, 0x03}.

map = 'm' <num pairs> [<item key> <item value>  ...]

Associative maps of values are represented by the byte 'm' (0x6d) followed by the variable-length encoding of the number of pairs in the map, followed by an ordered sequence of the stable representation of each key and value in the map. The pairs must be sorted so that the stable representation of the keys is in ascending lexicographical order. A map must not have multiple keys with the same representation.

For example, the map {"a": 4, 5: "b"} is always represented as {0x6d, 0x02, 0x70, 0x05, 0x73, 0x01, 'b', 0x73, 0x01, 'a', 0x70, 0x04}.
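
To make the sorting requirement concrete, here is an illustrative Go sketch (not part of the specification) that takes pre-encoded key and value representations and assumes the standard bytes and sort packages:

// encodeMap sorts pairs by the stable representation of their keys
// before emitting them, so logically equal maps always produce
// identical bytes. keys[i] and vals[i] are assumed to be already
// encoded with the rules above.
// (imports: bytes, sort)
func encodeMap(buf []byte, keys, vals [][]byte) []byte {
        idx := make([]int, len(keys))
        for i := range idx {
                idx[i] = i
        }
        sort.Slice(idx, func(a, b int) bool {
                return bytes.Compare(keys[idx[a]], keys[idx[b]]) < 0
        })
        buf = encodeUint(append(buf, 'm'), uint64(len(keys)))
        for _, i := range idx {
                buf = append(buf, keys[i]...)
                buf = append(buf, vals[i]...)
        }
        return buf
}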


Variable-length encoding

Integers are variable-length encoded so that they can be represented in short space and with unbounded size. In an encoded number, the last byte holds the 7 least significant bits of the unsigned value, and zero as the eighth bit. If there are remaining non-zero bits, the previous byte holds the next 7 bits, and the eighth bit is set to flag the continuation to the next byte. The process repeats until no non-zero bits remain. The most significant bits end up in the first byte of the encoded value, which must necessarily not be 0x80.

For example, the number 128 is variable-length encoded as {0x81, 0x00}.
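
Here is a minimal Go sketch of that encoder, illustrative only and not part of the specification:

// encodeUint appends the variable-length encoding of v to buf:
// 7 bits per byte, most significant group first, with the eighth
// bit set on every byte except the last one.
func encodeUint(buf []byte, v uint64) []byte {
        // Collect 7-bit groups from least to most significant.
        var groups [10]byte
        n := 0
        for {
                groups[n] = byte(v & 0x7f)
                n++
                v >>= 7
                if v == 0 {
                        break
                }
        }
        // Emit them in reverse, flagging continuation on all but the last.
        for i := n - 1; i > 0; i-- {
                buf = append(buf, groups[i]|0x80)
        }
        return append(buf, groups[0])
}

As a quick check, encodeUint(nil, 128) yields {0x81, 0x00}, matching the example above.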


Reference implementation

A reference implementation is available, including a test suite which should be considered when implementing the specification.


Changes

draft1 → draft2

  • Enforce the use of UTF-8 for Unicode strings and explain why normalization is being left out.
  • Enforce a single NaN representation for floats.
  • Explain that map key uniqueness refers to the representation.
  • Don’t claim the specification is easy to implement; floats require attention.
  • Mention reference implementation.

niemeyer

The very first time the concepts behind the juju project were presented, by then still under the prototype name of Ubuntu Pipes, was about four years ago, in July of 2009. It was a short meeting with Mark Shuttleworth, Simon Wardley, and myself, when Canonical still had an office on a tall building by the Thames. That was just the seed of a long road of meetings and presentations that eventually led to the codification of these ideas into what today is a major component of the Ubuntu strategy on servers.

Despite having covered the core concepts many times in those meetings and presentations, it recently occurred to me that they were never properly written down in any reasonable form. This is an omission that I’ll attempt to fix with this post while still holding the proper context in mind and while things haven’t changed too much.

It’s worth noting that I’ve stepped aside as the project technical lead in January, which makes it more likely that some of these ideas will take a turn, but they are still of historical value, and true for the time being.

This post is long enough to deserve an index, but its sections build up concepts incrementally, so sequential reading is best for a full understanding.


Classical deployments

In a simplistic sense, deploying an application means configuring and running a set of processes in one or more machines to compose an integrated system. This procedure includes not only configuring the processes for particular needs, but also appropriately interconnecting the processes that compose the system.

The following figure depicts a simple example of such a scenario, with two frontend machines that had the Wordpress software configured on them to serve the same content out of a single backend machine running the MySQL database.

Deploying even that simple environment already requires the administrator to deal with a variety of tasks, such as setting up physical or virtual machines, provisioning the operating system, installing the applications and the necessary dependencies, configuring web servers, configuring the database, configuring the communication across the processes including addresses and credentials, firewall rules, and so on. Then, once the system is up, the deployed system must be managed throughout its whole lifecycle, with upgrades, configuration changes, new services integrated, and more.

The lack of a good mechanism to turn all of these tasks into high-level operations that are convenient, repeatable, and extensible, is what motivated the development of juju. The next sections provide an overview of how these problems are solved.


Preparing a blank slate

Before diving into the way in which juju environments are organized, a few words must be said about what a juju environment is in the first place.

All resources managed by juju are said to be within a juju environment, and such an environment may be prepared by juju itself as long as the administrator has access to one of the supported infrastructure providers (AWS, OpenStack, MAAS, etc).

In practice, creating an environment is done by running juju’s bootstrap command:

$ juju bootstrap

This will start a machine in the configured infrastructure provider and prepare it for running the juju state server to control the whole environment. Once the machine and the state server are up, they’ll wait for future instructions that are provided via follow-up commands or alternative user interfaces.


Service topologies

The high-level perspective that juju takes about an environment and its lifecycle is similar to the perspective that a person has about them. For instance, although the classical deployment example provided above is simple, the mental model that describes it is even simpler, and consists of just a couple of communicating services:

That’s pretty much the model that an administrator using juju has to input into the system for that deployment to be realized. This may be achieved with the following commands:

$ juju deploy cs:precise/wordpress
$ juju deploy cs:precise/mysql
$ juju add-relation wordpress mysql

These commands will communicate with the previously bootstrapped environment, and will input into the system the desired model. The commands themselves don’t actually change the current state of the deployed software, but rather inform the juju infrastructure of the state that the environment should be in. After the commands take place, the juju state server will act to transform the current state of the deployment into the desired one.

In the example described, for instance, juju starts by provisioning two new machines that are able to run the service units responsible for Wordpress and MySQL, and configures those machines to run agents that manipulate the system as needed to realize the requested model. An intermediate stage of that process might conceptually be represented as:

[figure: topology-step-1]

The service units are then provided with the information necessary to configure and start the real software that is responsible for the requested workload (Wordpress and MySQL themselves, in this example), and are also provided with a mechanism that enables service units that were related together to easily exchange data such as addresses, credentials, and so on.

At this point, the service units are able to realize the requested model:

[figure: topology-step-2]

This is close to the original scenario described, except that there’s a single frontend machine running Wordpress. The next section details how to add that second frontend machine.


Scaling services horizontally

The next step to match the original scenario described is to add a second service unit that can run Wordpress, and that can be achieved by the single command:

$ juju add-unit wordpress

No further commands or information are necessary, because the juju state server understands what the model of the deployment is. That model includes both the configuration of the involved services and the fact that units of the wordpress service should talk to units of the mysql service.

This final step makes the deployed system look equivalent to the original scenario depicted:

[figure: topology-step-3]

Although that is equivalent to the classic deployment first described, as hinted by these examples an environment managed by juju isn’t static. Services may be added, removed, reconfigured, upgraded, expanded, contracted, and related together, and these actions may take place at any time during the lifetime of an environment.

The way that the service reacts to such changes isn’t enforced by the juju infrastructure. Instead, juju delegates service-specific decisions to the charm that implements the service behavior, as described in the following section.


Charms

A juju-managed environment wouldn't be nearly as interesting if everything it could do were constrained by preconceived ideas that the juju developers had about what services should be supported and how they should interact among themselves and with the world.

Instead, the activities within a service deployed by juju are all orchestrated by a juju charm, which is generally named after the main software it exposes. A charm is defined by its metadata, one or more executable hooks that are called after certain events take place, and optionally some custom content.

The charm metadata contains basic declarative information, such as the name and description of the charm, relationships the charm may participate in, and configuration options that the charm is able to handle.

The charm hooks are executable files with well-defined names that may be written in any language. These hooks are run non-concurrently to inform the charm that something happened, and they give a chance for the charm to react to such events in arbitrary ways. There are hooks to inform that the service is supposed to be first installed, or started, or configured, or for when a relation was joined, departed, and so on.

This means that in the previous example the service units depicted are in fact reporting relevant events to the hooks that live within the wordpress charm, and those hooks are the ones responsible for bringing the Wordpress software and any other dependencies up.

[figure: wordpress-service-unit]

The interface offered by juju to the charm implementation is the same, independently from which infrastructure provider is being used. As long as the charm author takes some care, one can create entire service stacks that can be moved around among different infrastructure providers.


Relations

In the examples above, the concept of service relationships was introduced naturally, because it’s indeed a common and critical aspect of any system that depends on more than a single process. Interestingly, despite it being such a foundational idea, most management systems in fact pay little attention to how the interconnections are modeled.

With juju, it’s fair to say that service relations were part of the system since inception, and have driven the whole mindset around it.

Relations in juju have three main properties: an interface, a kind, and a name.

The relation interface is simply a unique name that represents the protocol that is conventionally followed by the service units to exchange information via their respective hooks. As long as the name is the same, the charms are assumed to have been written in a compatible way, and thus the relation is allowed to be established via the user interface. Relations with different interfaces cannot be established.

The relation kind informs whether a service unit that deploys the given charm will act as a provider, a requirer, or a peer in the relation. Providers and requirers are complementary, in the sense that a service that provides an interface can only have that specific relation established with a service that requires the same interface, and vice-versa. Peer relations are automatically established internally across the units of the service that declares the relation, and make it easy to cluster those units together to set up masters and slaves, rings, or any other structural organization that the underlying software supports.

The relation name uniquely identifies the given relation within the charm, and allows a single charm (and service and service units that use it) to have multiple relations with the same interface but different purposes. That identifier is then used in hook names relative to the given relation, user interfaces, and so on.

For example, the two communicating services described in examples might hold relations defined as:

[figure: wordpress-mysql-relation-details]

When that service model is realized, juju will eventually inform all service units of the wordpress service that a relation was established with the respective service units of the mysql service. That event is communicated via hooks being called on both units, in a way resembling the following representation:

[figure: wordpress-mysql-relation-workflow]

As depicted above, such an exchange might take the following form:

  1. The administrator establishes a relation between the wordpress service and the mysql service, which causes the service units of these services (wordpress/1 and mysql/0 in the example) to relate.
  2. Both service units concurrently call the relation-joined hook for the respective relation. Note that the hook is named after the local relation name for each unit. Given the conventions established for the mysql interface, the requirer side of the relation does nothing, and the provider supplies the credentials and database name that should be used.
  3. The requirer side of the relation is informed that relation settings have changed via the relation-changed hook. This hook implementation may pick up the provided settings and configure the software to talk to the remote side.
  4. The Wordpress software itself is run, and establishes the required TCP connection to the configured database.

In that workflow, neither side knows for sure what service it is being related to. It would be feasible (and probably welcome) to have the mysql service replaced by a mariadb service that provided a compatible mysql interface, and the wordpress charm wouldn’t have to be changed to communicate with it.

Also, although this example and many real world scenarios will have relations reflecting TCP connections, this may not always be the case. It’s reasonable to have relations conveying any kind of metadata across the related services.


Configuration

Service configuration follows the same model of metadata plus executable hooks that was described above for relations. A charm can declare what configuration settings it expects in its metadata, and how to react to setting changes in an executable hook named config-changed. Then, once a valid setting is changed for a service, all of the respective service units will have that hook called to reflect the new configuration.

Changing a service setting via the command line may be as simple as:

$ juju set wordpress title="My Blog"

This will communicate with the juju state server, record the new configuration, and consequently incite the service units to realize the new configuration as described. For clarity, this process may be represented as:

[figure: config-changed]


Taking from here

This conceptual overview hopefully provides some insight into the original thinking that went into designing the juju project. For more in-depth information on any of the topics covered here, the following resources are good starting points:

niemeyer

Since relatively early in the public life of the Go language, I’ve been involved in pushing forward packages that might be used in Ubuntu, including making the compiler suite itself happier in such packaged environments. In due time, these packages were moved over to an automatic build system, so that people wouldn’t have to rely on my good will to have up-to-date packages, nor would I have to be regularly spending time maintaining those packages. Or so was the theory.

It’s well known that the real world is not so plain, though, and issues became much more regular than hoped. Some of the issues were caused by changes in the build conventions of Go, others were self-inflicted due to my limited knowledge of the extensive conventions around packaging or to bugs in indirect dependencies of the process, and more recently the sub-optimal scheduling algorithm used by the build farm drove the builds to a halt.

So, the question is how to get out of this rabbit hole, but still give people a convenient way to use Go in Ubuntu.

Enter godeb, an experiment that dynamically translates the upstream builds of Go into deb packages. In practice, it’s a simple standalone Go program that can parse the build list, fetch the requested version, and in memory translate the contents into a correct binary deb package.
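
For the curious, the container side of that translation is simple: a binary deb is just an ar archive holding a debian-binary member, a control.tar.gz, and a data.tar.gz, in that order. Here is a rough sketch of writing the outer container in memory; this is not godeb's actual code, and it assumes the two tarballs were already built by the caller:

// writeDeb assembles a binary deb in memory out of pre-built
// control and data tarballs.
// (imports: bytes, fmt, time)
func writeDeb(control, data []byte) []byte {
        var buf bytes.Buffer
        buf.WriteString("!<arch>\n")
        member := func(name string, body []byte) {
                // Classic 60-byte ar header: name, mtime, uid, gid,
                // mode, size, and the closing magic.
                fmt.Fprintf(&buf, "%-16s%-12d%-6d%-6d%-8s%-10d`\n",
                        name, time.Now().Unix(), 0, 0, "100644", len(body))
                buf.Write(body)
                if len(body)%2 == 1 {
                        buf.WriteByte('\n') // ar members are 2-byte aligned
                }
        }
        member("debian-binary", []byte("2.0\n"))
        member("control.tar.gz", control)
        member("data.tar.gz", data)
        return buf.Bytes()
}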

Since you cannot build a Go application without a Go compiler first, there’s an x86 32-bit binary and an x86 64-bit binary of godeb available for download. After the compiler is installed, godeb may be fetched and rebuilt locally by running go get launchpad.net/godeb.

Once the godeb binary is available, it’s easy to get up-to-date packages:

$ ./godeb install
processing https://go.googlecode.com/files/go1.1.1.linux-amd64.tar.gz
package go_1.1.1-godeb1_amd64.deb ready
Selecting previously unselected package go.
(Reading database ... 488515 files and (...) installed.)
Unpacking go (from go_1.1.1-godeb1_amd64.deb) ...
Setting up go (1.1.1-godeb1) ...

It figures out what the most recent available build is, then downloads, translates, and installs it, asking for a password via sudo if necessary. Running godeb install again will fetch the latest version (or the requested one) and replace the currently installed package. Package installs default to the same architecture as godeb itself, and may be changed by setting the GOARCH environment variable to 386 or amd64, borrowing from a Go convention.
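
For example, the following would install the 386 build of a given version regardless of the host architecture:

$ GOARCH=386 ./godeb install 1.1.1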

New releases of Go are immediately available, and so are the old ones:

$ ./godeb list
1.2
1.2rc5
1.2rc4
1.2rc3
1.2rc2
1.2rc1
1.1.2
1.1.1
1.1
(...)

$ ./godeb -h
Usage: godeb <command> [<options> ...]

Available commands:

    list
    install [<version>]
    download [<version>]
    remove

For the time being, I’m holding off on maintenance of the Go PPA in Launchpad in favor of this system. Of course, you can still install the golang-* packages on Ubuntu 12.10 and 13.04 from the official repositories as usual.

niemeyer

A few years ago, when I started pondering about the possibility of porting juju to the Go language, one of the first pieces of the puzzle that were put in place was goyaml: a Go package to parse and serialize a yaml document. This was just an experiment and, as a sane route to get started, a Go layer that does all the language-specific handling was written on top of the libyaml C scanner, parser, and serializer library.

This was a good initial plan, but for a number of reasons the end goal was always to have a pure Go implementation. Having a C layer in a Go program slows down builds significantly due to the time taken to build the C code, makes compiling in other platforms and cross-compiling harder, has certain runtime penalties, and also forces the application to drop the memory safety guarantees offered by Go.

For these reasons, over the last couple of weeks I took a few hours a day to port the C backend to Go. The total time, considering full-time work days, would be equivalent to about a week’s worth of work.

The work started on the scanner and parser side of the library. This took most of the time, not only because it encompassed more than half of the code base, but also because the shared logic had to be ported too, and there was a need to understand which patterns were used in the old code and how they would be converted across in a reasonable way.

The whole scanner and parser plus header files, or around 5000 code lines of C, were ported over in a single shot without intermediate runs. To steer the process in a sane direction, gofmt was called often to reformat the converted code, and then the project was compiled every once in a while to make sure that the pieces were hanging together properly enough.

It’s worth highlighting how useful gofmt was in that process. The C code was converted in whatever way was most convenient to type, and then gofmt would quickly put it all together in a familiar form for analysis. Quite often it would also point out trivial syntactic issues. A double win.

After the scanner and parser were finally converted completely, the pre-existing Go unmarshaling logic was shifted to the new pure implementation, and the reading side of the test suite could run as-is. Naturally, though, it didn’t work out of the box.

To quickly pick up the errors in the new implementation, the C logic and the Go port were put side-by-side to run the same tests, and tracing was introduced in strategic points of the scanner and parser. With that, it was easy to spot where they diverged and pinpoint the human errors.

It took about two hours to get the full suite to run successfully, with a handful of bugs uncovered. Out of curiosity, the issues were:

  • An improperly dropped parenthesis affected the precedence of an expression
  • A slice was being iterated with copying semantics where a reference was necessary
  • A pointer arithmetic conversion missed the base where there was base+offset addressing
  • An inner scoped variable improperly shadowed the outer scope

The same process of porting and test-fixing was then repeated on the serializing side of the project, in a much shorter time frame for the reasons cited.

The resulting code isn’t yet idiomatic Go. There are several signs in it that it was ported over from C: the name conventions, the use of custom solutions for buffering and reader/writer abstractions, the excessive copying of data due to the need of tracking data ownership so that the simple deallocating destructors don’t double-free, etc. It’s also been deoptimized, due to changes such as the removal of macros and, in many cases, their inlining, and the direct expansion of large unions, which causes some core objects to grow significantly.

At this point, though, it’s easy to gradually move the code base towards the common idiom in small increments as time permits, and to clean up those artifacts that were left behind.

This code will be made public over the next few days via a new goyaml release. Meanwhile, some quick facts about the process and outcome follow.

Lines of code

According to cloc, there was a total of 7070 lines of C code in .c and .h files. Of those, 6727 were ported, and 342 were 12 functions that were left unconverted as being unnecessary right now. Those 6727 lines of C became 5039 lines of Go code in a mostly one-to-one dumb translation.

That difference comes mainly from garbage collection, lack of forward declarations, standard helpers such as append, range-based for loops, first class slice type with length and capacity, internal OOM handling, and so on.

Future work can easily increase the difference further by replacing some of the ported logic with more sensible options available in Go, such as the standard abstractions for readers and writers, the buffered writing support available in the standard library, etc.

Code clarity and safety

In the specific context of the work done, which is that of a scanner, parser, and serializer, the slice abstraction is responsible for noticeable clarity gains in the code when compared to the equivalent logic based on pointer arithmetic. It also gives a much more comforting guarantee of correctness of the written code due to bounds checking.
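
To make that concrete with a hypothetical fragment (not taken from the actual code base): a C scanner routinely advances raw pointers with logic such as while (p < end && *p == ' ') p++;, while the Go counterpart operates on a slice:

// skipSpaces returns buf with any leading spaces dropped. Every
// access below is bounds-checked by the runtime, so an off-by-one
// mistake panics loudly instead of silently corrupting memory.
func skipSpaces(buf []byte) []byte {
        for len(buf) > 0 && buf[0] == ' ' {
                buf = buf[1:]
        }
        return buf
}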

Performance

While curious, this shouldn’t be taken as a performance comparison between the two languages, as it is comparing a fine-tuned C implementation with something that is worse than a direct one-to-one port: not only has it not seen any time at all spent on preventing waste, but the original logic was also deoptimized due to changes such as the removal of inlining macros and the expansion of large unions. There are many obvious changes to be done for improving performance.

With that out of the way, in a simple decoding benchmark the C-backed decoder runs in about 37% of the time taken by the out-of-the-box deoptimized Go port.

Output size

The previous goyaml.a Go package file was 1463kb. The new one is 1016kb. This difference includes glue code generated for the integration.

Considering only the .c and .h files involved in the port, the C object code generated with the standard flags used by the go build tool (-g -O2) adds up to 789kb. The equivalent Go code with the standard settings compiles to 664kb. The 12 functions not ported are also part of that difference, so the difference is pretty much negligible.

Build time

Building the 8 .c files alone takes 3.6 seconds with the standard flags used by the go build tool (-g -O2). After the port, building the entire Go project with the standard settings takes 0.3 seconds.

Mechanical changes

Many of the mechanical changes were done using regular expressions. Excluding the trivial ones, about a dozen regular expressions were used to swap variable and type names, drop parenthesis, place brackets in the right locations, convert function declarations, and so on.

niemeyer

Last week I was part of a rant with a couple of coworkers around the fact that Go handles errors for expected scenarios by returning an error value instead of using exceptions or a similar mechanism. This is a rather controversial topic because people have grown used to having errors out of their way via exceptions, and Go brings back an improved version of a well known pattern previously adopted by a number of languages — including C — where errors are communicated via return values. This means that errors are in the programmer’s face and have to be dealt with all the time. In addition, the controversy extends towards the fact that, in languages with exceptions, every unadorned error comes with a full traceback of what happened and where, which in some cases is convenient.

All this convenience has a cost, though, which is rather simple to summarize:

Exceptions teach developers to not care about errors.

A sad corollary is that this is relevant even if you are a brilliant developer, as you’ll be affected by the world around you being lenient towards error handling. The problem will show up in the libraries that you import, in the applications that are sitting in your desktop, and in the servers that back your data as well.

Raymond Chen described the issue back in 2004 as:

Writing correct code in the exception-throwing model is in a sense harder than in an error-code model, since anything can fail, and you have to be ready for it. In an error-code model, it’s obvious when you have to check for errors: When you get an error code. In an exception model, you just have to know that errors can occur anywhere.

In other words, in an error-code model, it is obvious when somebody failed to handle an error: They didn’t check the error code. But in an exception-throwing model, it is not obvious from looking at the code whether somebody handled the error, since the error is not explicit.
(…)
When you’re writing code, do you think about what the consequences of an exception would be if it were raised by each line of code? You have to do this if you intend to write correct code.

That’s exactly right. Every line that may raise an exception holds a hidden “else” branch for the error scenario that is very easy to forget about. Even if it sounds like a pointless repetitive task to be entering that error handling code, the exercise of writing it down forces developers to keep the alternative scenario in mind, and pretty often it doesn’t end up empty.
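
For contrast, here is what that discipline looks like in Go’s style. This is a generic sketch, not from any particular code base, assuming the standard io/ioutil and os packages:

// loadConfig makes the "else" branch for the error scenario visible,
// ordinary code: ignoring it requires an explicit decision.
// (imports: io/ioutil, os)
func loadConfig(path string) ([]byte, error) {
        data, err := ioutil.ReadFile(path)
        if err != nil {
                if os.IsNotExist(err) {
                        // The hidden "else" branch made explicit: a missing
                        // file is fine here, and means default settings.
                        return []byte("{}"), nil
                }
                return nil, err
        }
        return data, nil
}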

It isn’t the first time I’ve written about this, and given the controversy that surrounds these claims, I generally try to find one or two examples that bring the issue home. So here is the best example I could find today, within the pty module of Python’s 3.3 standard library:

def spawn(argv, master_read=_read, stdin_read=_read):
    """Create a spawned process."""
    if type(argv) == type(''):
        argv = (argv,)
    pid, master_fd = fork()
    if pid == CHILD:
        os.execlp(argv[0], *argv)
    (...)

Every time someone calls this logic with an improper executable in argv there will be a new Python process lying around, uncollected, and unknown to the application, because execlp will fail, and the process just forked will be disregarded. It doesn’t matter if a client of that module catches that exception or not. It’s too late. The local duty wasn’t done. Of course, the bug is trivial to fix by adding a try/except within the spawn function itself. The problem, though, is that this logic looked fine for everybody that ever looked at that function since 1994 when Guido van Rossum first committed it!

Here is another interesting one:

$ make clean
Sorry, command-not-found has crashed! Please file a bug report at:

https://bugs.launchpad.net/command-not-found/+filebug

Please include the following information with the report:

command-not-found version: 0.3
Python version: 3.2.3 final 0
Distributor ID: Ubuntu
Description:    Ubuntu 13.04
Release:        13.04
Codename:       raring
Exception information:

unsupported locale setting
Traceback (most recent call last):
  File "/.../CommandNotFound/util.py", line 24, in crash_guard
    callback()
  File "/usr/lib/command-not-found", line 69, in main
    enable_i18n()
  File "/usr/lib/command-not-found", line 40, in enable_i18n
    locale.setlocale(locale.LC_ALL, '')
  File "/usr/lib/python3.2/locale.py", line 541, in setlocale
    return _setlocale(category, locale)
locale.Error: unsupported locale setting

That’s a pretty harsh crash for the lack of locale data in a system-level application that is, ironically, supposed to tell users what packages to install when commands are missing. Note that at the top of the stack there’s a reference to crash_guard. This function has the intent of catching all exceptions right at the edge of the call stack, and displaying a detailed system specification and traceback to aid in fixing the problem.

Such “parachute catching” is a fairly common pattern in exception-oriented programming and tends to give developers the false sense of having good error handling within the application. Rather than actually guarding the application, though, it’s just a useful way to crash. The proper thing to have done in the case above would be to print a warning, if at all, and then let the program run as usual. This would have been achieved by simply wrapping that one line as in:

try:
    locale.setlocale(locale.LC_ALL, '')
except Exception as e:
    print("Cannot change locale:", e)

Clearly, it was easy to handle that one. The problem, again, is that it was very natural to not do it in the first place. In fact, it’s more than natural: it actually feels good to not be looking at the error path. It’s less code, more linear, and what’s left is the most desired outcome.

The consequence, unfortunately, is that we’re immersing ourselves in a world of brittle software and pretty whales. Although more verbose, the error result style builds the correct mindset: does that function or method have a possible error outcome? How is it being handled? Is that system-interacting function not returning an error? What is being done with the problem that, of course, can happen?

A surprising number of crashes and plain misbehavior is a result of such unconscious negligence.

niemeyer

This weekend the right environment finally came together for sorting out a pet peeve that shows up every once in a while when coding: writing logic that interacts with other applications in the system via their stdin and stdout streams is often more involved than it should be, which seems pretty ironic when sitting in front of a Unix-like system.

Rather than going through the trouble of setting up pipes and hooking them up in a custom way, applications often end up just delegating the job to /bin/sh, which is not ideal for a number of reasons: argument formatting isn’t straightforward, injecting custom application-defined logic is hard, and so even simple tasks that might easily be achieved in the language itself end up shelling out to further external applications, and so on.

In an attempt to address that, I’ve spent some time working on an experimental Go package that is being released today: pipe.
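
From memory, usage is along these lines; treat the identifiers and import path below as assumptions, and check the package documentation for the real API:

package main

import (
        "launchpad.net/pipe" // assumed path for the package announced here
)

func main() {
        // Equivalent of: cat article.txt | tr -d ' ' > compact.txt
        p := pipe.Line(
                pipe.ReadFile("article.txt"),
                pipe.Exec("tr", "-d", " "),
                pipe.WriteFile("compact.txt", 0644),
        )
        if err := pipe.Run(p); err != nil {
                panic(err)
        }
}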

I hope you like it, and please drop me a note if you find any issues.

niemeyer

There are a number of common misconceptions in software development surrounding the idea of concurrency. These have been around for decades, and some of the issues have just been reinforced one more time in an otherwise interesting post on LinkedIn’s engineering blog that recommends their development framework.

Such issues may be observed throughout the post, but can be elucidated via this short paragraph:

As we saw with the Scala and JavaScript examples above, for very simple cases, the Evented (asynchronous) code is generally more complicated than Threaded (synchronous) code. However, in most real-world scenarios, you’ll have to make several I/O calls, and to make them fast, you’ll need to do them in parallel.

At a glance, this may look like a sane proposition. There’s agreement that an asynchronous API or framework is one that does not block the flow of execution when faced with a task that has a long or non-predictable deadline, and this coding style is harder for human beings to get right. For example, if you see code such as:

data = read(filename)

There’s less brain work to process and build on it than with so-called asynchronous logic such as:

read(filename, callback)

It’s also true that there are important interfaces that follow the asynchronous style to prevent resource waste. Some of these exist in the kernel I/O API.

So what’s the issue, then?

There are a few. The first one is the statement that to make I/O scale you have to do it in parallel. That’s clearly not true. Scalable I/O requires your program to not waste an irresponsible amount of memory and CPU per operation. This may be achieved with simple concurrent techniques, and concurrency is not parallelism.

This drives to the next point, which is the strong association between synchronous programming and threads. You can have synchronous programming, and its simplified mental model, without operating system threads. This can be done by having a compiler and runtime that is mindful about performance and resource consumption, building on the efficient interfaces to implement its abstractions.

These ideas have also been covered in this paper from 2003, including benchmark results that debunk the performance myth. What seems most interesting about the paper is that it theorizes precisely such a compiler and runtime, one that would allow “overcom[ing] limitations in current threads packages and improv[ing] safety, programmer productivity, and performance”, by using techniques such as dynamic stack growth, stack moving, cheaper synchronization, and compile-time data race detection.

That exact mix, including all of the properties described in the paper, is available today in the Go language. You can have synchronous programming, concurrency, parallelism, and performance. We live in the future.
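
As a closing illustration, here is a generic sketch (not from the paper or the post it responds to) of what that mix looks like: each goroutine below reads as plain synchronous code, while the runtime multiplexes all of them onto a small number of operating system threads. It assumes the standard log, net/http, and sync packages:

// fetchAll issues all requests concurrently, yet each one is written
// in the simple blocking style.
// (imports: log, net/http, sync)
func fetchAll(urls []string) {
        var wg sync.WaitGroup
        for _, url := range urls {
                wg.Add(1)
                go func(url string) {
                        defer wg.Done()
                        // Blocking in style, cheap in cost: one lightweight
                        // goroutine rather than one OS thread per request.
                        resp, err := http.Get(url)
                        if err != nil {
                                log.Println(err)
                                return
                        }
                        resp.Body.Close()
                }(url)
        }
        wg.Wait()
}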

niemeyer

Lately I’ve been considering the amount of waste we produce during software development, and how to increase the amount of recycled content. I’m not talking about actual trash, though, but rather about software development artifacts.

Over the years, we’ve learned about and put in practice several means for improving the quality and success rate of projects we create or contribute to. We have practices such as sprints to get people together with high communication bandwidth; we have code reviews for sharing knowledge and improving project quality; we’ve got technical leadership roles to mentor developers and guide the progress of projects; we’ve created kanban boards and burndown charts to help people visualize what they’re going through; and so on.

While all of that seems to have helped tremendously, there’s a sad fact about where we stand: the artifacts of most of these processes are local to their context, and very sensitive to time. That burndown chart is meaningless after it’s burned, and a kanban has no relevant history. Our technical leads indeed guide their teams, but their wisdom stays with the few people that had the chance to interact with them, and subjectively so. That brilliant code review from our best developers has a very limited audience, and rarely carries any meaning just days after it has been accomplished.

That last one is especially interesting. The process of reviewing code is an intense and very expensive task that takes a significant portion of the life of an active developer, and even then very little is carried forward as the outcome of that process. We have no effective means or even culture of sharing the generated wisdom with other teams. In fact, we rarely share these details even within the team itself. Why was that line changed like this? Why is an interface like that a bad idea? Who will instruct the new guy next week, and where did we record a bit of the wisdom of the brilliant guy who left the company recently?

Unfortunately there’s probably no easy solution for this problem. At this point, I mainly recognize that most of the efforts I’ve led to improve software development for the past several years had a very limited scope. The software itself became immediately better as a result of my efforts, its design became more sensible, and hopefully I contributed a bit to the growth of people around me, but at a company or even community-wide scope, all of these code reviews, sprints, and IRC conversations are buried, and only very rarely revived.

I want to start doing something about this, though. There must be a way to shape these conversations in a more reusable format; in a way that knowledge and agreement can be more proactively preserved and scattered. Perhaps it’s more about how than it is about what. Perhaps we just need to write more posts like this, and cover more topics related to daily development findings. Not sure. I’ll be thinking…
