Canonical Voices

Posts tagged with 'go'

niemeyer

mgo r2016.08.01

A major new release of the mgo MongoDB driver for Go is out, including the following enhancements and fixes:


Decimal128 support

Introduces the new bson.Decimal128 type, upcoming in MongoDB 3.4 and already available in the 3.3 development releases.
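
For a rough idea of how the new type might be used, here is a small sketch (not from the release notes; it assumes a local 3.3/3.4 server, the bson.ParseDecimal128 helper, and made-up collection names):

package main

import (
        "fmt"
        "log"

        "gopkg.in/mgo.v2"
        "gopkg.in/mgo.v2/bson"
)

func main() {
        // Parse a decimal string into the new Decimal128 type.
        price, err := bson.ParseDecimal128("129.99")
        if err != nil {
                log.Fatal(err)
        }

        session, err := mgo.Dial("localhost:27017")
        if err != nil {
                log.Fatal(err)
        }
        defer session.Close()

        // Storing the value natively requires a 3.3/3.4 server.
        items := session.DB("shop").C("items")
        if err := items.Insert(bson.M{"name": "keyboard", "price": price}); err != nil {
                log.Fatal(err)
        }

        var result struct {
                Name  string
                Price bson.Decimal128
        }
        if err := items.Find(bson.M{"name": "keyboard"}).One(&result); err != nil {
                log.Fatal(err)
        }
        fmt.Println(result.Name, result.Price)
}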

Extended JSON support

Introduces support for marshaling and unmarshaling MongoDB’s extended JSON specification, which extends the syntax of JSON with support for other BSON types and also allows for extensions such as unquoted map keys and trailing commas.

The new functionality is available via the bson.MarshalJSON and bson.UnmarshalJSON functions.
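
As a quick sketch of how those two functions might be exercised (assuming they mirror the usual Marshal/Unmarshal signatures):

package main

import (
        "fmt"
        "log"

        "gopkg.in/mgo.v2/bson"
)

func main() {
        doc := bson.M{"_id": bson.NewObjectId(), "when": bson.Now()}

        // Marshal into extended JSON, which can represent BSON-specific types
        // such as ObjectId and dates that plain JSON cannot express.
        data, err := bson.MarshalJSON(doc)
        if err != nil {
                log.Fatal(err)
        }
        fmt.Printf("%s\n", data)

        // And decode extended JSON back into a Go value.
        var decoded bson.M
        if err := bson.UnmarshalJSON(data, &decoded); err != nil {
                log.Fatal(err)
        }
        fmt.Println(decoded)
}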

New Iter.Done method

The new Iter.Done method allows querying whether an iterator is completely done or whether there is some likelihood of more items being returned on the next call to Iter.Next.

Feature implemented by Evan Broder.
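
A small sketch of how Done might be combined with Next (collection and field names are made up for illustration):

package main

import (
        "fmt"
        "log"

        "gopkg.in/mgo.v2"
        "gopkg.in/mgo.v2/bson"
)

// drain prints everything the iterator can deliver, using Done to tell
// "completely exhausted" apart from "nothing available right now".
func drain(it *mgo.Iter) error {
        for {
                var doc bson.M
                if it.Next(&doc) {
                        fmt.Println(doc)
                        continue
                }
                if it.Done() {
                        break // No more items will ever be returned.
                }
                // Not done: another Next call may block and still yield items,
                // as happens with tailable cursors on capped collections.
        }
        return it.Close()
}

func main() {
        session, err := mgo.Dial("localhost:27017")
        if err != nil {
                log.Fatal(err)
        }
        defer session.Close()

        if err := drain(session.DB("test").C("events").Find(nil).Iter()); err != nil {
                log.Fatal(err)
        }
}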

Retry on Upsert key-dup errors

Curiously, as documented, the server can actually report a key conflict error on upserts. The driver will now retry a number of times in these situations.

Fix submitted by Christian Muirhead.

Switched test suite to daemontools

Support for supervisord has been removed and replaced by daemontools, as the latter is easier to support across environments.

Travis CI support

All pull requests and master branch changes are now being tested on several server releases.

Initial collation support in indexes

Support for collation is being widely introduced in the 3.4 release, with experimental support already visible in 3.3.

This release introduces the Index.Collation field, which may be set to a mgo.Collation value.
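
For illustration, a sketch of what defining such an index might look like (the pointer form and the Locale/Strength options are assumptions here, and the collection name is made up):

package main

import (
        "log"

        "gopkg.in/mgo.v2"
)

func main() {
        session, err := mgo.Dial("localhost:27017")
        if err != nil {
                log.Fatal(err)
        }
        defer session.Close()

        // A case-insensitive index on "name": strength 2 compares base
        // characters and accents while ignoring case.
        people := session.DB("test").C("people")
        err = people.EnsureIndex(mgo.Index{
                Key: []string{"name"},
                Collation: &mgo.Collation{
                        Locale:   "en",
                        Strength: 2,
                },
        })
        if err != nil {
                log.Fatal(err)
        }
}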

Removed unnecessary unmarshal when running commands

Code which marshaled the command result for debugging purposes was being run even when debug mode was off. This has been fixed.

Reported by John Morales.

Fixed Secondary mode over mongos

Secondary mode wasn’t behaving properly when connecting to the cluster over a mongos. This has been fixed.

Reported by Gabriel Russell.

Fixed BuildInfo.VersionAtLeast

The VersionAtLeast comparison was broken when comparing certain strings. Logic was fixed and properly tested.

Reported by John Morales.

Fixed unmarshaling of ,inline structs on 1.6

Go 1.6 changed the behavior of unexported anonymous struct fields.

Livio Soares submitted a fix addressing that.

Fixed Apply on results containing errmsg

The Apply method was mistaking a result document containing an errmsg field for an actual error.

Reported by Moshe Revah.

Improved documentation

Several contributions on documentation improvements and fixes.

Read more
niemeyer

One of my “official side projects” is the Go language driver for MongoDB, started a few years back while looking for a way to store data conveniently from the Go language while leaving aside some of the problems we have mapping code into table-based approaches.

Nowadays this is used in projects at Canonical, in the MongoDB tooling itself, and also in some of my own personal projects. For the personal server-side projects I’ve been using docker containers to conveniently deploy the database tooling, but this week when I went to update some of my older images pulled from the docker hub I found that the docker installed on that server was a bit too old, so it was time to update the servers.

But that got me wondering: what if I replaced all those containers by snaps? This would allow me to keep the convenience and safety of the isolation, while making a lot of things simpler. Unlike docker, snaps make the installed tooling more easily accessible to the host system (bins in the search path, processes as actual children from shell and systemd, etc), and can even use system resources directly assuming interfaces allow it (e.g. home files).

So I got into that, and perhaps ended up overdoing it a little bit. Based on my experience testing the driver, I really appreciate having all versions available for playing with, rather than just the latest one. This is what my local development system looks like right now:

$ snap list | grep 'Name\|mongo'
Name                  Version               Rev     Developer   Notes
mongo22               2.2.7                 1       niemeyer    -
mongo24               2.4.14                1       niemeyer    -
mongo26               2.6.12                1       niemeyer    -
mongo30               3.0.12                1       niemeyer    -
mongo32               3.2.7                 1       niemeyer    -
mongo33               3.3.9                 1       niemeyer    -

These are backed by upstream tarballs (snapcraft downloaded them pre-built from mongodb.com), and are all installed, running, and with tooling available for playing with. They are also published to the snap store which means you can easily make use of them locally as well. If you want to do that, here is a crash course on snaps and on how I packaged the database together specifically.

If you’re using Ubuntu, update to release 16.04, which has snaps working out of the box. For other distributions, have a look at snapcraft.io to see which command must be run to make it available.

Then, pick your version and install it. For example, if you want to play with the features just announced at MongoDB World this week, you want the unstable version 3.3.9:

$ snap install mongo33
93.59 MB / 93.59 MB [============================] 100.00 % 1.33 MB/s 

Name     Version  Rev  Developer  Notes
mongo33  3.3.9    1    niemeyer   -

After that you already have the tooling available in your path (assuming you have /snap/bin there), and the daemon started. So go ahead and fire the client to talk to the database:

$ mongo33
MongoDB shell version: 3.3.9
connecting to: 127.0.0.1:33017/test
MongoDB server version: 3.3.9
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
Server has startup warnings: 
(...) 
(...) ** NOTE: This is a development version (3.3.9) of MongoDB.
(...) **       Not recommended for production.
> 

Note that the server started on the non-standard port 33017 to allow multiple versions to be running together. The pattern is that the snap mongoNN will run on port NN017.

The tools installed also follow a similar pattern. Where upstream uses mongo, mongod, mongodump, etc, the snaps will have them under /snap/bin as mongo33, mongo33.d, mongo33.dump, and so on.

The systemd unit, if you want to interact with it for restarting, improving, debugging, etc, is named snap.mongo33.mongod.service, as usual for any snap that contains a daemon (snap.<snap name>.<app name>.service).

The data for the process that runs under systemd lives in the standard writable area that the confinement system opens up for snaps in general – in this specific case /var/snap/mongo33/common/. The common directory is reused untouched on updates across snap revisions, which implies snapd won’t copy the data over when doing such updates. This compromises slightly the update safety, but is worth it for bulk data that we don’t want to copy on every snap refresh.

So this ended up quite nicely. Next up is the Go server code itself, which will be packed as a snap too, for deploying it into the servers in a similar way. The exact details for how these snaps are being built are publicly available too.

If you want a hand doing something similar, we have some helpful people at #snappy on FreeNode and also snapcraft@lists.snapcraft.io.

@gniemeyer

Read more
niemeyer

Over the last several months there has been noticeable and growing pain associated with the evolving integration tests around snapd, and given the project goal of being a cross-distribution platform, we are very keen on solving this problem appropriately so that stability is guaranteed everywhere.

With that mindset a more focused effort was made over the last few weeks to produce a tool that can get the project out of those problems, and onto a runway of more pleasant stability. Despite the short amount of time, I’m very happy about the Spread project which resulted from this effort.

Spread is not Jenkins or Travis, and is not a language or library either. Spread is a tool that will very conveniently ship your code to one or more systems, in parallel, and then offer the right set of options so you can run whatever you need to run to make sure the logic is working, and drive it all from the local system. That implies you can run Spread inside Travis, Jenkins, or your terminal, in a similar way to how your unit tests work.

Here is a short list of interesting facts about Spread:

  • Full-system tests with on-demand machine allocation.
  • Multi-backend with Linode and LXD (for local runs) out of the box for now.
  • Multi-language since it can run arbitrary remote code.
  • Agent-less and driven via embedded ssh (kudos to Go team).
  • Convenient harness with project+backend+suite+test prepare and restore scripts.
  • Variants feature for test duplication without copy & paste.
  • Great debugging support – add -debug and stop with a shell inside every failure.
  • Reuse of servers – server allocation is fast, but not allocating is faster.
  • Reasonable test outputs with the shell’s +x mode on failures.
  • … and so forth.

This is all well documented, so I’ll just provide one example here to offer a real taste of what the system feels like.

This is spread.yaml, put in the project root to define the basics:

project: spread

backends:
    lxd:
        systems:
            - ubuntu-16.04
            - ubuntu-14.04

path: /home/test

prepare: |
    echo Entering project...
restore: |
    echo Leaving project...

suites:
    tests/: 
        summary: Integration tests
        prepare: |
            echo Entering suite...
        restore: |
            echo Leaving suite...

The suite name is also the path under which the tests are found.

Then, this is tests/hello/task.yaml:

summary: Greet the world
prepare: |
    echo "Entering task..."
restore: |
    echo "Leaving task..."
environment:
    FOO/a: one
    FOO/b: two
execute: |
    echo "Hello world!"
    [ $FOO = one ] || exit 1

The outcome should be almost obvious (intended feature :-). The one curious detail here is the FOO/a and FOO/b environment variables. This is how to introduce variants, which means this one test will in fact become two: first with FOO=one, and then with FOO=two. Now consider that such environment variables can be defined at any level – project, backend, suite, and task – and imagine how easy it is to test small variations without any copy & paste. After cascading takes place (project→backend→suite→task) all environment variables using a given variant key will be present at once on the same execution.

Now let’s try to run this configuration, including the -debug flag so we get a shell on the failures. Note how with a single test we get four different jobs, two variants over two systems, with the variant b failing as instructed:

$ spread -debug

2016/06/11 19:09:27 Allocating lxd:ubuntu-14.04...
2016/06/11 19:09:27 Allocating lxd:ubuntu-16.04...
2016/06/11 19:09:41 Waiting for LXD container to have an address...
2016/06/11 19:09:43 Waiting for LXD container to have an address...
2016/06/11 19:09:44 Allocated lxd:ubuntu-14.04.
2016/06/11 19:09:44 Connecting to lxd:ubuntu-14.04...
2016/06/11 19:09:48 Allocated lxd:ubuntu-16.04.
2016/06/11 19:09:48 Connecting to lxd:ubuntu-16.04...
2016/06/11 19:09:52 Connected to lxd:ubuntu-14.04.
2016/06/11 19:09:52 Sending project data to lxd:ubuntu-14.04...
2016/06/11 19:09:53 Connected to lxd:ubuntu-16.04.
2016/06/11 19:09:53 Sending project data to lxd:ubuntu-16.04...

2016/06/11 19:09:54 Error executing lxd:ubuntu-14.04:tests/hello:b :
-----
+ echo Hello world!
Hello world!
+ [ two = one ]
+ exit 1
-----

2016/06/11 19:09:54 Starting shell to debug...

lxd:ubuntu-14.04 ~/tests/hello# echo $FOO
two
lxd:ubuntu-14.04 ~/tests/hello# cat /etc/os-release | grep ^PRETTY
PRETTY_NAME="Ubuntu 14.04.4 LTS"
lxd:ubuntu-14.04 ~/tests/hello# exit
exit

2016/06/11 19:09:55 Error executing lxd:ubuntu-16.04:tests/hello:b :
-----
+ echo Hello world!
Hello world!
+ [ two = one ]
+ exit 1
-----

2016/06/11 19:09:55 Starting shell to debug...

lxd:ubuntu-16.04 ~/tests/hello# echo $FOO
two
lxd:ubuntu-16.04 ~/tests/hello# cat /etc/os-release | grep ^PRETTY
PRETTY_NAME="Ubuntu 16.04 LTS"
lxd:ubuntu-16.04 ~/tests/hello# exit
exit


2016/06/11 19:10:33 Discarding lxd:ubuntu-14.04 (spread-129)...
2016/06/11 19:11:04 Discarding lxd:ubuntu-16.04 (spread-130)...
2016/06/11 19:11:05 Successful tasks
2016/06/11 19:11:05 Aborted tasks: 0
2016/06/11 19:11:05 Failed tasks: 2
    - lxd:ubuntu-14.04:tests/hello:b
    - lxd:ubuntu-16.04:tests/hello:b
error: unsuccessful run

This demonstrates many of the stated goals (parallelism, clarity, convenience, debugging, …) while running on a local system. Running on a remote system is just as easy by using an appropriate backend. The snapd project on GitHub, for example, is hooked up on Travis to run Spread and then ship its tests over to Linode. Here is a real run output with the initial tests being ported, and a basic smoke test.

If you like what you see, by all means please go ahead and make good use of it.

We’re all for more stability and sanity everywhere.

@gniemeyer

Read more
niemeyer

As much anticipated, this week Ubuntu 16.04 LTS was released with integrated support for snaps on classic Ubuntu.

Snappy 2.0 is a modern software platform that includes the ability to define rich interfaces between snaps that control their security and confinement, comprehensive observation and control of system changes, completion and undoing of partial system changes across restarts/reboots/crashes, macaroon-based authentication for local access and store access, preliminary development mode, a polished filesystem layout and CLI experience, modern sequencing of revisions, and so forth.

The previous post in this series described the reassuring details behind how snappy does system changes. This post will now cover Snappy interfaces, the mechanism that controls the confinement and integration of snaps with other snaps and with the system itself.

A snap interface gives one snap the ability to use resources provided by another snap, including the operating system snap (ubuntu-core is itself a snap!). That’s quite vague, and intentionally so. Software interacts with other software for many reasons and in diverse ways, and Snappy is a platform that has to mediate all of that according to user needs.

In practice, though, the mechanism is straightforward and pleasant to deal with. Without any snaps in the system, there are no interfaces available:

% sudo snap interfaces
error: no interfaces found

If we install the ubuntu-core snap alone (done implicitly when the first snap is installed), we can already see some interface slots being provided by it, but no plugs connected to them:

% sudo snap install ubuntu-core
75.88 MB / 75.88 MB [=====================] 100.00 % 355.56 KB/s 

% snap interfaces
Slot                 Plug
:firewall-control    -
:home                -
:locale-control      -
(...)
:opengl              -
:timeserver-control  -
:timezone-control    -
:unity7              -
:x11                 -

The syntax is <snap>:<slot> and <snap>:<plug>. The lack of a snap name is a shorthand notation for slots and plugs on the operating system snap.

Now let’s install an application:

% sudo snap install ubuntu-calculator-app
120.01 MB / 120.01 MB [=====================] 100.00 % 328.88 KB/s 

% snap interfaces
Slot                 Plug
:firewall-control    -
:home                -
:locale-control      -
(...)
:opengl              ubuntu-calculator-app
:timeserver-control  -
:timezone-control    -
:unity7              ubuntu-calculator-app
:x11                 -

At this point the application should work fine. But let’s instead see what happens if we take away one of these interfaces:

% sudo snap disconnect \
             ubuntu-calculator-app:unity7 ubuntu-core:unity7 

% /snap/bin/ubuntu-calculator-app.calculator
QXcbConnection: Could not connect to display :0

The installed application depends on unity7, which is itself based on X11, to display itself properly. When we disconnected the interface that gave it permission to access these resources, the application was unable to touch them.

The security minded will observe that X11 is not in fact a secure protocol. A number of system abuses are possible when we hand an application this permission. Other interfaces such as home would give the snap access to every non-hidden file in the user’s $HOME directory (those that do not start with a dot), which means a malicious application might steal personal information and send it over the network (assuming it also defines a network plug).

Some might be surprised that this is the case, but this is a misunderstanding about the role of snaps and Snappy as a software platform. When you install software from the Ubuntu archive, that’s a statement of trust in the Ubuntu and Debian developers. When you install Google’s Chrome or MongoDB binaries from their respective archives, that’s a statement of trust in those developers (these have root on your system!). Snappy is not eliminating the need for that trust, as once you give a piece of software access to your personal files, web camera, microphone, etc, you need to believe that it won’t be using those allowances maliciously.

The point of Snappy’s confinement in that picture is to enable a software ecosystem that can control exactly what is allowed and to whom in a clear and observable way, in addition to the same procedural care that we’ve all learned to appreciate in the Linux world, not instead of it. Preventing people from using all relevant resources in the system would simply force them to use that same software over less secure mechanisms instead of fixing the problem.

And what we have today is just the beginning. These interfaces will soon become much richer and more fine grained, including resource selection (e.g. which serial port?), and some of them will disappear completely in favor of more secure choices (Unity 8, for instance).

These are exciting times for Ubuntu and the software world.

@gniemeyer

Read more
niemeyer

As announced last Saturday, Snappy Ubuntu Core 2.0 has just been tagged and made its way into the archives of Ubuntu 16.04, which is due for the final release in the next days. So this is a nice time to start covering interesting aspects of what is being made available in this release.

A good choice for the first post in this series is talking about how snappy performs changes in the system, as that knowledge will be useful in observing and understanding what is going on in your snappy platform. Let’s start with the first operation you will likely do when first interacting with the snappy platform — install:

% sudo snap install ubuntu-calculator-app
120.01 MB / 120.01 MB [===============================] 100.00 % 1.45 MB/s

This operation is traditionally done on analogous systems in an ephemeral way. That is, the software has either a local or a remote database of options to install, and once the change is requested the platform of choice will start acting on it with all state for the modification kept in memory. If something doesn’t go so well, such as a reboot or even a crash, the modification is lost... in the best case. Besides being completely lost, it might also be partially applied to the system, with some files spread through the filesystem, and perhaps some of the involved hooks run. After the restart, the partial state remains until some manual action is taken.

Snappy instead has an engine that tracks and controls such changes in a persistent manner. All the recent changes, pending or not, may be observed via the API and the command line:

% snap changes
ID   Status  ...  Summary
1    Done    ...  Install "ubuntu-calculator-app" snap

(the spawn and ready date/time columns have been hidden for space)

The output gives an overview of what happened recently in the system, whether pending or not. If one of these changes is unintendedly interrupted for whatever reason, the daemon will attempt to continue the requested change at the next opportunity.

Continuing is not always possible, though, because there are external factors that such a change will generally depend upon (the snap being available, the system state remaining similar, etc). In those cases, the change will fail, and any relevant modifications performed on the system while attempting to accomplish the defined goal will be undone.

Because such partial states are possible and need to be handled properly by the system, changes are in fact broken down into finer grained tasks which are also tracked and observable while in progress or after completion. Using the change ID obtained in the former command, we can get a better picture of what that change involved:

% snap changes 1
Status ...  Summary
Done   ...  Download snap "ubuntu-core" from channel "stable"
Done   ...  Mount snap "ubuntu-core"
Done   ...  Copy snap "ubuntu-core" data
Done   ...  Setup snap "ubuntu-core" security profiles
Done   ...  Make snap "ubuntu-core" available
Done   ...  Download snap "ubuntu-calculator-app"
Done   ...  Mount snap "ubuntu-calculator-app"
Done   ...  Copy snap "ubuntu-calculator-app" data
Done   ...  Setup snap "ubuntu-calculator-app" security profiles
Done   ...  Make snap "ubuntu-calculator-app" available

(the spawn and ready date/time columns have been hidden for space)

Here we can observe an interesting implementation detail of the snappy integration into Ubuntu: the ubuntu-core snap is at the moment ~80MB, and contains the software bundled with the snappy platform itself. Instead of having it pre-installed, it’s only pulled in when the first snap is installed.

Another interesting implementation detail that surfaces here is the fact that snaps are mounted rather than copied into the system as traditional packaging systems do, and they’re mounted read-only. That means the operation of having the content of a snap in the filesystem is instantaneous and atomic, and so is removing it. There are no partial states for that specific aspect, and the content cannot be modified.

Coming back into the task list, we can see above that all the tasks that the change involved are ready and did succeed, as expected from the earlier output we had seen for the change itself. Being more of an introspection view, though, this tasks view will often also show logs and error messages for the individual tasks, whether in progress or not.

The following view presents a similar change but with an error due to an intentionally corrupted system state that snappy could not recover from (path got a busy mountpoint hacked in):

% sudo snap install xkcd-webserver
[\] Make snap "xkcd-webserver" available to the system
error: cannot perform the following tasks:
- Make snap "xkcd-webserver" available to the system
  (symlink 13 /snap/xkcd-webserver/current: file exists)

% sudo snap changes 2
Status  ...  Summary
Undone  ...  Download snap "xkcd-webserver" from channel "stable"
Undone  ...  Mount snap "xkcd-webserver"
Undone  ...  Copy snap "xkcd-webserver" data
Undone  ...  Setup snap "xkcd-webserver" security profiles
Error   ...  Make snap "xkcd-webserver" available to the system

.................................................................
Make snap "xkcd-webserver" available to the system

2016-04-20T14:14:30-03:00 ERROR symlink 13
    /snap/xkcd-webserver/current: file exists

Note how reassuring that report looks. It says exactly what went wrong, at which stage of the process, and it also points out that all the prior tasks that previously succeeded had their modifications undone. The security profiles were removed, the mount point was taken down, and so on.

This sort of behavior is to be expected of modern operating systems, and is fundamental when considering systems that should work unattended. Whether in a single execution or across restarts and reboots, changes either succeed or they don’t, and the system remains consistent, reliable, observable, and secure.

In the next blog post we’ll see details about the interfaces feature in snappy, which controls aspects of confinement and integration between snaps.

@gniemeyer

Read more
niemeyer

Ubuntu and Snappy community, it’s time to celebrate!

After another intense week and a long Saturday focused on observing and fine tuning the user experience, the development team is proud to announce that Snappy 2.0 has been tagged. As has been recently announced, this release of Snappy Ubuntu Core will be available inside Ubuntu proper, extending it with new capabilities in a seamless manner.

This is an important moment for the project, as it materializes most of the agreements that were made over the past year, and does so with the promise of stability. So you may trust that the important external APIs of the project (filesystem layout, snap format, REST API, etc) will not change from now on.

The features that went into this release are way too rich for me to describe in this post, but you may expect us to be covering the many interesting aspects of Snappy 2.0 in the coming weeks. Rich interfaces between snaps that control security and confinement, comprehensive observation and control of system changes, completion and undoing of partial system changes across restarts/reboots/crashes, macaroon-based authentication for local access and store access, preliminary development mode, a polished filesystem layout and CLI experience, modern sequencing of revisions, and so forth.

Still, the most remarkable aspect about this release to me is that it is a solid foundation. This release exports APIs and is constructed in a way to be proud of, and together with this team I will be delighted to spend the foreseeable future building a platform the world has never seen.

As a final note, I can’t thank the development team enough for the dedication they have put into the project over the past year, and especially over these last two weeks. You were the make it or break it of this project, and you made it.

Thank you!

@gniemeyer

Read more
niemeyer

mgo r2016.02.04

This is one of the most packed releases of the mgo driver for Go in recent times. There are new features, important fixes, and relevant internal restructuring to support the ongoing server improvements.

As usual for the driver, compatibility is being preserved both with old applications and with old servers, so updating should be a smooth experience.

Release r2016.02.04 of mgo includes the following changes which were requested, proposed, and performed by a great community.

Enjoy!

Exposed access to individual bulk error cases

Accessing the individual errors obtained while attempting a set of bulk operations is now possible via the new mgo.BulkError error type and its Cases method, which returns a slice of mgo.BulkErrorCase values indexed according to the operation order used. There are documented server limitations for MongoDB version 2.4 and older.

This change completes the bulk API. It can now perform optimized bulk queries when communicating with recent servers (MongoDB 2.6+), perform the same operations in older servers using compatible but less performant options, and in both cases provide more details about obtained errors.

Feature first requested by pjebs.
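
To give a feel for the API, here is a sketch of inspecting individual cases after a failed run (it assumes each case exposes Index and Err fields; the duplicate key and collection name are made up):

package main

import (
        "fmt"
        "log"

        "gopkg.in/mgo.v2"
        "gopkg.in/mgo.v2/bson"
)

func main() {
        session, err := mgo.Dial("localhost:27017")
        if err != nil {
                log.Fatal(err)
        }
        defer session.Close()

        bulk := session.DB("test").C("people").Bulk()
        bulk.Unordered() // Keep going past individual failures.
        bulk.Insert(bson.M{"_id": 1, "name": "alice"})
        bulk.Insert(bson.M{"_id": 1, "name": "bob"}) // Duplicate key on purpose.

        if _, err := bulk.Run(); err != nil {
                if bulkErr, ok := err.(*mgo.BulkError); ok {
                        // Each case is indexed according to the operation order used.
                        for _, c := range bulkErr.Cases() {
                                fmt.Printf("operation %d failed: %v\n", c.Index, c.Err)
                        }
                        return
                }
                log.Fatal(err)
        }
}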

New fields in CollectionInfo

The CollectionInfo type has new fields for dealing with the recently introduced document validation MongoDB feature, and also the storage engine-specific options.

Features requested by nexcode and pjebs.

New Find and GetMore command support

MongoDB is moving towards replacing the old wire protocol with a command-based implementation, and every recent release has introduced changes around that. This release of mgo introduces support for the find and getMore commands which were added to MongoDB 3.2. These are exercised whenever querying the database or iterating over queried results.

Previous server releases will continue to use the classical mechanism, and the two approaches should be compatible. Please report any issues in that regard.

Do not fallback to Monotonic mode improperly

Recent driver changes adapted the Pipe.Iter, Collection.Indexes, and Database.CollectionNames methods to work with recent server releases. These changes also introduced a bug that could cause the driver to talk to a secondary server improperly, when that operation was the first operation performed on the session. This has been fixed.

Problem reported by Sundar.

Fix crash in new bulk update API

The new methods introduced in the bulk update API in the last release were crashing when a connection error occurred.

Fix contributed by Maciej Galkowski.

Enable TCP keep-alives for all connections

As requested by developers, TCP keep-alives are now enabled for all connections. No timing is specified, so the default operating system setting will be used.

Feature requested by Hunor Kovács, Berni Varga, and Martin Garton.

ChangeInfo.Updated now behaves as documented

The ChangeInfo.Updated field is documented to report the number of documents that were changed, but in fact that was not possible in old releases of the driver, since the server did not provide that information. Instead, the server only reported the number of documents matched by the selection document.

This has been fixed, so starting with MongoDB 2.6, the driver will behave as documented, and report the number of documents that were indeed updated. This is related to the next driver change:

New ChangeInfo.Matched field

The new ChangeInfo.Matched field will report the number of documents that matched the selection document, whether the performed change was a removal, an update, or an upsert.

Feature requested by Žygimantas and other list members.
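
A short sketch of how the two counters relate on a multi-document update (collection and field names are purely illustrative):

package main

import (
        "fmt"
        "log"

        "gopkg.in/mgo.v2"
        "gopkg.in/mgo.v2/bson"
)

func main() {
        session, err := mgo.Dial("localhost:27017")
        if err != nil {
                log.Fatal(err)
        }
        defer session.Close()

        people := session.DB("test").C("people")

        // With MongoDB 2.6+ the driver can report both how many documents
        // matched the selector and how many were actually modified.
        info, err := people.UpdateAll(
                bson.M{"team": "engineering"},
                bson.M{"$set": bson.M{"active": true}},
        )
        if err != nil {
                log.Fatal(err)
        }
        fmt.Printf("matched %d, updated %d\n", info.Matched, info.Updated)
}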

ObjectId now supports TextMarshaler/TextUnmarshaler

ObjectId now knows how to marshal/unmarshal itself as text in hex format when using its encoding.TextMarshaler and TextUnmarshaler interfaces.

Contributed by Jack Spirou.
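
A tiny sketch of the round trip through the text interfaces:

package main

import (
        "fmt"
        "log"

        "gopkg.in/mgo.v2/bson"
)

func main() {
        id := bson.NewObjectId()

        // MarshalText renders the id in its hex form, suitable for any
        // encoder that understands encoding.TextMarshaler.
        text, err := id.MarshalText()
        if err != nil {
                log.Fatal(err)
        }
        fmt.Println(string(text))

        // And UnmarshalText parses the hex text back into an ObjectId.
        var decoded bson.ObjectId
        if err := decoded.UnmarshalText(text); err != nil {
                log.Fatal(err)
        }
        fmt.Println(decoded == id) // true
}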

Created GridFS index is now unique

The index on {“files_id”, “n”} automatically created for GridFS chunks when a file write completes now enforces the uniqueness of the key.

Contributed by Wisdom Omuya.

Use SIGINT in dbtest.DBServer

The dbtest.DBServer was stopping the server with SIGKILL, which would not give it enough time for a clean shutdown. It will now stop it with SIGINT.

Contributed by Haijun Wang.

Ancient field tag logic dropped

The very old field tag format parser, in use several years back even before Go 1 was released, was still around in the code base for no benefit.

This has been removed by Alexandre Cesaro.

Documentation improvements

Documentation improvements were contributed by David Glasser, Ryan Chipman, and Shawn Smith.

Fixed BSON skipping of incorrect slice types

The BSON parser was failing when an array value was unmarshaled into an existing field that was not of an appropriate type for such values. This has been fixed so that the bogus field is ignored and the value skipped.

Fix contributed by Gabriel Russel.

Read more
Dustin Kirkland


Canonical is delighted to sponsor ContainerCon 2015, a Linux Foundation event in Seattle next week, August 17-19, 2015. It's quite exciting to see the A-list of sponsors, many of them newcomers to this particular technology, teeming with energy around containers.

From chroots to BSD Jails and Solaris Zones, the concepts behind containers were established decades ago, and in fact traverse the spectrum of server operating systems. At Canonical, we've been working on containers in Ubuntu for more than half a decade, providing a home and resources for stewardship and maintenance of the upstream Linux Containers (LXC) project since 2010.

Last year, we publicly shared our designs for LXD -- a new stratum on top of LXC that brings the advantages of a traditional hypervisor into the faster, more efficient world of containers.

Those designs are now reality, with the open source Golang code readily available on Github, Ubuntu packages available in a PPA for all supported releases of Ubuntu, and already in the Ubuntu 15.10 beta development tree. With ease, you can launch your first LXD containers in seconds, following this simple guide.

LXD is a persistent daemon that provides a clean RESTful interface to manage (start, stop, clone, migrate, etc.) any of the containers on a given host.

Hosts running LXD are handily federated into clusters of container hypervisors, and can work as Nova Compute nodes in OpenStack, for example, delivering Infrastructure-as-a-Service cloud technology at lower costs and greater speeds.

Here, LXD and Docker are quite complementary technologies. LXD furnishes a dynamic platform for "system containers" -- containers that behave like physical or virtual machines, supplying all of the functionality of a full operating system (minus the kernel, which is shared with the host). Such "machine containers" are the core of IaaS clouds, where users focus on instances with compute, storage, and networking that behave like traditional datacenter hardware.

LXD runs perfectly well along with Docker, which supplies a framework for "application containers" -- containers that enclose individual processes that often relate to one another as pools of micro services and deliver complex web applications.

Moreover, the Zen of LXD is the fact that the underlying container implementation is actually decoupled from the RESTful API that drives LXD functionality. We are most excited to discuss next week at ContainerCon our work with Microsoft around the LXD RESTful API, as a cross-platform container management layer.

Ben Armstrong, a Principal Program Manager Lead at Microsoft on the core virtualization and container technologies, has this to say:
“As Microsoft is working to bring Windows Server Containers to the world – we are excited to see all the innovation happening across the industry, and have been collaborating with many projects to encourage and foster this environment. Canonical’s LXD project is providing a new way for people to look at and interact with container technologies. Utilizing ‘system containers’ to bring the advantages of container technology to the core of your cloud infrastructure is a great concept. We are looking forward to seeing the results of our engagement with Canonical in this space.”
Finally, if you're in Seattle next week, we hope you'll join us for the technical sessions we're leading at ContainerCon 2015, including: "Putting the D in LXD: Migration of Linux Containers", "Container Security - Past, Present, and Future", and "Large Scale Container Management with LXD and OpenStack". Details are below.
Date: Monday, August 17 • 2:20pm - 3:10pm
Title: Large Scale Container Management with LXD and OpenStack
Speaker: Stéphane Graber
Abstract: http://sched.co/3YK6
Location: Grand Ballroom B
Schedule: http://sched.co/3YK6
Date: Wednesday, August 19 • 10:25am - 11:15am
Title: Putting the D in LXD: Migration of Linux Containers
Speaker: Tycho Andersen
Abstract: http://sched.co/3YTz
Location: Willow A
Schedule: http://sched.co/3YTz
Date: Wednesday, August 19 • 3:00pm - 3:50pm
Title: Container Security - Past, Present and Future
Speaker: Serge Hallyn
Abstract: http://sched.co/3YTl
Location: Ravenna
Schedule: http://sched.co/3YTl
Cheers,
Dustin

Read more
niemeyer

mgo r2015.05.29

Another release of mgo hits the shelves, just in time for the upcoming MongoDB World event.

A number of relevant improvements have landed since the last stable release:


New package for having a MongoDB server in test suites

The new gopkg.in/mgo.v2/dbtest package makes it comfortable to plug a real MongoDB server into test suites. Its simple interface consists of a handful of methods, which together allow obtaining a new mgo session to the server, wiping all existent data, or stopping it altogether once the suite is done. This design encourages an efficient use of resources, by only starting the server if necessary, and then quickly cleaning data across runs instead of restarting the server.

See the documentation for more details.

(UPDATE: The type was originally testserver.TestServer and was renamed dbtest.DBServer to improve readability within test suites. The old package and name remain working for the time being, to avoid breakage)
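
For a feel of the interface, here is a rough test sketch (it assumes the DBServer methods SetPath, Session, Wipe, and Stop, a mongod binary in $PATH, and made-up package and collection names):

package example_test

import (
        "io/ioutil"
        "testing"

        "gopkg.in/mgo.v2/bson"
        "gopkg.in/mgo.v2/dbtest"
)

func TestInsertAndCount(t *testing.T) {
        // The server is only started lazily, when a session is first requested.
        var server dbtest.DBServer
        dir, err := ioutil.TempDir("", "dbtest")
        if err != nil {
                t.Fatal(err)
        }
        server.SetPath(dir)
        defer server.Stop()

        session := server.Session()
        defer session.Close()

        c := session.DB("test").C("items")
        if err := c.Insert(bson.M{"n": 1}); err != nil {
                t.Fatal(err)
        }
        if n, _ := c.Count(); n != 1 {
                t.Fatalf("want 1 document, got %d", n)
        }

        // Wipe all data between tests instead of restarting the server.
        server.Wipe()
}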

Full support for write commands

This release includes full support for the write commands first introduced in MongoDB 2.6. This was done in a compatible way, both in the sense that the driver will continue to use the wire protocol to perform writes on older servers, and also in the sense that the public API has not changed.

Tests for the new code path have been successfully run against MongoDB 2.6 and 3.0. Even then, as an additional measure to prevent breakage of existent applications, in this release the new code path will be enabled only when interacting with MongoDB 3.0+. The next stable release should then enable it for earlier releases as well, after some additional real world usage took place.

New ParseURL function

As perhaps one of the most requested features of all time, there’s now a public ParseURL function which allows code to parse a URL in any of the formats accepted by Dial into a DialInfo value, which may then be provided back to DialWithInfo.
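
In practice that looks roughly like this (the connection string and timeout are only examples):

package main

import (
        "log"
        "time"

        "gopkg.in/mgo.v2"
)

func main() {
        // Parse any URL accepted by Dial, tweak the resulting DialInfo,
        // and then dial with it.
        info, err := mgo.ParseURL("mongodb://user:pass@localhost:27017/mydb")
        if err != nil {
                log.Fatal(err)
        }
        info.Timeout = 5 * time.Second

        session, err := mgo.DialWithInfo(info)
        if err != nil {
                log.Fatal(err)
        }
        defer session.Close()
}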

New BucketSize field in mgo.Index

The new BucketSize field in mgo.Index supports the use of indexes of type geoHaystack.

Contributed by Deiwin Sarjas.

Handle Setter and Getter interfaces in slice types

Slice types that implement the Getter and/or Setter interfaces will now be custom encoded/decoded as usual for other types.

Problem reported by Thomas Bouldin.

New Query.SetMaxTime method

The new Query.SetMaxTime method enables the use of the special $maxTimeMS query parameter, which constrains the query to stop after running for the specified time. See the method documentation for details.

Feature implemented by Min-Young Wu.
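
A sketch of its use (collection and field names are made up):

package main

import (
        "fmt"
        "log"
        "time"

        "gopkg.in/mgo.v2"
        "gopkg.in/mgo.v2/bson"
)

func main() {
        session, err := mgo.Dial("localhost:27017")
        if err != nil {
                log.Fatal(err)
        }
        defer session.Close()

        // Ask the server to abort this query if it runs for longer than
        // 500ms, via the $maxTimeMS parameter that SetMaxTime sets.
        var results []bson.M
        err = session.DB("test").C("events").
                Find(bson.M{"kind": "login"}).
                SetMaxTime(500 * time.Millisecond).
                All(&results)
        if err != nil {
                log.Fatal(err)
        }
        fmt.Println(len(results), "documents")
}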

New Query.Comment method

The new Query.Comment method may be used to annotate queries for further analysis within the profiling data.

Feature requested by Mike O’Brien.

sasl sub-package moved into internal

The sasl sub-package is part of the implementation of SASL support in mgo, and is not meant to be accessed directly. For that reason, that package was moved to internal/sasl, which according to recent Go conventions is meant to explicitly flag that this is part of mgo’s implementation rather than its public API.

Improvements in txn’s PurgeMissing

The PurgeMissing logic was improved to work better in older server versions which retained all aggregation pipeline results in memory.

Improvements made by Menno Smits.

Fix connection statistics bug

The change prevents the number of slave connections from going negative in a particular case.

Fix by Oleg Bulatov.

EnsureIndex support for createIndexes command

The EnsureIndex method will now use the createIndexes command where available.

Feature requested by Louisa Berger.

Support encoding byte arrays

Support encoding byte arrays in an equivalent way to byte slices.

Contributed by Tej Chajed.

Read more
Dustin Kirkland

Gratuitous picture of my pets, the day after we rescued them
The PetName libraries (Shell, Python, Golang) can generate infinite combinations of human readable UUIDs


Some Background

In March 2014, when I first started looking after MAAS as a product manager, I raised a minor feature request in Bug #1287224, noting that the random, 5-character hostnames that MAAS generates are not ideal. You can't read them or pronounce them or remember them easily. I'm talking about hostnames like: sldna, xwknd, hwrdz or wkrpb. From that perspective, they're not very friendly. Certainly not very Ubuntu.

We're not alone, in that respect. Amazon generates forgettable instance names like i-15a4417c, as do most virtual machine and container systems.


Meanwhile, there is a reasonably well-known concept -- Zooko's Triangle -- which says that names should be:
  • Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable
  • Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
  • Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
And, of course we know what XKCD has to say on a somewhat similar matter :-)

So I proposed a few different ways of automatically generating those names, modeled mostly after Ubuntu's beloved own code naming scheme -- Adjective Animal. To get the number of combinations high enough to model any reasonable MAAS user, though, we used Adjective Noun instead of Adjective Animal.

I collected an Adjective list and a Noun list from a blog run by moms, in the interest of having a nice, soft, friendly, non-offensive source of words.

For the most part, the feature served its purpose. We now get memorable, pronounceable names. However, we get a few odd balls in there from time to time. Most are humorous. But some combinations would prove, in fact, to be inappropriate, or perhaps even offensive to some people.

Accepting that, I started thinking about other solutions.

In the meantime, I realized that Docker had recently launched something similar, their NamesGenerator, which pairs an Adjective with a Famous Scientist's Last Name (except they have explicitly blacklisted boring_wozniak, because "Steve Wozniak is not boring", of course!).


Similarly, Github itself now also "suggests" random repo names.



I liked one part of the Docker approach better -- the use of proper names, rather than random nouns.

On the other hand, their approach is hard-coded into the Docker Golang source itself, and not easily usable or portable elsewhere.

Moreover, there are only a few dozen Adjectives (57) and Names (76), yielding only about 4K combinations (4,332) -- which is not nearly enough for MAAS's purposes, where we're shooting for 16M+ with minimal collisions (i.e., covering a Class A network).

Introducing the PetName Libraries

I decided to scrap the Nouns list, and instead build a Names list. I started with Last Names (like Docker), but instead focused on First Names, and built a list of about 6,000 names from public census data.  I also built a new list of nearly 38,000 Adjectives.

The combination actually works pretty well! While smelly-Susan isn't particularly charming, it's certainly not an ad hominem attack targeted at any particular Susan! That 6,000 x 38,000 gives us well over 228 million unique combinations!

Moreover, I also thought about how I could actually make it infinitely extensible... The simple rules of English allow Adjectives to modify Nouns, while Adverbs can recursively modify other Adverbs or Adjectives.   How convenient!

So I built a word list of Adverbs (13,000) as well, and added support for specifying the "number" of words in a PetName.
  1. If you want 1, you get a random Name 
  2. If you want 2, you get a random Adjective followed by a Name 
  3. If you want 3 or more, you get N-2 Adverbs, an Adjective and a Name 
Oh, and the separator is now optional, and can be any character or string, with a default of a hyphen, "-".

In fact:
  • 2 words will generate over 221 million unique combinations, over 2^27 combinations
  • 3 words will generate over 2.8 trillion unique combinations, over 2^41 combinations (more than 32-bit space)
  • 4 words can generate over 2^55 combinations
  • 5 words can generate over 2^68 combinations (more than 64-bit space)
Interestingly, you need 10 words to cover 128-bit space!  So it's

unstoutly-clashingly-assentingly-overimpressibly-nonpermissibly-unfluently-chimerically-frolicly-irrational-wonda

versus

b9643037-4a79-412c-b7fc-80baa7233a31

Shell

So once the algorithm was spec'd out, I built and packaged a simple shell utility and text word lists, called petname, which are published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install petname
$ petname
itchy-Marvin
$ petname -w 3
listlessly-easygoing-Radia
$ petname -s ":" -w 5
onwardly:unflinchingly:debonairly:vibrant:Chandler

Python

That's only really useful from the command line, though. In MAAS, we'd want this in a native Python library. So it was really easy to create python-petname, source now published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:python-petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install python-petname
$ python-petname
flaky-Megan
$ python-petname -w 4
mercifully-grimly-fruitful-Salma
$ python-petname -s "" -w 2
filthyLaurel

Using it in your own Python code looks as simple as this:

$ python
>>> import petname
>>> foo = petname.Generate(3, "_")
>>> print(foo)
boomingly_tangible_Mikayla

Golang


In the way that NamesGenerator is useful to Docker, I thought a Golang library might be useful for us in LXD (and perhaps even usable by Docker or others too), so I created:
Of course you can use "go get" to fetch the Golang package:

$ export GOPATH=$HOME/go
$ mkdir -p $GOPATH
$ export PATH=$PATH:$GOPATH/bin
$ go get github.com/dustinkirkland/golang-petname

And also, the packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:golang-petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install golang-petname
$ golang-petname
quarrelsome-Cullen
$ golang-petname -words=1
Vivian
$ golang-petname -separator="|" -words=10
snobbily|oracularly|contemptuously|discordantly|lachrymosely|afterwards|coquettishly|politely|elaborate|Samir

Using it in your own Golang code looks as simple as this:

package main

import (
        "flag"
        "fmt"
        "math/rand"
        "time"

        "github.com/dustinkirkland/golang-petname"
)

func main() {
        flag.Parse()
        // Seed the generator so each run produces a different name.
        rand.Seed(time.Now().UnixNano())
        fmt.Println(petname.Generate(2, ""))
}
Gratuitous picture of my pets, 7 years later.
Cheers,
happily-hacking-Dustin

Read more
niemeyer

After the updates to gopkg.in itself, it’s time for gopkg.in/yaml.v2 to receive some attention. The following improvements are now available in the yaml package:

Support for omitempty on struct values

The omitempty attribute can now be used in tags of fields with a struct type. In those cases, the given field and its value only become part of the generated yaml document if one or more of the fields exported by the field type contain non-empty values, according to the usual conventions for omitempty.

For instance, considering these two types:

type TypeA struct {
        Maybe TypeB `yaml:",omitempty"` 
}

type TypeB struct {
        N int
}

the yaml package would only serialize the Maybe mapping into the generated yaml document if its N field was non-zero.

Support for inlined maps

The yaml package was previously able to handle the inlining of structs. For example, in the following snippet TypeB would be handled as if its fields were part of TypeA during serialization or deserialization:

type TypeA struct {
        Field TypeB `yaml:",inline"`
}

This convention for inlining differs from the standard json package, which inlines anonymous fields instead of considering such an attribute. That difference is mainly historical: the base of the yaml package was copied from mgo’s bson package, which had this convention before the standard json package supported any inlining at all.

Now the support for inlining maps, previously available in the bson package, is also being copied over. In practice, it allows unmarshaling a yaml document such as

a: 1
b: 2
c: 3

into a type that looks like

type T struct {
        A int
        Rest map[string]int `yaml:",inline"`
}

and obtaining in the resulting Rest field the value map[string]int{"b": 2, "c": 3}, while field A is set to 1 as usual. Serializing out that resulting value would reverse the process, and generate the original document including the extra fields.

That’s a convenient way to read documents with a partially known structure and manipulating them in a non-destructive way.

Bug fixes

A few problems were also fixed in this release. Most notably:

  • A spurious error was reported when custom unmarshalers handled errors internally by retrying. Reported a few times and fixed by Brian Bland.

  • An empty yaml list ([]) is now decoded into a struct field as an empty slice instead of a nil one. Reported by Christian Neumann.

  • Unmarshaling into a struct with a slice field would append to it instead of overwriting. Reported by Dan Kinder.

  • Do not use TextMarshaler interface on types that implement yaml.Getter. Reported by Sam Ghods.

Read more
niemeyer

This post provides the background for a deliberate and important decision in the design of gopkg.in that people often wonder about: while the service does support full versions in tag and branch names (as in “v1.2” or “v1.2.3”), the URL must contain only the major version (as in “gopkg.in/mgo.v2”) which gets mapped to the best matching version in the repository.

As will be detailed, there are multiple reasons for that behavior. The critical one is ensuring all packages in a build tree that depend on the same API of a given dependency (different majors means different APIs) may use the exact same version of that dependency. Without that, an application might easily get multiple copies unnecessarily and perhaps incorrectly.

Consider this example:

  • Application A depends on packages B and C
  • Package B depends on D 3.0.1
  • Package C depends on D 3.0.2

Under that scenario, when someone executes go get on application A, two independent copies of D would be embedded in the binary. This happens because both B and C have exact control of the version in use. When everybody can pick their own preferred version, it’s easy to end up with multiple of these.

The current gopkg.in implementation solves that problem by requiring that both B and C necessarily depend on the major version which defines the API version they were coded against. So the scenario becomes:

  • Application A depends on packages B and C
  • Package B depends on D 3.*
  • Package C depends on D 3.*

With that approach, when someone runs go get to import the application it would get the newest version of D that is still compatible with both B and C (might be 3.0.3, 3.1, etc), and both would use that one version. While by default this would just pick up the most recent version, the package might also be moved back to 3.0.2 or 3.0.1 without touching the code. So the approach in fact empowers the person assembling the binary to experiment with specific versions, and gives package authors a framework where the default behavior tends to remain sane.

This is the most important reason why gopkg.in works like this, but there are others. For example, to encode the micro version of a dependency on a package, the import paths of dependent code must be patched on every single minor release of the package (internal and external to the package itself), and the code must be repositioned in the local system to please the go tool. This is rather inconvenient in practice.

It’s worth noting that the issues above describe the problem in terms of minor and patch versions, but the problem exists and is intensified when using individual source code revision numbers to refer to import paths, as it would be equivalent in this context to having a minor version on every single commit.

Finally, when you do want exact control over what builds, godep may be used as a complement to gopkg.in. That partnership offers exact reproducibility via godep, and gives people stable APIs they can rely upon over longer periods with gopkg.in. Good match.

Read more
niemeyer

Improvements on gopkg.in

Early last year the gopkg.in service was introduced with the goal of encouraging Go developers to establish strategies that enable existent software to remain working while package APIs evolve. After the initial discussions and experimentation that went into defining the (simple) design and feature set of the service, it’s great to see that the approach is proving reasonable in practice, with steady growth in usage. Meanwhile, the service has been up and unchanged for several months while we learned more about which areas needed improvement.

Now it’s time to release some of these improvements:

Source code links

Thanks to Gary Burd, godoc.org was improved to support custom source code links, which means all packages in gopkg.in can now properly reference, for any given package version, the exact location of functions, methods, structs, etc. For example, the function name in the documentation at gopkg.in/mgo.v2#Dial is clickable, and redirects to the correct source code line in GitHub.

Unstable releases

As detailed in the gopkg.in documentation, a major version must not have any breaking changes done to it so that dependent packages can rely on the exposed API once it goes live. Often, though, there’s a need to experiment with the upcoming changes before they are ready for prime time, and while small packages can afford to have that testing done locally, it’s usual for non-trivial software to have external validation with experienced developers before the release goes fully public.

To support that scenario properly, gopkg.in now allows the version string in exposed branch and tag names to be suffixed with “-unstable”, as in “v2-unstable”.

Such unstable versions are hidden from the version list in the package page, except for the specific version being looked at, and their use in released software is also explicitly discouraged via a warning message.

For the package to work properly during development, any imports (both internal and external to the package) must be modified to import the unstable version. While doing that by hand is easy, thanks to Roger Peppe’s govers there’s a very convenient way to do that.

For example, to use mgo.v2-unstable, run:

govers gopkg.in/mgo.v2-unstable

and to go back:

govers gopkg.in/mgo.v2

Repositories with no master branch

Some people have opted to omit the traditional “master” branch altogether and have only versioned branches and tags. Unfortunately, gopkg.in did not accept such repositories as valid. This was fixed.

These changes are all live right now at gopkg.in.

Read more
niemeyer

mgo r2014.10.12

A new release of the mgo MongoDB driver for Go is out, packed with contributions and features. But before jumping into the change list, there’s a note in the release of MongoDB 2.7.7 a few days ago that is worth celebrating:

New Tools!
– The MongoDB tools have been completely re-written in Go
– Moved to a new repository: https://github.com/mongodb/mongo-tools
– Have their own JIRA project: https://jira.mongodb.org/browse/TOOLS

So far this is part of an unstable release of the MongoDB server, but it implies that, if the experiment works out, every MongoDB server release will be carrying client tools developed in Go and leveraging the mgo driver. This extends the collaboration with MongoDB Inc. (mgo is already in use in the MMS product), and some of the features in release r2014.10.12 were made to support that work.

The specific changes available in this release are presented below. These changes do not introduce compatibility issues, and most of them are new features.

Fix in txn package

The bug would be visible as an invariant being broken, and the transaction application logic would panic until the txn metadata was cleaned up. The bug does not cause any data loss nor incorrect transactions to be silently applied. More stress tests were added to prevent that kind of issue in the future.

Debug information contributed by the juju team at Canonical.

MONGODB-X509 auth support

The MONGODB-X509 authentication mechanism, which allows authentication via SSL client certificates, is now supported.

Feature contributed by Gabriel Russel.

SCRAM-SHA-1 auth support

The MongoDB server is changing the default authentication protocol to SCRAM-SHA-1. This release of mgo defaults to authenticating over SCRAM-SHA-1 if the server supports it (2.7.7 and later).

Feature requested by Cailin Nelson.

GSSAPI auth on Windows too

The driver can now authenticate with the GSSAPI (Kerberos) mechanism on Windows using the standard operating system support (SSPI). The GSSAPI support on Linux remains via the cyrus-sasl library.

Feature contributed by Valeri Karpov.

Struct document ids on txn package

The txn package can now handle documents that use struct value keys.

Feature contributed by Jesse Meek.

Improved text index support

The EnsureIndex family of functions may now conveniently define text indexes via the usual shorthand syntax ("$text:field"), and Sort can use equivalent syntax ("$textScore:field") to inject the text indexing score.

Feature contributed by Las Zenow.
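A hedged sketch of how the shorthand might be used, assuming an *mgo.Collection named c, a "body" field, and the gopkg.in/mgo.v2/bson import (none of these names come from the release notes):

err := c.EnsureIndexKey("$text:body")
if err != nil {
        return err
}
var results []bson.M
err = c.Find(bson.M{"$text": bson.M{"$search": "mongodb driver"}}).
        Select(bson.M{"score": bson.M{"$meta": "textScore"}}).
        Sort("$textScore:score").
        All(&results)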

Support for BSON’s deprecated DBPointer

Although the BSON specification defines DBPointer as deprecated, some ancient applications still depend on it. To enable the migration of these applications to Go, the type is now supported.

Feature contributed by Mike O’Brien.

Generic Getter/Setter document types

The Getter/Setter interfaces are now respected when unmarshaling documents into any type. Previously they would only be respected on maps and structs.

Feature requested by Thomas Bouldin.

Improvements on aggregation pipelines

The Pipe.Iter method will now return aggregation results using cursors when possible (MongoDB 2.6+), and there are also new methods to tweak the aggregation behavior: Pipe.AllowDiskUse, Pipe.Batch, and Pipe.Explain.

Features requested by Roman Konz.
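As a rough illustration, assuming an *mgo.Collection named c and the field names below (not taken from the release notes), the new methods chain onto the pipeline before iterating:

pipe := c.Pipe([]bson.M{
        {"$match": bson.M{"status": "active"}},
        {"$group": bson.M{"_id": "$owner", "total": bson.M{"$sum": 1}}},
})
iter := pipe.AllowDiskUse().Batch(100).Iter()
var result bson.M
for iter.Next(&result) {
        // Each result is one document produced by the aggregation pipeline.
}
if err := iter.Close(); err != nil {
        return err
}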

Decoding into custom bson.D types

Unmarshaling will now work for types that are slices of bson.DocElem in an equivalent way to bson.D.

Feature requested by Daniel Gottlieb.
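For instance, a custom slice of bson.DocElem can now receive a decoded document while preserving key order, much like bson.D itself. A small sketch, with the collection c and the id value assumed:

type orderedDoc []bson.DocElem

var doc orderedDoc
if err := c.FindId(id).One(&doc); err != nil {
        return err
}
for _, elem := range doc {
        fmt.Println(elem.Name, elem.Value)
}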

Indexes and CollectionNames via commands

The Indexes and CollectionNames methods will both attempt to use the new command-based protocol, and fall back to the old method if that doesn’t work.

GridFS default chunk size

The default GridFS chunk size changed from 256k to 255k, to ensure that the total document size won’t go over 256k once the additional metadata is included. Going over 256k would force the reservation of a 512k block when using the power-of-two allocation strategy.

Performance of bson.Raw decoding

Unmarshaling data into a bson.Raw will now bypass the decoding process and record the provided data directly into the bson.Raw value. This significantly improves the performance of dumping raw data during iteration.

Benchmarks contributed by Kyle Erf.
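A minimal sketch of the pattern this speeds up, assuming an *mgo.Collection named c:

iter := c.Find(nil).Iter()
var raw bson.Raw
for iter.Next(&raw) {
        // raw.Kind and raw.Data hold the document bytes without a full decode.
}
if err := iter.Close(); err != nil {
        return err
}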

Performance of seeking to end of GridFile

Seeking to the end of a GridFile no longer reads any data. This enables a client to find the size of the file using only the io.ReadSeeker interface with low overhead.

Improvement contributed by Roger Peppe.
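A short sketch, assuming a *mgo.Database named db and a file that exists in GridFS:

file, err := db.GridFS("fs").Open("backup.tar")
if err != nil {
        return err
}
defer file.Close()
// Seeking to the end reports the file size without reading any chunk data.
size, err := file.Seek(0, os.SEEK_END)
if err != nil {
        return err
}
fmt.Println("size in bytes:", size)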

Added Query.SetMaxScan method

The SetMaxScan method constrains the server to only scan the specified number of documents when fulfilling the query.

Improvement contributed by Abhishek Kona.
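For example, with a collection c and an illustrative query (both assumed):

// Ask the server to give up after scanning at most 1000 documents.
var results []bson.M
if err := c.Find(bson.M{"status": "active"}).SetMaxScan(1000).All(&results); err != nil {
        return err
}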

Added GridFile.SetUploadDate method

The SetUploadDate method allows changing the upload date at file writing time.

Read more
niemeyer

The qml package is right now one of the best choices for creating graphic applications under the Go language. Part of the reason why this is true comes from the convenience of QML, a high-level domain-specific language that allows describing visual components, events, animations, and content in general in a succinct and pleasing way. The integration of such a language with Go means having both a good mechanism for describing visual content, and a good platform for general development, which can range from simple data manipulation to involved OpenGL content rendering.

On the practical side, one of the implications of using such a language partnership is that every Go qml application will have some sort of resource content to deal with, carrying the QML logic. Such content may be loaded either from files on disk, or from strings in memory. Loading from a file means the content may be organized in multiple files that directly reference each other without changing the Go application, and may be updated and tested without rebuilding. Loading from a string in memory means the content needs to be self-contained, but results in a standalone binary (linking aside – still depends on Qt libraries).

There’s a well-known trick to get both benefits at once, though, and the basic solution has already been made available in multiple packages: keep the content on disk, and use an external tool to pack it up into a Go file that is built into the binary whenever the content is updated. Unfortunately, this trick alone is not enough with the qml package, because the QML engine needs to know what resources are available and where, so that the right thing happens when it sees a directory being imported or an image path being referenced.

To solve that problem, the qml package has been enhanced with functionality that leverages the existing Qt resource system to pack content into the binary itself. Rather than using the upstream C++ and XML-based resource compiler, though, a new resource packer was implemented inside the qml package and made available both under a friendly Go API, and as a tool that follows common Go idioms.

The help text for the genqrc tool describes it in detail:

Usage: genqrc [options] <subdir1> [<subdir2> ...]

The genqrc tool packs all resource files under the provided subdirectories into
a single qrc.go file that may be built into the generated binary. Bundled files
may then be loaded by Go or QML code under the URL "qrc:///some/path", where
"some/path" matches the original path for the resource file locally.

Starting with Go 1.4, this tool may be conveniently run by the "go generate"
subcommand by adding a line similar to the following one to any existent .go
file in the project (assuming the subdirectories ./code/ and ./images/ exist):

    //go:generate genqrc code images

Then, just run "go generate" to update the qrc.go file.

During development, the generated qrc.go can repack the filesystem content at
runtime to avoid the process of regenerating the qrc.go file and rebuilding the
application to test every minor change made. Runtime repacking is enabled by
setting the QRC_REPACK environment variable to 1:

    export QRC_REPACK=1

This does not update the static content in the qrc.go file, though, so after
the changes are performed, genqrc must be run again to update the content that
will ship with built binaries.

The tool may be installed via go get as usual:

go get gopkg.in/qml.v1/cmd/genqrc

and once the qrc.go file has been generated, the main qml file may be
loaded with logic equivalent to:

component, err := engine.LoadFile("qrc:///path/to/file.qml")

The loaded file can in turn reference any other content that was bundled
into the Go binary.

For a better picture, this example demonstrates the use of the tool.

Read more
niemeyer

There were a few long-standing issues in the yaml.v1 package whose fixes were being postponed so that they could be done at once in a single incompatible change, and the time has come: yaml.v2 is now available.

Besides these incompatible changes, other compatible fixes and improvements were performed in that push, and those were also applied to the existing yaml.v1 package so that dependent applications benefit immediately and without modifications.

The subtopics below outline exactly what changed, and how to adapt existing code when necessary.

Type errors

With yaml.v1, decoding a YAML value that is not compatible with the Go value being unmarshaled into silently drops the offending value. In many cases continuing with degraded behavior by ignoring the problem is what is wanted, but in yaml.v1 this was the only option.

In yaml.v2, these problems will cause a *yaml.TypeError to be returned, containing helpful information about what happened. For example:

yaml: unmarshal errors:
  line 3: cannot unmarshal !!str `foo` into int

On such errors the decoding process still continues until the end of the YAML document, so ignoring the TypeError will produce logic equivalent to the old yaml.v1 behavior.
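A short sketch of how such an error might be inspected (the struct and the YAML input are made up for illustration):

var conf struct {
        Port int
}
err := yaml.Unmarshal([]byte("port: not-a-number"), &conf)
if typeErr, ok := err.(*yaml.TypeError); ok {
        // typeErr.Errors lists each offending value; fields that decoded
        // cleanly are still set, mirroring the old yaml.v1 behavior.
        fmt.Println(typeErr.Errors)
}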

New Marshaler and Unmarshaler interfaces

The way that yaml.v1 allowed custom types to implement marshaling and unmarshaling of YAML content was slightly confusing and troublesome. For example, considering a CustomType with a keys field:

type CustomType struct {
        keys map[string]int
}

and supposing the goal is to unmarshal this YAML map into it:

custom:
    a: 1
    b: 2
    c: 3

With yaml.v1, one would need to implement logic similar to the following for that:

func (v *CustomType) SetYAML(tag string, value interface{}) bool {
        if tag == "!!map" {
                m := value.(map[interface{}]interface{})
                // ... iterate/validate/convert key/value pairs 
        }
        return goodValue
}

This is too much trouble when the package can easily do those conversions internally already. To fix that, in yaml.v2 the Getter and Setter interfaces are both gone and were replaced by the Marshaler and Unmarshaler interfaces.

Using the new mechanism, the example above would be implemented as follows:

func (v *CustomType) UnmarshalYAML(unmarshal func(interface{}) error) error {
        return unmarshal(&v.keys)
}
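A marshaling counterpart can be provided through the new Marshaler interface. A minimal sketch for the same type might look like:

func (v CustomType) MarshalYAML() (interface{}, error) {
        // Returning any value the package already knows how to encode is enough.
        return v.keys, nil
}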

Custom-ordered maps

By default both yaml.v1 and yaml.v2 will marshal keys in a stable order which is increasing within the same type and arbitrarily defined across types. So marshaling is already performed in a sensible order, but it cannot be customized in yaml.v1, and there’s also no way to tell which order the map was originally in, as some applications require.

To fix that, there is a new pair of types that support preserving the order of map keys both when marshaling and unmarshaling: MapSlice and MapItem.

Such an ordered map literal would look like:

m := yaml.MapSlice{{"c", 3}, {"b", 2}, {"a", 1}}

The MapSlice type may be used for variables going in and out of the yaml package, or in struct fields, map values, or anywhere else sensible.
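Marshaling such a value preserves the insertion order. A quick sketch:

data, err := yaml.Marshal(m)
if err != nil {
        return err
}
fmt.Print(string(data))
// Output:
// c: 3
// b: 2
// a: 1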

Binary values

Strings in YAML must be valid UTF-8 or UTF-16 (with a byte order mark for the latter), and for binary data the specification defines a standard !!binary tag which represents the raw data encoded as base64. This is now supported transparently in both yaml.v1 and yaml.v2. That is, any string value that is not valid UTF-8 will be base64-encoded and appropriately tagged so that it round-trips as the same string. Short strings are inlined, while long ones are automatically broken into several lines and represented in an appropriate style. For example:

one: !!binary gICA
two: !!binary |
  gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgI
  CAgICAgICAgICAgICAgICAgICAgICAgICAgICA

Multi-line strings

Any string that contains new-line characters (‘\n’) will now be encoded using the literal style. For example, a value that would before be encoded as:

key: "line 1\nline 2\nline 3\n"

is now encoded by both yaml.v1 and yaml.v2 as:

key: |
  line 1
  line 2
  line 3

Other improvements

Besides these major changes, some assorted minor improvements were also performed:

  • Better handling of top-level “null”s (issue #35)
  • Marshal base 60 floats quoted for YAML 1.1 compatibility (issue #34)
  • Better error on invalid map keys (issue #25)
  • Allow non-ASCII characters in plain strings (issue #11)
  • Do not catch unrelated panics by mistake (commit a6dc653f)

For obtaining the yaml.v1 improvements:

go get -u gopkg.in/yaml.v1

For updating to yaml.v2, adapt the code as necessary given the points above, replace the import path, and run:

go get -u gopkg.in/yaml.v2

Read more
niemeyer

As detailed in the preliminary release of qml.v1 for Go a couple of weeks ago, my next task was to finish the improvements in its OpenGL API. Good progress has happened since then, and the new API is mostly done and available for experimentation. At the same time, there’s still work to do on polishing edges and on documenting the extensive API. This blog post aims to present the improvements made, their internal design, and also to invite help for finishing the pending details.

Before diving into the new, let’s first have a quick look at how a Go application using OpenGL might look with qml.v0. This is an excerpt from the old painting example:

import (
        "gopkg.in/qml.v0"
        "gopkg.in/qml.v0/gl"
)

func (r *GoRect) Paint(p *qml.Painter) {
        width := gl.Float(r.Int("width"))
        height := gl.Float(r.Int("height"))
        gl.Enable(gl.BLEND)
        // ...
}

The example imports both the qml and the gl packages, and then defines a Paint method that makes use of the GL types, functions, and constants from the gl package. It looks quite reasonable, but there are a few relevant shortcomings.

One major issue in the current API is that it offers no means to tell even at a basic level what version of the OpenGL API is being coded against, because the available functions and constants are the complete set extracted from the gl.h header. For example, OpenGL 2.0 has GL_ALPHA and GL_ALPHA4/8/12/16 constants, but OpenGL ES 2.0 has only GL_ALPHA. This simplistic choice was a good start, but comes with a number of undesired side effects:

  • Many trivial errors that should be compile errors fail at runtime instead
  • When the code does work, the developer is not sure about which API version it is targeting
  • Symbols for unsupported API versions may not be available for linking, even if unused

That last point also provides a hint of another general issue: portability. Every system has particularities for how to load the proper OpenGL API entry points. For example, which libraries should be linked with, where they are in the local system, which entry points they support, etc.

So this is the stage for the improvements that are happening. Before detailing the solution, let’s have a look at the new painting example in qml.v1, that makes use of the improved API:

import (
        "gopkg.in/qml.v1"
        "gopkg.in/qml.v1/gl/2.0"
)

func (r *GoRect) Paint(p *qml.Painter) {
        gl := GL.API(p)
        width := float32(r.Int("width"))
        height := float32(r.Int("height"))
        gl.Enable(GL.BLEND)
        // ...
}

With the new API, rather than importing a generic gl package, a version-specific gl/2.0 package is imported under the name GL. That choice of package name allows preserving familiar OpenGL terms for both the functions and the constants (gl.Enable and GL.BLEND, for example). Inside the new Paint method, the gl value obtained from GL.API holds only the functions that are defined for the specific OpenGL API version imported, and the constants in the GL package are also constrained to those available in the given version. Any improper references become build time errors.

To support all the various OpenGL versions and profiles, there are 23 independent packages right now. These packages are of course not being hand-built. Instead, they are generated all at once by a tool that gathers information from various sources. The process can be tersely described as:

  1. A ragel-based parser processes Qt’s qopenglfunctions_*.h header files to collect version-specific functions
  2. The Khronos OpenGL Registry XML is parsed to collect version-specific constants
  3. A number of tweaks defined in the tool’s code are applied to the state
  4. Packages are generated by feeding the state to text templates

Version-specific functions might also be extracted from the Khronos Registry, but there’s a good reason to use information from the Qt headers instead: Qt already solved the portability issue. It works on several platforms, and if somebody is using QML successfully, it means Qt is already using that system’s OpenGL capabilities. So rather than designing a new mechanism to solve the same problem, the qml package now leverages Qt to resolve all the GL function entry points and to link against the available libraries.

Going back to the example, it also demonstrates another improvement that comes with the new API: plain types that do not carry further meaning such as gl.Float and gl.Int were replaced by their native counterparts, float32 and int32. Richer types such as Enum were preserved, and as suggested by David Crawshaw some new types were also introduced to represent entities such as programs, shaders, and buffers. The custom types are all available under the common gl/glbase package that all version-specific packages make use of.

So this is all working and available for experimentation right now. What is left to do is almost exclusively improving the list of function tweaks with two goals in mind, which will be highlighted below as those are areas where help would be appreciated, mainly due to the footprint of the API.

Documentation importing

There are a few hundred functions to document, but a large number of these are variations of the same function. The previous approach was to simply link to the upstream documentation, but it would be much better to have polished documentation attached to the functions themselves. This is the new documentation for MultMatrixd, for example. For now the documentation is being imported manually, but the final process will likely consist of some automation and some manual polishing.

Function polishing

The standard C OpenGL API can often be translated automatically (see BindBuffer or BlendColor), but in other cases the function prototype has to be tweaked to become friendly to Go. The translation tool already has good support for defining most of these tweaks independently from the rest of the machinery. For example, the following logic changes the ShaderSource function from its standard form into something convenient in Go:

name: "ShaderSource",
params: paramTweaks{
        "glstring": {rename: "source", retype: "...string"},
        "length":   {remove: true},
        "count":    {remove: true},
},
before: `
        count := len(source)
        length := make([]int32, count)
        glstring := make([]unsafe.Pointer, count)
        for i, src := range source {
                length[i] = int32(len(src))
                if len(src) > 0 {
                        glstring[i] = *(*unsafe.Pointer)(unsafe.Pointer(&src))
                } else {
                        glstring[i] = unsafe.Pointer(uintptr(0))
                }
        }
`,

Other cases may be much simpler. The MultMatrixd tweak, for instance, simply ensures that the parameter has the proper length, and injects the documentation:

name: "MultMatrixd",
before: `
        if len(m) != 16 {
                panic("parameter m must have length 16 for the 4x4 matrix")
        }
`,
doc: `
        multiplies the current matrix with the provided matrix.
        ...
`,

and as an even simpler example, CreateProgram is tweaked so that it returns a glbase.Program instead of the default uint32.

name:   "CreateProgram",
result: "glbase.Program",

That kind of polishing is where contributions would be most appreciated right now. One good way to help is to pick a range of functions, import and polish their documentation manually, and while doing that keep an eye out for tweaks that should be applied to each function based on its documentation and prototype.

If you’d like to help somehow, or just ask questions or report your experience with the new API, please join us in the project mailing list.

Read more
niemeyer

Yesterday at GopherCon I had the chance to sit together with Dave Cheney and Jamu Kakar to judge the entries received for the Go QML Contest. The result was already announced today, live at the second day of GopherCon, including a short demo on stage of the three most relevant entries received. This blog post provides more details for the winner and also for a few of these additional entries.

Winner

The winner of the contest is Robert Nieren with the entry gofusion, a tight and polished clone of the game 2048:

gofusion

The game code is worth looking into. It’s clean, has all the logic written in Go while making good use of the declarative features of QML, and leverages the OpenGL support of the qml package for drawing the tiles. It’s also fun to play!

Runners-up

Two other contest entries were demonstrated during GopherCon. The first one was Mark Saward’s multi-player game King’s Epic:

King's Epic

King's Epic

Deciding between Mark’s entry and Robert’s was not easy. King’s Epic is a much more ambitious project, and Mark did an impressive job on short notice. It’s also intended to be a real multi-platform game, which is something we really hope to see happening. For the contest, though, the fact that there were a few issues while running the code and that the interesting interactions are still in development weighed against it.

The source code for the King’s Epic client is available at GitHub, and there’s an introduction that explains how to get started on it.

The second runner up demonstrated during the conference was Maxim Kouprianov’s teg-workshop:

maxim-kouprianov

Maxim’s entry is a Timed Event Graphs editor, and is being developed as part of his bachelor’s thesis. Although the topic is unknown to us, the interactions with the tool looked very attractive, as demonstrated in the video provided with the entry. What weighed against it was mainly that the rendering of these interactions was implemented in JavaScript, while it might have been easily implemented in Go instead.

Honorable mention

Although it was not demonstrated during GopherCon, Zev Goldstein’s submission not only deserves a mention, but we also hope to see it improved and made into a complete application. Zev submitted a very early version of Fallback Messenger:

zev-goldstein

The goal of Fallback Messenger is to be a multi-protocol messenger for mobile phones, and at the moment it is able to communicate with Google Talk over XMPP. It’s a promising idea and a great start, but still in very early stages as Zev pointed out during the submission.

Closing

It was a pleasure to interact with some of the participants over these two months in which the contest was run. It was also inspiring to see both the level of the entries, and how most of the entries were or became long term projects.

In terms of Go QML, these entries provide a clear message: although Go was designed with server-side systems in mind, it is by no means limited to that.

Thanks to everyone that submitted entries for the contest, and to Dave Cheney and Jamu Kakar for their time as judges during the tight schedule of GopherCon.

Read more
niemeyer

As part of the ongoing work on Ubuntu Touch phones, I was invited to contribute a Go package to interface with ubuntuoneauth, a C++ and Qt library that authenticates against Ubuntu One using the system account made available by the phone owner. The details of that library and its use case are not interesting for most people right now, but the work to interface with it is a good example to talk about because, besides the result (uoneauth) being an external and independent Go package that extends the qml package, ubuntuoneauth is not a QML library, but rather a plain Qt library. Some of the callbacks even use plain C++ types that do not inherit from QObject and have no Qt metadata, so offering that functionality from Go nicely takes a bit more work.

What follows are some of the highlights of that integration logic, to serve as a reference for similar extensions in the future. Note that if your interest is in creating QML applications with Go, none of this is necessary and the documentation is a better place to start.

As an initial step, the following examples demonstrate how the original C++ library and the Go package being designed are meant to be used. The first snippet contains the relevant logic taken out of the examples/signing-main.cpp file, tweaked for clarity:

int main(int argc, char *argv[]) {
    (...)
    UbuntuOne::SigningExample *example = new UbuntuOne::SigningExample(&a);
    QTimer::singleShot(0, example, SLOT(doExample()));
    (...)
}

SigningExample::SigningExample(QObject *parent) : QObject(parent) {
    QObject::connect(&service, SIGNAL(credentialsFound(const Token&)),
                     this, SLOT(handleCredentialsFound(Token)));
    QObject::connect(&service, SIGNAL(credentialsNotFound()),
                     this, SLOT(handleCredentialsNotFound()));
    (...)
}

void SigningExample::doExample() {
    service.getCredentials();
}

void SigningExample::handleCredentialsFound(Token token) {
    QString authHeader = token.signUrl(url, QStringLiteral("GET"));
    (...)
}
 
void SigningExample::handleCredentialsNotFound() {
    qDebug() << "No credentials were found.";
}

The example hooks into various signals in the service, one for each possible outcome, and then calls the service’s getCredentials method to initiate the process. If successful, the credentialsFound signal is emitted with a Token value that is able to sign URLs, returning an HTTP header that can authenticate a request.

That same process is more straightforward when using the library from Go:

service := uoneauth.NewService(engine)
token, err := service.Token()
if err != nil {
        return err
}
signature := token.HeaderSignature("GET", url)

Again, this gets a service, a token from it, and signs a URL, in a “forward” way.

So the goal is turning the initial C++ workflow into this simpler Go API. A good next step is looking into how the NewService function is implemented:

func NewService(engine *qml.Engine) *Service {
        s := &Service{reply: make(chan reply, 1)}

        qml.RunMain(func() {
                s.obj = *qml.CommonOf(C.newSSOService(), engine)
        })

        runtime.SetFinalizer(s, (*Service).finalize)

        s.obj.On("credentialsFound", s.credentialsFound)
        s.obj.On("credentialsNotFound", s.credentialsNotFound)
        s.obj.On("twoFactorAuthRequired", s.twoFactorAuthRequired)
        s.obj.On("requestFailed", s.requestFailed)
        return s
}

NewService creates the service instance, and then asks the qml package to run some logic in the main Qt thread via RunMain. This is necessary because a number of operations in Qt, including the creation of objects, are associated with the currently running thread. Using RunMain in this case ensures that the creation of the C++ object performed by newSSOService happens in the main Qt thread (the “GUI thread”).

Then, the address of the C++ UbuntuOne::SSOService type is handed to CommonOf to obtain a Common value that implements all the common logic supported by C++ types that inherit from QObject. This is an unsafe operation as there’s no way for CommonOf to guarantee that the provided address indeed points to a C++ value with a type that inherits from QObject, so the call site must necessarily import the unsafe package to provide the unsafe.Pointer parameter. That’s not a problem in this context, though, since such extension packages are necessarily dealing with unsafe logic in either case.

The obtained Common value is then assigned to the service’s obj field. In most cases, that value is instead assigned to an anonymous Common field, as done in qml.Window for example. Doing so means qml.Window values implement the qml.Object interface, and may be manipulated as a generic object. For the new Service type, though, the fact that this is a generic object won’t be disclosed for the moment, and instead a simpler API will be offered.

Following the function logic further, a finalizer is then registered to ensure the C++ value gets deallocated if the developer forgets to Close the service explicitly. When doing that, it’s important to ensure the Close method drops the finalizer when called, not only to facilitate the garbage collection of the object, but also to avoid deallocating the same value twice.
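As a hedged sketch of that pattern, a Close method might look like the following (the deleteSSOService cgo helper is hypothetical and not part of the actual package):

func (s *Service) Close() {
        // Drop the finalizer first so the C++ value cannot be deallocated twice.
        runtime.SetFinalizer(s, nil)
        qml.RunMain(func() {
                // Hypothetical cgo helper deleting the underlying
                // UbuntuOne::SSOService value in the main Qt thread.
                C.deleteSSOService(unsafe.Pointer(s.obj.Addr()))
        })
}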

The next four lines in the function should be straightforward: they register methods of the service to be called when the relevant signals are emitted. Here is the implementation of two of these methods:

func (s *Service) credentialsFound(token *Token) {
        s.sendReply(reply{token: token})
}

func (s *Service) credentialsNotFound() {
        s.sendReply(reply{err: ErrNoCreds})
}

func (s *Service) sendReply(r reply) {
        select {
        case s.reply <- r:
        default:
                panic("internal error: multiple results received")
        }
}

Handling the signals consists of just sending the reply over the channel to whoever initiated the request. The select statement in sendReply just ensures that the invariant of having a reply per request is not broken without being noticed, as that would require a slightly different design.

There’s one more point worth observing in this logic: the token value received as a parameter in credentialsFound was already converted into the local Token type. In most cases, this is unnecessary as the parameter is directly useful as a qml.Object or as another native type (int, etc), but in this case UbuntuOne::Token is a plain C++ type that does not inherit from QObject, so the default signal parameter that would arrive in the Go method has only a type name and the value address.

Instead of taking the plain value, it is turned into a more useful one by registering a converter with the qml package:

func convertToken(engine *qml.Engine, obj qml.Object) interface{} {
        // Copy as the one held by obj is passed by reference.
        addr := unsafe.Pointer(obj.Property("plainAddr").(uintptr))
        token := &Token{C.tokenCopy(addr)}
        runtime.SetFinalizer(token, (*Token).finalize)
        return token
}

func init() {
        qml.RegisterConverter("Token", convertToken)
}

Given that setup, the Service.Token method may simply call the getCredentials method from the underlying UbuntuOne::SSOService method to fire a request, and block waiting for a reply:

func (s *Service) Token() (*Token, error) {
        s.mu.Lock()
        qml.RunMain(func() {
                C.ssoServiceGetCredentials(unsafe.Pointer(s.obj.Addr()))
        })
        reply := <-s.reply
        s.mu.Unlock()
        return reply.token, reply.err
}

The lock ensures that a second request won’t take place before the first one is done, forcing the correct sequencing of replies. Given the current logic, this isn’t strictly necessary since all requests are equivalent, but this will remain correct even if other methods from SSOService are added to this interface.

The returned Token value may then be used to sign URLs by simply calling the respective underlying method:

func (t *Token) HeaderSignature(method, url string) string {
        cmethod, curl := C.CString(method), C.CString(url)
        cheader := C.tokenSignURL(t.addr, cmethod, curl, 0)
        defer freeCStrings(cmethod, curl, cheader)
        return C.GoString(cheader)
}

No care about using qml.RunMain has to be taken in this case, because UbuntuOne::Token is a simple C++ type that does not interact with the Qt machinery.

This completes the journey of creating a package that provides access to the ubuntuoneauth library from Go code. In many cases it’s a better idea to simply rewrite the logic in Go, but there will be situations similar to this library, where either rewriting would take more time than reasonable, or perhaps delegating the maintenance and stabilization of the underlying logic to a different team is the best thing to do. In those cases, an approach such as the one exposed here can easily solve the problem.

Read more
niemeyer

Go QML Contest

UPDATE: The contest result is out.

A couple of weeks ago a probe message was sent to a few places asking whether there would be enough interest in a development contest involving Go QML applications. Since the result was quite positive, we’re moving the idea forward!

OpenGL Gopher

This blog post provides further information on how to participate. If you have any other questions not covered here, or want technical help with your application, please get in touch via the mailing list or twitter.

Eligible applications

Participating applications must be developed in Go with a graphical interface that leverages the qml package, and must be made publicly available under an Open Source license approved by the OSI.

The application may be developed on Linux, Mac OS, or Windows.

Review criteria

Applications will be judged under three lenses:

  • Quality
  • Features
  • Innovation

We realize the time is short, so please submit even if you haven’t managed to get everything you wanted in place.

Deadline

All submissions should be made before daylight on Monday, April 21st.

Sending the application

Applications must be submitted via email including:

  • URL for the source code
  • Build instructions

If necessary for judging, a screencast may be explicitly requested, and must be provided before daylight on Wednesday, April 23rd.

Prizes

There are two prizes for the winner:

Judging and announcement

Judging will be done by well known developers from the Go community, and will take place during GopherCon. The result will be announced during the conference as well, but the winner doesn’t have to be in the conference to participate in the contest.

Learning more about Go QML

The best places to start are the package documentation and the mailing list.

Read more