Canonical Voices

Michael

I’ve spent a few evenings this week implementing a DerbyJS version of the Todo spec for the TodoMVC project [1] – it was a great way to learn more about the end-to-end framework and to appreciate how neat the model-view bindings really are. Here’s a 2 minute demo showing the normal TodoMVC functionality as well as the collaborative editing which Derby brings out of the box:

 

It’s amazing how simple Derby’s model-view bindings allow the code to be – the functionality really is contained in just two files.

The other files are just setup (define which queries are allowed, define the express server, and some custom style on top of the base.css from TodoMVC). Well done Nate and Brian, and the DerbyJS community (which seems to have grown quite a bit over the last few weeks)!

[1] I’ve still got a few things todo before I can submit a pull-request to get this added to TodoMVC.


Filed under: javascript

Read more
Michael

Over the last week or so I’ve spent a few hours learning a bit about DerbyJS – an all-in-one app framework for developing collaborative apps for the web [1]. You can read more about DerbyJS itself at derbyjs.com, but here are six highlights that I’m excited about (text version below the video):

 

1. The browser reflects dev changes immediately. While developing, DerbyJS automatically reflects any changes you make to styles, templates (and scripts?) as soon as you save. No need to switch windows and refresh; instead the changes are pushed out to your browser(s).

2. Separation of templates (views) and controllers. DerbyJS provides the model-view-controller separation that we’ve come to expect, with Handlebars-like templates and trivial binding to any event handlers defined in your controller. Derby also provides standard conventions for file locations and bundles your files for you.

3. Model data is bound to the view – DerbyJS automatically updates any other parts of your templates that refer to data which the user changes, but that’s not all…

4. Model data is synced in real time (as you type/edit) – updating the data you are changing in all browsers viewing the same page. The data just synchronises (and resolves conflicts) without me caring how. (OK, well I really do care how, but I don’t *need* to care – see the sketch after this list.)

5. The same code runs on both the (node) server and the browser. The code that renders a page after a server request (great for initial page load and search indexing) is one and the same code that renders pages in the browser without (necessarily) hitting the server.

6. My app works offline out of the box. That’s right – as per the demo – any changes made while offline are automatically synced back to the server and pushed out to other clients as soon as a connection is re-established.
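
To make points 3 and 4 a little more concrete, here’s a rough sketch of the flavour of the code involved – the method names are as I recall them from the Derby/racer docs, so treat the exact signatures as assumptions rather than a working example:

    // In the app's routes: subscribe to the todos and render the page
    // (Derby route handlers receive the page and a scoped model).
    app.get('/', function (page, model, params) {
        model.subscribe('todos', function (err) {
            page.render();  // the same render path on server and client
        });
    });

    // Later, from an event handler in the browser: one set() updates the
    // bound parts of the local view immediately *and* is synced out to
    // every other subscribed browser.
    model.set('todos.' + todoId + '.completed', true);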

It’s still early days for DerbyJS, but it looks very promising – opening the door to loads of people with great ideas for collaborative apps who don’t have the time to implement their own socket.io real-time communication or conflict resolution. Hats off to the DerbyJS team and community!

[1] After doing similar experiments in other JS frameworks (see here and here), and being faced with the work of implementing all the server-sync, sockets, authentication etc. myself, I went looking and found both meteorjs and derbyjs. You can read a good comparison of meteorjs and derbyjs by the derbyjs folk (Note: both are now MIT licensed).


Filed under: javascript

Read more
Michael

After experimenting recently with the YUI 3.5.0 Application framework, I wanted to take a bit of time to see what other HTML5 app development setups were offering while answering the question: “How can I make HTML5 app development more fun on Ubuntu?” – and perhaps bring some of this back to my YUI 3.5 setup.

I’m quite happy with the initial result – here’s a brief (3 minute) video overview highlighting:

  • Tests running not only in a browser but also automatically on file save without even a headless browser (with pretty Ubuntu notifications)
  • Modular code in separate files (including html templates, via requirejs and its plugins)

 

 

Things that I really like about this setup:

  • requirejs - YUI-like module definitions and dependency specification makes for very clear code. It’s an implementation of the Asynchronous Module Definition “standard” which allows me to require dependencies on my own terms, like this:
    require(["underscore", "backbone"], function(_, Backbone) {
        // In here the underscore and backbone modules are loaded and
        // assigned to _ and Backbone respectively.
    })

    There’s some indication that YUI may implement AMD in its loader too. RequireJS also has a built-in optimiser to combine and minify all your required JS during your build step. With two plugins for RequireJS I can also use CoffeeScript instead of JavaScript, and load my separate HTML templates as resources into my modules (no more stuffing them all into your index.html).

  • mocha tests running on nodejs or in the browser – as shown in the above screencast. Once configured, this made it pretty trivial to add a `make watch` command to my project which runs tests automatically (using nodejs’ V8 engine) when files change, displaying the results using Ubuntu’s built-in notification system. (Mocha already has built-in Growl support for Mac users; it’d be great to get similar OSD notifications built in too.)
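
For the curious, a mocha spec is plain JS that runs unchanged under node (which is what `make watch` uses) and in the browser. Here’s a minimal sketch – the Goal object is purely illustrative, and expect.js (which footnote [1] touches on) is assumed as the assertion library:

    // Under node; in the browser expect.js is loaded via AMD instead.
    var expect = require('expect.js');

    describe('Goal', function () {
        it('starts with zero progress', function () {
            var goal = {title: 'Learn mocha', progress: 0};
            expect(goal.progress).to.be(0);
        });
    });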

The setup wasn’t without its difficulties [1], but the effort was worth it: now I have a fun environment in which to start building my dream app (we’ve all got one, right?) and continue learning. I think it should also be possible to go back and re-create this nodejs dev environment using YUI – which I’m keen to try if someone hasn’t already done something similar – or even possibly without needing nodejs at all. I think the challenge for YUI will be this: if and when most other modules can be loaded via AMD, why, as an app developer, would I want to commit to one monolithic framework release when I can cleanly pick and choose the specific versions of small, tightly-focused modules that I need (assuming my tests pass)? Or perhaps YUI will join in and begin versioning modules (and module dependencies) rather than the one complete framework, so that they’re available via any AMD loader – that would rock!

Thanks to James Burke (author of RequireJS) and the brunch team for their help!

[1] For those interested, things that were difficult getting this setup were:

  • Many JS libraries are not yet AMD-ready (or not yet officially supporting it), which means adding shims to load them correctly (or using the use plugin in some cases) – and sometimes this gets complicated (as it did for me with expect.js). Whether AMD will get widespread adoption, who knows? A result of this is that many JS libraries are designed to work within the browser or node only (ie. they assume that either window or module/exports will be available globally). See the config sketch after this list.
  • Using the CoffeeScript plugin is great for looking at the code, but the errors displayed when a test run hits a CoffeeScript parse error are often hard to decipher (although I could probably use an editor plugin to check the code on save and highlight errors).
  • A recent nodejs version isn’t yet available in the Ubuntu archives for Precise. It wasn’t difficult, but I had to install Node 0.6.12 to my home directory and put its bin directory on my path before I could get started.
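
For reference, here’s a sketch of the kind of configuration this ends up needing. The `shim` section uses the RequireJS 2.x syntax (the built-in successor to the use plugin mentioned above), and all paths and module names are illustrative:

    require.config({
        // Illustrative paths to libraries that aren't AMD-ready.
        paths: {
            jquery: 'lib/jquery',
            underscore: 'lib/underscore',
            backbone: 'lib/backbone'
        },
        // shim declares each script's dependencies and which global it
        // creates, so RequireJS can wrap it as a module.
        shim: {
            underscore: {exports: '_'},
            backbone: {
                deps: ['underscore', 'jquery'],
                exports: 'Backbone'
            }
        }
    });

    // The cs! and text! plugins then let a module pull in CoffeeScript
    // and HTML templates directly:
    require(['cs!views/goal', 'text!templates/goal.html'],
        function (GoalView, goalTemplate) {
            // GoalView is compiled from views/goal.coffee; goalTemplate
            // is the raw HTML, ready for your template engine.
        });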

If you want to try it out or look at the code, just make sure NodeJS 0.6.12 is available on your path and do:

tmp$ bzr branch lp:~michael.nelson/open-goal-tracker/backbone-test/
Branched 42 revisions. 
tmp$ cd backbone-test/
backbone-test$ make
[snip]
backbone-test$ make test
...........
✔ 12 tests complete (47ms)

Filed under: javascript, open-goal-tracker, ubuntu

Read more
Michael

I had a few hours recently to try updating my Open Goal Tracker javascript client prototype to use jQuery Mobile for the UI… and wow – it is so nice, as a developer with an idea, not having to think about certain UI issues (such as a touch interface, or just basic widget design). I can see now how ugly my previous play-prototype looked. Here’s a brief demo of the jQueryMobile version (sorry for the mumbling):

 

That’s using jQuery Mobile 1.0.1 for the UI and YUI 3.5.0PR2 for the MVC client-side framework, although I’m tempted to move over to backbone.js (which is what the YUI application framework is based on, it seems). Backbone.js has beautifully annotated source and a book – Developing Backbone.js Applications – which so far seems like very worthwhile reading material.
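
As a taste of why I’m tempted, here’s a minimal sketch of what the goal models might look like in Backbone.js (hypothetical names – nothing here is from the actual prototype):

    var Goal = Backbone.Model.extend({
        defaults: {title: '', progress: 0}
    });

    var GoalList = Backbone.Collection.extend({model: Goal});

    var goals = new GoalList();
    // Views (or anything else) can react to collection events (Backbone 0.9+).
    goals.on('add', function (goal) {
        console.log('New goal:', goal.get('title'));
    });
    goals.add({title: 'Ship the prototype'});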

The prototype can be played with at http://opengoaltracker.org/prototype_fun/


Filed under: javascript, jquery, open-goal-tracker, yui

Read more
Michael

More YUI and html5 fun

Continuing my YUI MVC experiment, I got to spend a bit of time last week playing with my prototype for Open Goal Tracker, and was blown away by the simplicity of two new html attributes: x-webkit-speech (just ‘speech’ when standardised) and contenteditable. Here’s a 2min video showing how I might use both of them in practice (in dire need of styling as usual):
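
For anyone who hasn’t seen these attributes, here’s a minimal sketch of the kind of thing the video shows – purely illustrative, and it assumes a recent webkit browser (both x-webkit-speech and its webkitspeechchange event are webkit-only for now):

    // A text input that accepts dictation (webkit-only for now).
    var goalInput = document.createElement('input');
    goalInput.setAttribute('x-webkit-speech', '');
    goalInput.addEventListener('webkitspeechchange', function () {
        console.log('Heard:', goalInput.value);
    });

    // Any element can be edited in place - no <textarea> required.
    var note = document.createElement('div');
    note.contentEditable = 'true';
    note.addEventListener('blur', function () {
        console.log('Edited note:', note.textContent);
    });

    document.body.appendChild(goalInput);
    document.body.appendChild(note);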

 

 

Behind the scenes I’m still loving YUI’s application framework together with its unit-test API, which lets me create tests like “creating the view should render stats for existing goals”.
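
As a sketch, that test looks something like the following – Y.GoalList and Y.GoalListView are hypothetical stand-ins for my app’s classes, but the Y.Test scaffolding is the standard YUI 3 pattern:

    YUI().use('app', 'test', function (Y) {
        var suite = new Y.Test.Suite('goal views');

        suite.add(new Y.Test.Case({
            name: 'GoalListView',

            // YUI Test picks up methods whose names start with "test".
            'test: creating the view should render stats for existing goals':
                function () {
                    var goals = new Y.GoalList();  // hypothetical app class
                    goals.add({title: 'Learn YUI', done: true});
                    var view = new Y.GoalListView({modelList: goals}).render();
                    Y.Assert.isNotNull(view.get('container').one('.stats'));
                }
        }));

        Y.Test.Runner.add(suite);
        Y.Test.Runner.run();
    });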

Next week I hope to start implementing the server-sync and investigating integration with Ubuntu’s notification system. Once I’ve sorted out the application-cache and build/packaging etc. for both Ubuntu and Android, I want to do an initial dummy release and then start iterating on the features that will make Open Goal Tracker actually useful for facilitating social-yet-individual learning, both within and outside of classrooms.

You can play with the little open-goal-tracker prototype (I’ve only tried webkit so far) or run the tests yourself.


Filed under: Uncategorized

Read more
Michael

After experimenting with juju and puppet the other week, I wanted to see if it was possible to create a generic juju charm for deploying any Django app using Apache+mod_wsgi, together with puppet manifests wherever possible. The resulting apache-django-wsgi charm is ready to demo (thanks to lots of support from the #juju team), but still needs a few more configuration options. The charm currently:

  1. Enables the user to specify a branch of a Python package containing the Django app/project for deploy. This python package will be `python setup.py install`’d on the instance, but it also
  2. Enables you to configure extra debian packages to be installed first so that your requirements can be installed in a more reliable/trusted manner, along with the standard required packages (apache2, libapache2-mod-wsgi etc.). Here’s the example charm config used for apps.ubuntu.com,
  3. Creates a django.wsgi and httpd.conf ready to serve your app, automatically collecting all the static content of your installed Django apps to be served separately from the same Apache virtual host,
  4. When it receives a database relation change, it creates some local settings overriding the database settings of your branch, syncs and migrates the database (a noop if it’s the second unit) and restarts apache (see the database_settings.pp manifest for more details).

Here’s a quick demo which puts up a postgresql unit and two app servers with these commands:

$ juju deploy --repository ~/charms local:postgresql
$ juju deploy --config ubuntu-app-dir.yaml --repository ~/apache-django-wsgi/ local:apache-django-wsgi
$ juju add-relation postgresql:db apache-django-wsgi
$ juju add-unit apache-django-wsgi

Things that I think need to be improved or I’m uncertain about:

  1. `gem install puppet-module` is included in the install hook (a 3rd way of installing something on the system :/). I wanted to use the vcsrepo puppet module to define bzr resource types and puppet-module-tool seems to be the way to install 3rd-party puppet modules. Using this resource-type enables a simple initial_state.pp manifest. Of course, it’d be great to have ‘necessary’ tools like that in the archive instead.
  2. The initial_state.pp manifest pulls the django app package to /home/ubuntu/django-app-branch and then pip installs it on the system. Requiring the app to be a valid python package seemed sensible (in terms of ensuring it is correctly installed with its requirements satisfied) while still allowing the user to go one step further if they like and provide a debian package instead of a python package in a branch (which I assume we would do ultimately for production deploys?)
  3. Currently it’s just a very simple apache setup. I think ideally the static file serving should be done by a separate unit in the charm (ie. an instance running a stripped down apache2 or lighttpd). Also, I would have liked to have used an ‘official’ or ‘blessed’ puppet apache module to benefit from someone else’s experience, but I couldn’t see one that stood out as such.
  4. Currently the charm assumes that your project contains the configuration info (ie. a settings.py, urls.py etc.), of which the database settings can be simply overridden for deploy. There should be an additional option to specify a configuration branch (and it shouldn’t assume that you’re using django-configglue), as well as other options like django_debug, static_url etc.
  5. The charm should also export an interface (?) that can be used by a load balancer charm.

Filed under: django, juju

Read more
Michael

I’ve been playing around with YUI’s new application framework over the past few weeks, basically building on the ToDo list example app and extending it into a basic interface for the Open Goal Tracker project. Here’s a 2 min demo:

I’m loving the MVC separation in the client code, separation of templates, ability to use real links for navigating within the client (which will also work when copy-n-pasted to hit the server) etc.

If anyone is interested, you can play with the code yourself:

$ bzr branch lp:~michael.nelson/open-goal-tracker/html5-client-play
$ cd html5-client-play
$ fab test

which will setup the dev environment and run the django tests… followed by

$ fab runserver

To play with the demo html5 client: http://localhost:8000/static/index.html
To run the javascript unit tests: http://localhost:8000/static/app/tests/test_models.html

or find out more about the purpose of the app via the first 2 mins of this video from a year ago outlining where I hope to go with the application: http://www.youtube.com/watch?v=OT7P-u-86sM (when time permits :) ).


Filed under: javascript, yui

Read more
Michael

I’ve been playing with juju for a few months now in different contexts and I’ve really enjoyed the ease with which it allows me to think about services rather than resources.

More recently I’ve started thinking about best-practices for deploying services using juju, while still using puppet to setup individual units. As a simple experiment, I wrote a juju charm to deploy an irssi service [1] to dig around. Here’s what I’ve found so far [2]. The first is kind of obvious, but worth mentioning:

Install hooks can be trivial:

#!/bin/bash
sudo apt-get -y install puppet

juju-log "Initialising machine state."
puppet apply $PWD/hooks/initial_state.pp

Normally the corresponding manifest (see initial_state.pp) would be a little more complicated, but in this example it’s hardly worth mentioning.

Juju config changes can utilise Puppet’s Facter infrastructure:

This enables juju config options to be passed through to puppet, so that config-changed hooks can be equally simple:

#!/bin/bash
juju-log "Getting config options"
username=`config-get username`
public_key=`config-get public_key`

juju-log "Configuring irssi for user"
# We specify custom facts so that they're accessible in the manifest.
FACTER_username=$username FACTER_public_key=$public_key puppet apply $PWD/hooks/configured_state.pp

In this example, it is the configured state manifest that is more interesting (see configured_state.pp). It adds the user to the system, sets up byobu with an irssi window ready to go, and adds the given public ssh key enabling the user to login.

The same would go for other juju hooks (db-relation-changed etc.), which is quite neat – getting the best of both worlds: the charm user can still think in terms of deploying services, while the charm author can use puppet’s declarative syntax to define the machine states.

Next up: I hope to experiment with an optional puppet master for a real project (something simple like the Ubuntu App directory), so that

  1. a project can be deployed without the (probably private) puppet-master to create a close-to-production environment, while
  2. configuring a puppet-master in the juju config would enable production deploys (or deploys of exact replicas of production to a separate environment for testing).

If you’re interested in seeing the simple irssi charm, the following 2min video demos:

# Deploy an irssi service
$ juju deploy --repository=/home/ubuntu/mycharms  local:oneiric/irssi
# Configure it so a user can login
$ juju set irssi username=michael public_key=AAAA...
# Login to find irssi already up and running in a byobu window
$ ssh michael@new.ip.address

and the code is on Launchpad.

[1] Yes, irssi is not particularly useful as a juju service (as I don’t want multiple units, or relating it to other services etc.), but it suited my purposes for a simple experiment that also automates something I can use for working in the cloud.

[2] I’m not a puppet or juju expert, so if you’ve got any comments or improvements, don’t hesitate.


Filed under: juju, puppet, ubuntu

Read more
Michael

Working in the cloud

For a while now I’ve done all my development work and irc communication via SSH and byobu – I love the freedom of just grabbing my netbook (ie. switching computers) and heading to a cafe and picking up right where I left off (or even letting tests run while I walk).

The one pain-point was that I SSH’d into my own box at home – and if my home connection went down (anything from a thunderstorm to a curious kid pulling a cord), I had to then work on my much slower netbook.

Now, after reading on an email thread that Aaron was doing something similar, I’m running a tiny cloud instance for my irc and development work, and documenting everything that I’m doing to get my dev environment just right – so that I can automate the setup (not really something for an ensemble formula – just a simple script).

That, and having all my music in the cloud for the past 6 months, makes switching computers and/or physical locations an unobtrusive part of my work-day.

If anyone has small tips that make their own dev environment enjoyable, let me know!


Filed under: Uncategorized

Read more
Michael

What is the dream setup for developing and deploying Django apps? I’m looking for a solution that I can use consistently to deploy apps to servers where I may or may not have the ability to install system packages, or where I might need my app temporarily to use a newer version of a system-installed package while giving other apps running on the same server breathing space to update (think: updating a system-installed Django package on a server running four independent apps).

Specifically, the goals I have for this scenario are:

  • It should be easy to use for both development and deployment (using standard tools and locations so developers don’t need to learn the environment),
  • Updating any virtualenv environment should be automatic but transparent (ie. if the pip requirements.txt changes, the next time I run the tests, the dev server or the deployed server, it’ll automatically ensure the virtualenv is correct),
  • I shouldn’t have to wait unnecessarily for virtualenvs to be created (ie. if I make a change to the requirements to try a new version of a package, and then change it back, I don’t want to re-create the original virtualenv). Similarly, if I revert a deployment to a previous version, the previous virtualenv should still be available.
  • For deployment, the virtualenv shouldn’t unnecessarily replace system python packages, but allow this as an option (ie. not a --no-site-packages virtualenv).

There are a lot of virtualenv/fabric posts out there for both development and deployment, and using a SHA of the requirements.txt seems an obvious way to go. What I ended up with for my project was this develop and deploy with virtualenv snippet which so far is working quite well (although I’m yet to try a deploy where I override system packages). If the deployed version is using virtualenv exclusively, the requirements.txt file can be shared, but otherwise it would just be a matter of including the requirements.txt for the deploy with the other configuration data (settings.py etc.).

If you can see any reasons why this is not a good idea, or improvements, please let me know!


Filed under: django, python

Read more
Michael

If you’re working on a project that bootstraps a development environment (using virtualenv or similar) it can be painful to bootstrap the environment for every bug/feature branch that you work on. There are lots of ways you can avoid this (local cache etc.), but if you’re using bzr to manage your branches, a single working directory for all your branches can solve the problem for you. Most bzr users will be familiar with this, but people new to bzr might benefit.

The one-time setup is pretty straightforward – create a repository with a local copy of trunk, and a branch of trunk for your new feature/bug:

$ cd dev/
$ bzr init-repo myproject
$ cd myproject
$ bzr branch lp:myproject trunk
$ bzr branch trunk myfeaturex

From now on, you can simply `cd trunk && bzr pull` when you want to update your local trunk [1]. Now we’ll create a light-weight checkout and bootstrap our environment there. I tend to call this directory ‘current_work’, but whatever works for you. Also, most of the projects I’m working on use fabric for setting up the virtualenv and other tidbits, but replace that with your own bootstrap command:

$ bzr checkout --lightweight myfeaturex current_work
$ cd current_work && fab bootstrap

Assuming everything went well, you now have your development environment setup and are ready to work on myfeaturex. Make changes, commit etc., push to launchpad – it’ll all be for your myfeaturex branch. But what if half-way through, you need to switch and work on an urgent bug? Easy: commit (or shelve) any changes you’ve got for your current branch, then:

$ cd ..
$ bzr branch trunk criticalbugx
$ cd current_work
$ bzr switch ../criticalbugx  [2]

Edit 2011-05-23: See Ricardo’s comment – the above 4 lines can be replaced by his two.

That’s it – your current_work directory now reflects the criticalbugx branch and is already bootstrapped [3]. You can work away, commit, push etc., and then switch back to your original branch with:

$ bzr switch ../myfeaturex

If you’re working on large branches, once you’ve got the above workflow down, you may next want to try keeping large features reviewable with bzr pipelines.

[1] It’s a good idea to never do any work in your trunk tree… always work in a branch of trunk.
[2] The ../ is not actually required, but when you’ve lots of branches with long names, it means you can tab-complete the branch name.
[3] Of course, if one of your branches changes any virtualenv requirements or similar, you’ll need to re-bootstrap, but that isn’t needed often in practice. Or sometimes a new version of a dependency is released and you won’t get it until you re-bootstrap.


Filed under: bzr, python

Read more
Michael

Learning a new language is fun – finding new ways of thinking about old problems and simple ways of expressing new ideas.

As a small learning project for Golang, I set out the other day to experiment writing a simple form field validation library in my spare time – as it seems there is not yet anything along the lines of Django’s form API (email thread on go-nuts).

The purpose was to provide an API for creating forms that can validate http.Request.Form data, cleaning the data when it is valid, and collecting errors when it is not.

The initial version provides just CharField, IntegerField and RegexField, allowing form creation like:

    egForm := forms.NewForm(
        forms.NewCharField("description"),
        forms.NewIntegerField("purchase_count"))         

    egForm.SetFormData(request.Form)
    if egForm.IsValid() {
        // process the now populated egForm.CleanedData() and 
        // redirect.
    } else {
        // Use the now populated egForm.Errors map to report
        // the validation errors.
    }

The GoForms package is installable with `goinstall launchpad.net/goforms`, or you can browse the goforms code on Launchpad (forms_test.go and fields_test.go have examples of the cleaned data and errors). Let me know if you see flaws in the direction, or better ways of doing this in Go.

As a learning project it has been great – I’ve been able to use GoCheck for the tests, use embedded types (for sharing BaseField functionality – similar to composition+delegation without the bookkeeping) and start getting a feel for some of the subtleties of working with interfaces and other types in Go (this felt like all the benefits of z3c interfaces, without any of the overhead). Next I hope to include a small widget library for rendering basic forms/fields.


Filed under: golang, launchpad, testing

Read more
Michael

For quite a while I’ve been wanting to get stuck into learning the Go programming language, after a colleague at Canonical (Gustavo Niemeyer) wrote enthusiastically about it… and today I took a bit of time to get started.

Getting set up on my Ubuntu Maverick VM was pretty straightforward:

  • Add Gustavo’s software channel and install golang:

    sudo add-apt-repository ppa:niemeyer/ppa
    sudo apt-get update
    sudo apt-get install golang
    export GOROOT=/usr/lib/go

  • This also adds ‘gorun’, which effectively saves you having to compile and link each time you edit your Go source file – nice.
  • I then had to grab the Go source to get the vim syntax highlighting etc. (files are provided for most editors in the source).

Whenever learning a new language I always find it incredibly helpful to be able to see all the options available – code auto-completion. So finally, I installed gocode, but had to make the following changes to get it to `make install`:

  • Installed golang-weekly (rather than golang) to get a more recent version…
  • Reverted the latest commit to gocode, which was an update for the latest golang weekly release that hadn’t yet hit the PPA.

With those changes, gocode installed and worked beautifully – learning with auto-completion is like seeing with peripheral vision for the first time.

Next week I hope to check out the gocheck testing library as well as the Effective Go tutorial. As much as I love Python, it’s great getting stuck into a new language!

Filed under: golang

Read more
Michael

I really enjoyed the emphasis of Ben’s Code-reviews as relationship builders post over the weekend. The focus on healthy learning relationships both for new and old is great – something I’ve tried to focus on myself over the past few years working in distributed teams.

On a related note, I’ve wondered of late whether I could use “Agreed” rather than “Approved” when reviewing a branch. Agreed (IMO) implies one type of relationship – peer discussion resulting in agreement – rather than an approval from an authority (perhaps without interaction). It is a small and maybe insignificant distinction…


Filed under: Uncategorized

Read more
Michael

One of the things I love is reviewing smaller targeted branches – 400-500 lines focused on one or two particular points. But you don’t want to be blocked on the review of one branch before you can start the next dependent branch. Here’s a quick screencast (with text version below) that I made for some colleagues showing how Bzr’s pipeline plugin can help your workflow in this situation:

Text version

Assuming you’ve already got a lightweight checkout of your project code, create a new branch for part-1 of your feature and switch to it:

$ bzr branch trunk part-1
$ cd current_work;bzr switch ../part-1

Make your changes, commit and push to Launchpad:

$ bzr commit -m "Finished part 1 of feature-x that does x, y and z";bzr push

While that branch is being reviewed, you can start on the next part of the feature:

$ bzr add-pipe part-2
Created and switched to pipe "part-2"
$ bzr pipes
    part-1
*   part-2

Now make your further changes for the feature and commit/push those as above (Launchpad will create a new branch for part-2). Now let’s say that the review for part-1 comes back, requiring a small change, so we switch back to part-1, make the change, commit and pump it (so that our part-2 branch will include the change also):

$ bzr switch part-1
$ bzr pipes
*   part-1
    part-2
(make changes)
$ bzr commit -m "Changes from review";bzr pump

Now we can see the changes have been merged into the part-2 branch:

$ bzr switch part-2
$ bzr log --limit 1 --line
102: Michael Nelson 2011-01-13 [merge] Merged part-1 into part-2.

So, let’s say we’re now ready to submit part-2 for review. The handy lp-submit bzr plugin will automatically know to set part-1 as a prerequisite branch, so that the diff shown on the part-2 merge proposal includes only the changes from part-2 (as well as making sure that your branches are pushed up to LP):

$ bzr lp-submit
Pushing to lp:~michael.nelson/rnr-server/part-1
Pushing to lp:~michael.nelson/rnr-server/part-2

Nice! Now, what if there have been some changes to trunk in the meantime that you want to include in your branch? Easy, just:

$ bzr pump --from-submit

and pipelines will automatically merge trunk into part-1, commit, then merge part-1 into part-2.

More info is available with `bzr help pipeline` (once the plugin is installed), or on the Bazaar Pipeline wiki page (which also covers how to get it).


Filed under: bzr, python

Read more
Michael

My direct team-mates at Canonical live everywhere from the US west-coast through to Poland, so we don’t have the option of doing a ‘traditional’ stand-up each day.

In the past we’ve done IRC standups – each of us pasting our notes at one point in the day when we’re all online – which helps on a few fronts (recording what we’re up to) but has quite a few drawbacks in my opinion (see Vincent’s thoughts on IRC standups also). A lot of us on the team admitted that we didn’t always give the IRC standups our full attention.

Right now we’re trying an experiment which is kind of neat. We’re doing voice standups in pairs/triplets with people in similar timezones, but adding our standup notes to a wiki page before the call. The advantage is that the calls are easy – I can read through Lukasz’s notes and we just discuss any issues or successes as well as the plan for today, and vice versa. Also, as we all subscribe to the wiki page for email notifications, we actually read everyone else’s notes from different time-zones asynchronously (and all stakeholders can take part in calls or read the notes as desired).

Personally, having a brief voice call as part of my day is a huge benefit – just touching base with another voice from work, and having a bit of a chat after going through our stand-up gives a more human touch to working from home. It’ll be interesting to see whether others on the team find it more or less helpful.


Filed under: Uncategorized

Read more
Michael

Moving on from Launchpad

I’ve been working at Canonical for nearly two years now, in what I think of as a support role for all the awesome Debian, Ubuntu and application developers out there who use Launchpad to publish their software in Ubuntu. Most of my time has been spent making it easier to get software to users of Ubuntu (technically, working on Launchpad’s archive features such as Personal Package Archives), but I’ve also enjoyed improving processes for designing new features collaboratively in an open environment.

It’s been a great two years and I’ve learned an incredible amount while working with the Launchpad community. And although I’m not leaving the community, I will be spending the majority of my time developing tools on a different team in Canonical. I’m excited to be spending more time doing Django development, and am hoping that the skills I’ll be building there will give me opportunities to work on a bunch of open-source education-related tools in my own time to support life-based open-learning – something I’m quite passionate about…

The work that we all do in the open-source world impacts thought and philosophy far beyond the circles we work in. Principles from open source software have already been changing the way people think about copyright in education (the free high-school science texts project is an incredible example of that), but I believe they will also change the way we facilitate the learning of the next generations (from my edu-focus blog). Educators in schools and universities are already adapting to new mediums and principles but – in the same way that early film commercials re-implemented the stage on film – educators are often just re-implementing the classrooms of the past online (with “Learning management systems” that deliver content and keep the teacher/institution managing the learning). Schools and other education institutions need either a revolution or speedy evolution, and I’m excited to be a part of that both indirectly through my work and in my own time.


Filed under: Uncategorized

Read more
Michael

Thanks to the excellent work from the Launchpad Code team and others (James and William, to name just two), here’s a preview of how easy it will soon be to build a branch into published binaries in your PPA:

http://blip.tv/file/3738068

(I’ll update with the inline flash version once it’s been converted).


Filed under: launchpad

Read more
Michael

Having just been through the process of trying to involve people in open source design and development, Maris Fogels (another Launchpad engineer) and I took some time to document/reflect on the process. The strongest point that came out of the Q&A for me is the importance of voice conversations for iterating a design like this.

In this Q&A Michael shares his experience helping design the ‘Build Branch to Archive’ user interface:

Ubuntu is beginning to manage its source packages as Bazaar branches, as part of the DistributedDevelopment project.

One incredibly important activity for source packages is building them into binary packages that can be downloaded and installed. This spec explains how Launchpad can make this a wonderful experience.

Mars: Have you done any work designing user interfaces before this?

Michael: A little – we went through a similar process for the PPA redesign as part of Launchpad 3.0. Prior to that, no, only your typical “Oh this needs a UI – let’s just start creating the UI that *we* think the target users will want.”

We had tried (in the Launchpad Soyuz team) doing something similar to this design process previously (particularly, defining stories for an interaction before coding, and getting feedback from the people representing the target users), but no matter how hard we tried, we weren’t able to get the time with them or any feedback, and so we ended up implementing the design and landing it without their feedback, which was a mistake – and a lesson learned which influenced the process I used here.

Mars: What tools and resources did you use as you went through the project?

Michael: Balsamiq, wikis, email discussions and phone calls. Balsamiq was great as a way to create UIs quickly, modify them easily as we learned more, and share them for others to modify (I just pushed a bzr branch that I kept up to date).

Mars: Was there a particular process that you followed?

Michael: Basically I tried to:

  1. Identify and describe the use-cases,
  2. Create mockups for each use-case,
  3. Get feedback from as many people as possible,

and then constantly update or add new use-cases and mockups as more info came in. Looking back, it seems we went through three definite iterations.

Mars: How long did it take to design the page?

Michael: We just designed the main interactions to build a branch into a PPA, which is 2 pages/overlays (they’ll be overlays if js is enabled). I think the process (3 iterations above of gathering feedback, phone-calls etc.) spanned 3 weeks, on and off.

Mars: Do you think the problem was well-understood before you started working on the user interface?

Michael: I think different people understood their own domains of the build-from-branch interaction, and everyone had ideas about how the other parts should work, but no, I don’t think any of us understood the use-cases properly – particularly the issues around re-using recipes in the UI – until all the ideas and feedback were collected and put together.

Mars: Did the design of the interface change much as work progressed?

[Image: Old vs. New]

Michael: Yes, quite a bit. I had initially (mis)understood that we needed to work with the current implementation of bzr-builder, but after the first iteration, James Westby pointed out that bzr-builder could be updated to support the requirements of the UI. This freed us to consider sharing recipes in the UI in a way that would not have been possible otherwise.

Collecting and expanding the use-cases naturally required updates to the design. Also feedback and questions from various people helped to keep refocusing on what people want to be able to do (for example, “This recipe supports building to Ubuntu Lucid, Ubuntu 9.10 and Ubuntu 9.04”, while still allowing the person using the recipe to select just the distroseries they want).

Mars: How did you get feedback on your designs? Was there a good response?

Michael: (Just an aside, there’s no ‘y’ there – they are really our designs, I was just filtering and focusing the collected thoughts and ideas into something visible.)

So as above, email to the dev list, then irc chats, but the biggest throughput of communication and feedback came via actual calls. It’s just so much easier to clear misunderstandings on the spot if you walk through the process together. I had multiple calls with 6 different people, all of which were invaluable.

Mars: Looking back, what was the most difficult part of the process?

Michael: (1) Getting real user input and feedback and (2) getting feedback from professional designers.

With (1) the best I could do with the time I had was to communicate with representatives of target users, which I think worked quite well. And (2) was difficult mostly due to the nature of the problem for build-from-branch. IMO it requires someone to sit physically in the office and walk through the process with a designer, something I couldn’t do, and hasn’t been done yet as far as I know.

Mars: What did you most enjoy about the work?

Michael: Bringing lots of ideas together from lots of people, and clarifying the different needs of different people to build a common understanding of the issues involved.

But, to be honest, I am also enjoying getting back to soyuz coding. I enjoy doing this kind of thing for a week or two, but after that I’m always keen to get back to doing some coding.

I think I (and James?) also enjoyed the fact that we could work on defining the interaction *before* anything had been coded. Too often we get in there and just start coding something without thinking through the use-cases properly (or even defining them), and it ends up causing a lot of pain. Like everything, it’s a learning process.

Mars: Any final words for developers looking to try this themselves?

Michael: Sure, if you’d like a change of rhythm and enjoy facilitating conversations and ideas, give it a go – it’s a great chance to interact with other people in different areas and build your understanding of what people actually want to do with your product.


Filed under: design, launchpad

Read more
Michael

Every time that I upgrade my operating system, I feel like I’m receiving a whole bunch of gifts from people I don’t know – which actually is what’s happening! As I learn more about the Debian and Ubuntu communities, and Launchpad (which I’ve been working on for the past year), I never stop feeling awed at the amount of effort that is being filtered through Launchpad (in both directions) in terms of bug-fixes, new packages, translations etc., into end-products that rock!

Having upgraded my netbook a few weeks ago, over the weekend I decided to upgrade my work machine to test the latest development release, Ubuntu Lucid, and the only difficult part of the process was having to install Windows XP to update my BIOS (because Samsung, the manufacturer of my work laptop, only provides BIOS updates as Win32 applications). Other than that, it’s been a joy seeing all the improvements (like the open-source nvidia support – a few clicks and my dual monitors worked perfectly, something I could previously only do with the proprietary drivers).

So, whether it is heard or not, I want to say a big thanks to the millions involved for the gift – my Lucid experience has been wonderful.


Read more