Canonical Voices

Posts tagged with 'experiment'

Michael Hall

I’ve been using Ubuntu on my only phone for over six months now, and I’ve been loving it. But all this time it’s been missing something, something I couldn’t quite put my finger on. Then, Saturday night, it finally hit me: it’s missing the community.

That’s not to say that the community isn’t involved in building it: all of the core apps have been community developed, as have several parts of our toolkit and even the platform itself. Everything about Ubuntu for phones is open source and open to the community.

But the community wasn’t on my phone. Their work was, but not the people.  I have Facebook and Google+ and Twitter, sure, but everybody is on those, and you have to either follow or friend people there to see anything from them. I wanted something that put the community of Ubuntu phone users on my Ubuntu phone. So, I started to make one.

Community Cast

Community Cast is a very simple, very basic, public message broadcasting service for Ubuntu. It’s not instant messaging, or social networking. It doesn’t do chat rooms or groups. It isn’t secure, at all.  It does just one thing: it lets you send a short message to everybody else who uses it. It’s a place to say hello to other users of Ubuntu phone (or tablet).  That’s it, that’s all.

As I mentioned at the start, I only realized what I wanted Saturday night, but after spending just a few hours on it, I’ve managed to get a barely functional client and server, which I’m making available now to anybody who wants to help build it.


The server piece is a very small Django app, with a single BroadcastMessage data model, using the Django REST Framework to let you list and post messages via JSON. To keep things simple, it doesn’t do any authentication yet, so it’s certainly not ready for any kind of production use.  I would like it to get Ubuntu One authentication information from the client, but I’m still working out how to do that.  I’ve already thrown this very basic server up on our internal testing OpenStack cloud, but it’s running the built-in HTTP server and an sqlite3 database, so don’t be surprised if it slows to a crawl or stops working.  Like I said, it’s not production ready.  But if you want to help me get it there, you can get the code with bzr branch lp:~mhall119/+junk/communitycast-server, then just run syncdb and runserver to start it.
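Since there’s no authentication or validation yet, the whole API boils down to storing and listing short messages. Here’s a stdlib-only sketch of that contract; the field names are my guesses for illustration, not the actual BroadcastMessage model:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical shape of a BroadcastMessage record; the real Django model
# in lp:~mhall119/+junk/communitycast-server may name things differently.
@dataclass
class BroadcastMessage:
    sender: str
    text: str
    posted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

messages = []  # stands in for the sqlite3-backed table

def post_message(sender, text):
    """POST — store a new broadcast and return it as a dict."""
    msg = BroadcastMessage(sender, text)
    messages.append(msg)
    return asdict(msg)

def list_messages():
    """GET — every broadcast, serialized to JSON."""
    return json.dumps([asdict(m) for m in messages])

post_message("mhall119", "Hello, Ubuntu phone users!")
print(list_messages())
```

That really is the whole service: one record type, one list endpoint, one post endpoint.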


The client is just as simple and unfinished as the server (I’ve only put a few hours into them both combined, remember?), but it’s enough to use. Again there’s no authentication, so anybody with the client code can post to my server, but I want to use Ubuntu Online Accounts to authenticate a user via their Ubuntu One account. There’s also no automatic updating; you have to press the refresh button in the toolbar to check for new messages. But it works. You can get the code for it with bzr branch lp:~mhall119/+junk/communitycast-client, and it will by default connect to my test instance.  If you want to run your own server, you can change the baseUrl property on the MessageListModel to point to your local (or remote) server.
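For the curious, here’s roughly what talking to the server looks like from any client. This is a hedged sketch: the post doesn’t spell out the endpoint, so the /messages/ path and field names below are my assumptions, and the request is only constructed, not sent:

```python
import json
import urllib.request

# Hypothetical base URL; in the real client this is the baseUrl property
# on the MessageListModel.
base_url = "http://example.com/communitycast"

def build_post(sender, text):
    """Build (but don't send) the JSON POST a client would make."""
    body = json.dumps({"sender": sender, "text": text}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/messages/",   # assumed path, not confirmed by the post
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_post("mhall119", "Hello from my Ubuntu phone!")
print(req.full_url, req.get_method())
```

With no authentication in place, that request is literally all it takes to broadcast to everyone.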


There isn’t much to show, but here’s what it looks like right now.  I hope that there’s enough interest from others to get some better designs for the client and help implementing them and filling out the rest of the features on both the client and server.


Not bad for a few hours of work.  I have a functional client and server, with the server even deployed to the cloud. Developing for Ubuntu is proving to be extremely fast and easy.


Read more
Michael Hall

When we announced the Ubuntu Edge crowd-funding campaign a week ago, we had one hell of a good first day.  We broke records left and right, we sold out of the first round of perks in half the time we expected, and we put the campaign well above the red line we needed to reach our goal.  Our second day was also amazing, and when we opened up a new round of perks at a heavy discount the third day we got another big boost.

But as exciting and record-breaking as that first week was, we couldn’t escape the inevitable slowdown that the Kickstarter folks call “the trough”.  Our funding didn’t stop, you guys never stopped, but it certainly slowed way down from its peak.  We’ve now entered a period of the crowd-funding cycle where keeping momentum going is the most important thing. How well we do that will determine whether or not we’re close enough to our goal for the typical end-of-cycle peak to push us over the edge.

And this is where we need our community more than ever, not for your money but for your ideas and your passion.  If you haven’t contributed to the campaign yet, what can we offer that would make it worthwhile for you?  If your friends haven’t contributed yet, what would it take to make them interested?  We want to know what perks to offer to help drive us through the trough and closer to the Edge.

Our Options

So here’s what we have to work with.  We need to raise about $24 million by the end of August 21st.  That’s a lot, but if we break it down by orders of magnitude we get the following combinations:

  • 1,000,000 people giving $24 each
  • 100,000 people giving $240 each
  • 10,000 people giving $2,400 each
  • 1,000 people giving $24,000 each

Now finding ways to get people to contribute $24 is easy, but a million people is a lot of people.  1,000 or even 10,000 people isn’t that many, but finding things that they’ll part with $2,400 for is a challenge, even more so for $24,000.
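The combinations above are just the goal divided by successive powers of ten, which is easy to sanity-check:

```python
goal = 24_000_000  # dollars still needed by August 21st

# Each order of magnitude trades contributors against amount per person:
for contributors in (1_000_000, 100_000, 10_000, 1_000):
    per_person = goal // contributors
    print(f"{contributors:>9,} people giving ${per_person:,} each")
```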

That leaves us with one order of magnitude that I think makes sense. 100,000 people is a lot, but not unreasonable.  Previous large crowd-funding campaigns have reached 90,000 contributors while raising only a fraction of what we’re trying for, so that many people is an attainable goal.  Plus $240, while more than an impulse purchase, still isn’t an unreasonable amount for a lot of people to part with, especially if we’re giving them something of similar real value in return.

Now it doesn’t have to be exactly $240, but think of perk ideas that would be around this level, something less than the cost of a phone, but more than the Founder levels.

Our Limits

Now, for the limitations we have.  I know everybody wants to see $600 phones again, and that would certainly be an easy way to boost the campaign.  But the manufacturing estimate we have is that $32 million will build only 40,000 phones.  That’s $800 per phone.  That’s something we can’t get away from.  Whatever we offer as perks, we have to average at least $800 per phone.  We were able to offer perks for less than that because we projected the other perk levels to help make up the difference.  So if you’re going to suggest a lower-priced phone perk, you’re going to have to offer some way to make up the difference.

You also need to consider the cost of offering the perk, as a $50 t-shirt doesn’t actually net $50 once you take out the cost of the shirt itself, so we can’t offer $240 worth of merchandise in exchange for a $240 contribution. But you could probably offer something that costs $20 to make in exchange for a $240 contribution.

Our Challenge

So there’s the challenge for you guys.  I’ve been thinking of this for over a week now, and have offered my ideas to those managing the campaign.  Often they pointed out some flaw in my reasoning or estimates, but some ideas they liked and might try to offer.  I can’t promise that your ideas will be offered, but I can promise to put them in front of the people making those decisions, and they are interested in hearing from you.

Now, rather than trying to cultivate your ideas here on my blog, because comments are a terrible place for something like that, I’ve created a Reddit thread for you.  Post your ideas there as comments, upvote the ones you think are good, downvote the ones you don’t think are possible, leave comments and suggestions to help refine the ideas.  I will let those running the campaign know about the thread, and I will also be taking the most popular (and possible) ideas and emailing them to the decision makers directly.

We have a long way to go to reach $32 million, but it’s still within our reach.  With your ideas, and your help, we will make it to the Edge.


Read more
Michael Hall

UPDATE: A command porting walk-through has been added to the documentation.

Back around UDS time, I began work on a reboot of Quickly, Ubuntu’s application development tool.  After two months and just short of 4000 lines of code written, I’m pleased to announce that the inner workings of the new code are very nearly complete!  Now I’ve reached the point where I need your help.

The Recap

First, let me go back to what I said needed to be done last time.  Port from Python 2 to Python 3: Done. Add built-in argument handling: Done. Add meta-data output: Well, not quite.  I’m working on that though, and now I can add it without requiring anything from template authors.

But here are some other things I did get done. Add Bash shell completion: Done.  Add a Help command (that works with all other commands): Done.  Create command class decorators: Done.  Support templates installed in any XDG_DATA_DIRS: Done.  Allow template overriding on the command-line: Done.  Start documentation for template authors: Done.

Now it’s your turn

With the core of the Quickly reboot nearly done, focus can now turn to the templates.  At this point I’m reasonably confident that the API used by the templates and commands won’t change (at least not much).  The ‘create’ and ‘run’ commands from the ubuntu-application template have already been ported, I used those to help develop the API.  But that leaves all the rest of the commands that need to be updated (see list at the bottom of this post).  If you want to help make application development in Ubuntu better, this is a great way to contribute.

For now, I want to focus on finishing the port of the ubuntu-application template.  This will reveal any changes that might still need to be made to the new API and code, without disrupting multiple templates.

How to port a Command

The first thing you need to do is understand how the new Quickly handles templates and commands.  I’ve started on some documentation for template developers, with a Getting Started guide that covers the basics.  You can also find me in #quickly on Freenode IRC for help.

Next you’ll need to find the code for the command you want to port.  If you already have the current Quickly installed, you can find them in /usr/share/quickly/templates/ubuntu-application/, or you can bzr branch lp:quickly to get the source.

The commands are already in Python, but they are stand-alone scripts.  You will need to convert them into Python classes, with the code to be executed being called in the run() method.  You can add your class to the ./data/templates/ubuntu-application/ file in the new Quickly branch (lp:quickly/reboot).  Then submit it as a merge proposal against lp:quickly/reboot.
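As a rough illustration of what that conversion looks like, here is a self-contained sketch. The BaseCommand stand-in below is mine, made so the example runs on its own; the real base class lives in lp:quickly/reboot and its exact API may differ:

```python
# Stand-in for the real BaseCommand in lp:quickly/reboot, defined here
# only so this sketch is self-contained.
class BaseCommand:
    name = None

    def run(self, argv=None):
        raise NotImplementedError

# Before: a stand-alone script, roughly
#   #!/usr/bin/env python
#   print("Running the project...")
#
# After: the same logic wrapped in a command class, with the script body
# moved into the run() method.
class RunCommand(BaseCommand):
    name = "run"

    def run(self, argv=None):
        # Body of the old stand-alone script goes here.
        print("Running the project...")
        return 0

RunCommand().run()
```

The port is mostly mechanical: the old script becomes the run() body, and the class gains a name plus whatever arguments the command needs.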

Grab one and go!

So here’s the full list of ubuntu-application template commands.  I’ll update this list with progress as it happens.  If you want to help, grab one of the TODO commands, and start porting.  Email me or ping me on IRC if you need help.

add: TODO
configure: TODO
create: DONE!
debug: TODO
design: TODO
edit: DONE!
license: TODO
package: DONE!
release: TODO
run: DONE!
save: DONE!
share: TODO
submitubuntu: TODO
test: TODO
tutorial: TODO
upgrade: TODO

Read more
Michael Hall

The Ubuntu Skunkworks program is now officially underway, we have some projects already staffed and running, and others where I am gathering final details about the work that needs to be done.  Today I decided to look at who has applied, and what they listed as their areas of interest, programming language experience and toolkit knowledge.  I can’t share details about the program, but I can give a general look at the people who have applied so far.

Most applicants listed more than one area of interest, including ones not listed in Mark’s original post about Skunkworks.  I’m not surprised that UI/UX and Web were two of the most popular areas.  I was expecting more people interested in the artistic/design side of things though.

Not as many people listed multiple languages as multiple areas of interest.  As a programmer myself, I’d encourage other programmers to expand their knowledge to other languages.  Python and web languages being the most popular isn’t at all surprising.  I would like to see more C/C++ applicants, given the number of important projects that are written in them.  Strangely absent was any mention of Vala or Go; surely we have some community members who have experience with those.

The technology section had the most unexpected results.  Gtk has the largest single slice, sure, but it’s still much, much smaller than I would have expected.  Qt/QML even more so, where are all you KDE folks?  The Django slice makes sense, given the number of Python and Web applicants.

So in total, we’ve had a pretty wide variety of skills and interests from Skunkworks applicants, but we can still use more, especially in areas that are under-represented compared to the wider Ubuntu community.  If you are interested, the application process is simple: just create a wiki page using the ParticipationTemplate, and email me the link (

Read more
Michael Hall

Shortly after the Ubuntu App Showdown earlier this year, Didier Roche and Michael Terry kicked off a series of discussions about a ground-up re-write of Quickly.  Not only would this fix many of the complications app developers experienced during the Showdown competition, but it would also make it easier to write tools around Quickly itself.

Unfortunately, neither Didier nor Michael were going to have much time this cycle to work on the reboot.  We had a UDS session to discuss the initiative, but we were going to need community contributions in order to get it done.


I was very excited about the prospects of a Quickly reboot, but knowing that the current maintainers weren’t going to have time to work on it was a bit of a concern.  So much so, that during my 9+ hour flight from Orlando to Copenhagen, I decided to have a go at it myself. Between the flight, a layover in Frankfurt without wifi, and a few late nights in the Bella Sky hotel, I had the start of something promising enough to present during the UDS session.  I was pleased that both Didier and Michael liked my approach, and gave me some very good feedback on where to take it next.  Add another 9+ hour flight home, and I had a foundation on which a reboot can begin.

Where it stands now

My code branch is now a part of the Quickly project on Launchpad; you can grab a copy of it by running bzr branch lp:quickly/reboot.  The code currently provides some basic command-line functionality (including shell completion), as well as base classes for Templates, Projects and Commands.  I’ve begun porting the ubuntu-application template, reusing the current project_root files, but built on the new foundation.  Currently only the ‘create’ and ‘run’ commands have been converted to the new object-oriented command class.

I also have examples showing how this new approach will allow template authors to easily sub-class Templates and Commands, by starting both a port of the ubuntu-cli template, and also creating an ubuntu-git-application template that uses git instead of bzr.

What comes next

This is only the very beginning of the reboot process, and there is still a massive amount of work to be done.  For starters, the whole thing needs to be converted from Python 2 to Python 3, which should be relatively easy except for one area that does some import trickery (to keep Templates as Python modules, without having to install them to PYTHONPATH).  The Command class also needs to gain argument parameters, so they can be easily introspected to see what arguments they can take on the command line.  And the whole thing needs to gain a structured meta-data output mechanism so that non-Python applications can still query it for information about available templates, a project’s commands and their arguments.

Where you come in

As I said at the beginning of the post, this reboot can only succeed if it has community contributions.  The groundwork has been laid, but there’s a lot more work to be done than I can do myself.  Our 13.04 goal is to have all of the existing functionality and templates (with the exception of the Flash template) ported to the reboot.  I can use help with the inner-working of Quickly core, but I absolutely need help porting the existing templates.

The new Template and Command classes make this much easier (in my opinion, anyway), so it will mostly be a matter of copy/paste/tweak from the old commands to the new ones. In many cases, it will make sense to sub-class and re-use parts of one Template or Command in another, further reducing the amount of work.

Getting started

If you are interested in helping with this effort, or if you simply want to take the current work for a spin, the first thing you should do is grab the code (bzr branch lp:quickly/reboot).  You can call the quickly binary by running ./bin/quickly from within the project’s root.

Some things you can try are:

./bin/quickly create ubuntu-application /tmp/foo

This will create a new python-gtk project called ‘foo’ in /tmp/foo.  You can then call:

./bin/quickly -p /tmp/foo run

This will run the application.  Note that you can use -p /path/to/project to make the command run against a specific project, without having to actually be in that directory.  If you are in that directory, you won’t need to use -p (but you will need to give the full path to the new quickly binary).

If you are interested in the templates, they are in ./data/templates/; each folder name corresponds to a template name.  The code will look for a class called Template in the base python namespace for the template (in ubuntu-application/ for example), which must be a subclass of the BaseTemplate class.  You don’t have to define the class there, but you do need to import it there.  Commands are added to the Template class definition; they can take arguments at the time you define them (see code for examples), and their .run() method will be called when invoked from the command line.  Unlike Templates, Commands can be defined anywhere, with any name, as long as they subclass BaseCommand and are attached to a template.
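To make those discovery rules concrete, here is a self-contained sketch. The BaseTemplate and BaseCommand classes below are simplified mock-ups of the real ones in lp:quickly/reboot, not their actual implementations, and the attachment mechanism is reduced to a plain dict for illustration:

```python
# Simplified stand-ins; the real classes in lp:quickly/reboot also handle
# discovery from ./data/templates/<name>/ and command-line invocation.
class BaseCommand:
    def __init__(self, name):
        self.name = name

    def run(self, project=None, args=None):
        raise NotImplementedError

class BaseTemplate:
    commands = {}

class CreateCommand(BaseCommand):
    def run(self, project=None, args=None):
        return f"created {project}"

# The class named Template is what gets looked up in the template's base
# namespace; commands are attached at class-definition time.
class Template(BaseTemplate):
    commands = {"create": CreateCommand("create")}

print(Template.commands["create"].run(project="/tmp/foo"))
```

The point of the design is that a command is just an object with a run() method attached to a template, so templates can freely share, subclass, and override each other’s commands.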

Read more
Michael Hall

A few weeks ago, Canonical founder Mark Shuttleworth announced a new project initiative dubbed “skunk works”, which would bring talented and trusted members of the Ubuntu community into what were previously Canonical-only development teams working on some of the most interesting and exciting new features in Ubuntu.

Since Mark’s announcement, I’ve been collecting the names and skill sets from people who were interested, as well as working with project managers within Canonical to identify which projects should be made part of the Skunk Works program.  If you want to be added to the list, please create a wiki page using the SkunkWorks/ParticipationTemplate template and send me the link in an email (  If you’re not sure, continue reading to learn more about what this program is all about.

 What is Skunk Works?

Traditionally, skunk works programs have involved innovative or experimental projects led by a small group of highly talented engineers.  The name originates from the Lockheed Martin division that produced such marvels as the U-2, SR-71 and F-117.  For us, it is going to be focused on launching new projects or high-profile features for existing projects.  We will follow the same pattern of building small, informal, highly skilled teams that can work seamlessly together to produce the kinds of amazing results that provide those “tada” moments.

Why is it secret?

Canonical is, despite what some may say, an open source company.  Skunk Works projects will be no exception to this: the final results of the work will be released under an appropriate open license.  So why keep it secret?  One of the principal features of a traditional skunk works team is autonomy: they don’t need to seek approval for or justify their decisions until they’ve had a chance to prove them.  Otherwise they wouldn’t be able to produce radically new designs or ideas; everything would either be watered down for consensus, or bogged down by argument.  By keeping initial development private, our skunk works teams will be able to experiment and innovate freely, without having their work questioned and criticized before it is ready.

Who can join?

Our Skunk Works is open to anybody who wants to apply, but not everybody who applies will get in on a project.  Because skunk works teams need to be very efficient and independent, all members need to be on the same page and operating at the same level in order to accomplish their goals.  Mark mentioned that we are looking for “trusted” members of our community.  There are two aspects to this trust.  First, we need to trust that you will respect the private nature of the work, which as I mentioned above is crucial to fostering the kind of independent thinking that skunk works are famous for.  Secondly, we need to trust in your ability to produce the desired results, and to work cooperatively with a small team towards a common goal.

What kind of work is involved?

We are still gathering candidate projects for the initial round of Skunk Works, but we already have a very wide variety.  Most of the work is going to involve some pretty intense development work, both on the front-end UI and back-end data analysis.  But there are also projects that will require a significant amount of new design and artistic work.  It’s safe to say that the vast majority of the work will involve creation of some kind, since skunk works projects are by their nature more in the “proof” stage of development than the “polish” stage.  Once you have had a chance to prove your work, it will leave the confines of Skunk Works and be made available for public consumption, contribution, and yes, criticism.

How to join

Still interested?  Great!  In order to match people up with projects they can contribute to, we’re asking everybody to fill out a short Wiki page detailing their skill sets and relevant experience.  You can use the SkunkWorks/ParticipationTemplate; just fill it in with your information.  Then send the link to the new page to my ( and I will add you to my growing list of candidates.  Then, when we have an internal project join the Skunk Works, I will give the project lead a list of people whose skills and experience match the project’s needs, and we will pick which ones to invite to join the team.  Not everybody will get assigned to a project, but you won’t be taken off the list.  If you revise your wiki page because you’ve acquired new skills or experience, just send me another email letting me know.

Read more
Michael Hall

Sweet Chorus

Juju is revolutionizing the way web services are deployed in the cloud, taking what was either a labor-intensive manual task or a very labor-intensive re-invention of the wheel (deployment automation, in this case), and distilling it into a collection of reusable components called “Charms” that let anybody deploy multiple inter-connected services in the cloud with ease.

There are currently 84 Juju charms written for everything from game backends to WordPress sites, with databases and cache servers that work with them.  Charms are great when you can deploy the same service the same way, regardless of its intended use.  WordPress is a good use case, since the process of deploying WordPress is going to be the same from one blog to the next.

Django’s Blues

But when you go a little lower in the stack, to web frameworks, it’s not quite so simple.  Take Django, for instance.  While much of the process of deploying a Django service will be the same, there is going to be a lot that is specific to the project.  A Django site can have any number of dependencies, both common additions like South and Celery, as well as many custom modules.  It might use MySQL, or PostgreSQL, or Oracle (even SQLite for development and testing).  Still more depends on the development process: while WordPress is available as a DEB package or a tarball from the upstream site, a Django project may live anywhere, most frequently in a source control branch specific to that project.  All of this makes writing a single Django charm nearly impossible.

There have been some attempts at making a generic, reusable Django charm.  Michael Nelson made one that uses Puppet and a custom config.yaml for each project.  While this works, it has two drawbacks: 1) it requires Puppet, which isn’t natural for a Python project, and 2) it requires so many options in the config.yaml that you still have to do a lot by hand to make it work.  The first of these was done because ISD (where Michael was at the time) was using Puppet to deploy and configure their Django services, and could easily have been done another way.  The second, however, is the necessary consequence of trying to make a reusable Django charm.

Just for Fun

Given the problems detailed above, and not liking the idea of making config options for every possible variation of a Django project, I recently took a different approach.  Instead of making one Django Charm to rule them all, I wrote a small Django App that would generate a customized Charm for any given project.  My goal is to gather enough information from the project and its environment to produce a charm that is very nearly complete for that project.  I named this charming code “Naguine” after Django Reinhardt’s second wife, Sophie “Naguine” Ziegler.  It seemed fitting, since this project would be charming Django webapps.

Naguine is very much a JFDI project, so it’s not highly architected or even internally consistent at this point, but with a little bit of hacking I was able to get a significant return. For starters, using Naguine is about as simple as can be, you simply install it on your PYTHONPATH and run:

python charm --settings naguine

The --settings naguine option will inject the naguine Django app into your INSTALLED_APPS, which makes the charm command available.

This Kind of Friend

The charm command makes use of your Django settings to learn about your other INSTALLED_APPS as well as your database settings.  It will also look for a requirements.txt and, inspecting each to learn more about your project’s dependencies.  From there it will try to locate system packages that will provide those dependencies and add them to the install hook in the Juju charm.
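Conceptually, that dependency lookup is a name-to-package mapping. Here is a toy stdlib-only version of the idea; the mapping table and function are purely illustrative, not Naguine’s actual code, which presumably consults the real package index:

```python
# Illustrative mapping from Python requirement names to Ubuntu package
# names; a stand-in for whatever lookup Naguine really performs.
KNOWN_PACKAGES = {
    "django": "python-django",
    "south": "python-django-south",
    "celery": "python-celery",
}

def packages_for(requirements_txt):
    """Split requirements into resolvable packages and leftovers."""
    found, missing = [], []
    for line in requirements_txt.splitlines():
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if not name or name.startswith("#"):
            continue
        (found if name in KNOWN_PACKAGES else missing).append(
            KNOWN_PACKAGES.get(name, name))
    return found, missing

found, missing = packages_for("Django==1.4\nSouth\nsome-custom-module\n")
print(found)    # candidates for the charm's install hook
print(missing)  # left for the charm author to resolve by hand
```

Anything that can’t be matched to a system package ends up in the “add it manually” pile described below, which is exactly why the generated charm still needs a review pass.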

The charm command also looks to see if your project is currently in a bzr branch, and if it is, it will use the remote branch to pull down your project’s code during the install.  In the future I hope to also support git and hg deployments.

Finally the command will write hooks for linking to a database instance on another server, including running syncdb to create the tables for your models, adding a superuser account with a randomly generated password and, if you are using South, running any migration scripts as well. It also writes some metadata about your charm and a short README explaining how to use it.

All that is left for you to do is review the generated charm, manually add any dependencies Naguine couldn’t find a matching package for, and manually add any install or database initialization that is specific to your project.  The amount of custom work needed to get a charm working is extremely minor, even for moderately complex projects.

Are you in the Mood

To try Naguine with your Django project, use the following steps:

  1. cd to your django project root (where your is)
  2. bzr branch lp:naguine
  3. python charm --settings naguine

That’s all you need.  If your django project lives in a bzr branch, and if it normally uses, you should have a directory called ./charms/precise/ that contains an almost working Juju charm for your project.

I’ve only tested this on a few Django projects, all of which followed the same general conventions when it came to development, so don’t be surprised if you run into problems.  This is still a very early-stage project after all.  But you already have the code (if you followed step #2 above), so you can poke around and try to get it working or working better for your project.  Then submit your changes back to me on Launchpad, and I’ll merge them in.  You can also find me on IRC (mhall119 on freenode) if you get stuck and I will help you get it working.

(For those who are interested, each of the headers in this post is the name of a Django Reinhardt song)

Read more
Michael Hall

I spent some more time over the weekend working on Hello Unity.  If you haven’t already, be sure to read my first post about it.  In short, Hello Unity is a showcase application that demonstrates all the different ways an application developer can integrate their app with the Unity desktop.

The current version of Hello Unity sports a new syntax-highlighted code display, which makes it much easier to read and understand what the code is doing.  I also spent a significant amount of time going through and heavily commenting the code to explain what everything was doing.

In the Launcher section I added support for setting the “urgent” status of the application.  In Unity, this will cause the application’s icon in the Launcher to shake and, if the Launcher is set to auto-hide, will also cause the icon to slide out into view while it shakes.  This is a very useful way of telling a user that your application needs their attention.

A new section was added for integrating with the Message Menu.  It automatically adds a Hello Unity section to the menu, and allows you to add count indicators to it.  Clicking on an item in the menu will execute the code for removing it.  All of this is explicitly commented on in the source code.

Another new section is for Notifications.  While it uses the generic libnotify API, it does highlight how to use it with Ubuntu’s Notify-OSD display system, including how to update and append text to the currently displayed notification.

Once again, if you are interested in contributing to this project, you can get the code from the Launchpad project page.  Also available there are source tarballs and installable .DEB files.

Read more
Michael Hall

David Planella and I have spent the past couple of weeks gathering, cleaning and verifying Unity integration documentation that has existed on the Ubuntu wiki, individual blogs and inside the source code itself.   After publishing those docs, a couple of people asked for some kind of simple “hello world” type of application to show off how all that integration works.

So, I made one.

You can get the code from Launchpad:

bzr branch lp:hello-unity

It’s not complete yet, but it’s a start.  Let me know if you’re interested in contributing to it.

Read more
Michael Hall

One of the most requested things since I first introduced Singlet was to have a Quickly template for creating Unity Lenses with it.  After weeks of waiting, and after upgrading Singlet to work in Precise, and getting it into the Universe repository for 12.04, I finally set to work on learning enough Quickly internals to write a template.

It’s not finished yet, and I won’t guarantee that all of Quickly’s functions work, but after a few hours of hacking I at least have a pretty good start.  It’s not packaged yet, so to try it out you will need to do the following:

  1. bzr branch lp:~mhall119/singlet/quickly-lens-template
  2. sudo ln -s "$(pwd)/quickly-lens-template" /usr/share/quickly/templates/singlet-lens
  3. quickly create singlet-lens <your-lens-project-name>
  4. cd <your-lens-project-name>
  5. quickly package

Read more
Michael Hall

If you have written or know how to write a Quickly template, I’d like to get some help making one for Singlet Lenses and Scopes.

Read more
Michael Hall

I’ve finally had a little extra time to get back to working on Singlet.  There’s been a lot of progress since the first iteration.  To start with, Singlet had to be upgraded to work with the new Lens API introduced when Unity 5.0 landed in the Precise repos.  Luckily the Singlet API didn’t need to change, so any Singlet lenses written for Oneiric and Unity 4 will only need the latest Singlet to work in Precise[1].

The more exciting development, though, is that Singlet 0.2 introduces an API for Scopes.  This means you can write Lenses that support external scopes from other authors, as well as external Scopes for existing lenses.  They don’t both need to be based on Singlet, either: you can write a Singlet scope for the Music Lens if you wanted to, and non-Singlet scopes can be written for your Singlet lens.  They don’t even have to be in Python.

In order to make the Scope API, I chose to convert my previous LoCo Teams Portal lens into a generic Community lens and separate LoCo Teams scope.  The Lens itself ends up being about as simple as can be:

from singlet.lens import Lens, IconViewCategory, ListViewCategory 

class CommunityLens(Lens): 

    class Meta:
        name = 'community'
        description = 'Ubuntu Community Lens'
        search_hint = 'Search the Ubuntu Community'
        icon = 'community.svg'
        category_order = ['teams', 'news', 'events', 'meetings']

    teams = IconViewCategory("Teams", 'ubuntu-logo')

    news = ListViewCategory("News", 'news-feed')

    events = ListViewCategory("Events", 'calendar')

    meetings = ListViewCategory("Meetings", 'applications-chat')

As you can see, it’s really nothing more than some meta-data and the categories.  All the real work happens in the scope:

class LocoTeamsScope(Scope):

    class Meta:
        name = 'locoteams'
        search_hint = 'Search LoCo Teams'
        search_on_blank = True
        lens = 'community'
        categories = ['teams', 'news', 'events', 'meetings']

    def __init__(self, *args, **kargs):
        super(LocoTeamsScope, self).__init__(*args, **kargs)
        self._ltp = locodir.LocoDirectory()
        self.lpusername = None

        if os.path.exists(os.path.expanduser('~/.bazaar/bazaar.conf')):
            try:
                import configparser
            except ImportError:
                import ConfigParser as configparser

            bzrconf = configparser.ConfigParser()
            bzrconf.read(os.path.expanduser('~/.bazaar/bazaar.conf'))

            try:
                self.lpusername = bzrconf.get('DEFAULT', 'launchpad_username')
            except configparser.NoOptionError:
                pass

    def search(self, search, model, cancellable):
        # (search code omitted, see below)

I left out the actual search code, because it’s rather long and most of it isn’t important when talking about Singlet itself.  Just like the Lens API, a Singlet Scope uses an inner Meta class for meta-data.  The most important fields here are the ‘lens’ and ‘categories’ variables.  The ‘lens’ tells Singlet the name of the lens your scope is for.  Singlet uses this to build DBus names and paths, and also to know where to install your scope.  The ‘categories’ list will let you define a result item’s category using a descriptive name, rather than an integer.

 model.append('' % (team['lp_name'], tevent['id']), team['mugshot_url'], self.events, "text/html", tevent['name'], '%s\n%s' % (tevent['date_begin'], tevent['description']), '')

It’s important that the order of the categories in the Scope’s Meta matches the order of categories defined in the Lens you are targeting, since in the end it’s still just the position number that’s being passed back to the Dash.
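To illustrate the point about ordering, here is a tiny sketch (the function name is mine, not Singlet's API) of how a descriptive category name resolves to the positional index the Dash actually receives:

```python
# Illustrative only: the category list mirrors the Meta classes above.
CATEGORIES = ['teams', 'news', 'events', 'meetings']

def category_index(name):
    # The Dash only understands positions, which is why the Scope's
    # category ordering must match the Lens's ordering exactly.
    return CATEGORIES.index(name)
```

If the Scope listed its categories in a different order, a result filed under 'events' would silently land in the wrong category on the Dash.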

After all this, I still had a little bit of time left in the day.  And what good is supporting external scopes if you only have one anyway?  So I spent 30 minutes creating another scope, one that will read from the Ubuntu Planet news feed:

The next step is to add some proper packaging to get these into the Ubuntu Software Center, but you impatient users can get them either from their respective bzr branches, or try the preliminary packages from the One Hundred Scopes PPA.

[1] Note that while lenses written for Singlet 0.1 will work in Singlet 0.2 on Precise, the reverse is not necessarily true.  Singlet 0.2, as well as lenses and scopes written for it, will not work on Oneiric.

Read more
Michael Hall

Back when I first started writing Unity lenses, I lamented the complexity required to get even the most basic Lens written and running.  I wrote in that post about wanting to hide all of that unnecessary complexity.  Well now I am happy to announce the first step towards that end: Singlet.

In optics, a “singlet” is a very simple lens.  Likewise, the Singlet project aims to produce simple Unity lenses.  Singlet targets opportunistic programmers who want to get search results into Unity with the least amount of work.  By providing a handful of Python meta classes and base classes, Singlet lets you write a basic lens with a minimal amount of fuss. It hides all of the boilerplate code necessary to interface with GObject and DBus, leaving the developer free to focus solely on the purpose of their lens.  With Singlet, the only thing a Lens author really needs to provide is a single search function.

Writing a Singlet

So what does a Singlet Lens look like?  Here is a sample of the most basic lens, which produced the screenshot above:

#! /usr/bin/python

from singlet.lens import SingleScopeLens, IconViewCategory, ListViewCategory
from singlet.utils import run_lens

class TestLens(SingleScopeLens):

    class Meta:
        name = 'test'

    cat1 = IconViewCategory("Cat One", "stock_yet")

    cat2 = ListViewCategory("Cat Two", "hint")

    def search(self, phrase, results):
        results.append('' % phrase,
                             phrase, phrase, '')

        results.append('' % phrase,
                             phrase, phrase, '')

if __name__ == "__main__":
    import sys
    run_lens(TestLens, sys.argv)

As you can see, there isn’t much to it.  SingleScopeLens is the first base class provided by Singlet.  It creates an inner scope for you, and connects it to the DBus events for handling a user’s search events.  The three things you need to do, as a Lens author, are to give it a name in the Meta class, define at least one Category, and, most importantly, implement your custom search(self, phrase, results) method.

Going Meta

Django developers will notice a similarity between Singlet Lenses and Django Models in their use of an inner Meta class.  In fact, they work exactly the same way, though with different properties.  At a minimum, you will need to provide a name for your lens.  Everything else can either use default values, or will be extrapolated from the name.  Everything in your Meta class, plus defaults and generated values, will be accessible in <your_class>._meta later on.

Categorically easy

Again borrowing from Django Models, you add categories to your Singlet Lens by defining them in the class body itself, rather than in the __init__ method.  One thing that I didn’t like about Categories when writing my previous lenses was that I couldn’t reference them when adding search results to the result model; instead, I had to give the numeric index of the category.  In Singlet, the variable name you used when defining the category is converted to the numeric index for that category, so you can easily reference it again when building your search results.  But don’t worry, your category objects are still available to you in <your_class>._meta.categories if you want them.
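This is not Singlet's actual implementation, but a minimal sketch (in modern Python 3 syntax) of how a Django-style metaclass can replace class-level Category attributes with their numeric indices, so result code can say self.cat1 instead of a bare integer:

```python
# Hypothetical sketch; class and attribute names are illustrative.
class Category:
    _counter = 0

    def __init__(self, title, icon):
        self.title, self.icon = title, icon
        self._order = Category._counter   # remember declaration order
        Category._counter += 1

class LensMeta(type):
    def __new__(mcs, name, bases, attrs):
        cats = sorted(
            ((k, v) for k, v in attrs.items() if isinstance(v, Category)),
            key=lambda kv: kv[1]._order)
        for index, (key, _) in enumerate(cats):
            attrs[key] = index            # the name now IS the index
        cls = super().__new__(mcs, name, bases, attrs)
        cls._categories = [cat for _, cat in cats]  # objects kept around
        return cls

class DemoLens(metaclass=LensMeta):
    teams = Category("Teams", "ubuntu-logo")
    news = Category("News", "news-feed")
```

After class creation, DemoLens.teams is simply 0 and DemoLens.news is 1, while the original Category objects survive in DemoLens._categories.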

The search is on

The core functionality of a Lens is the search.  So it makes sense that the majority of your work should happen here.  Singlet will call your search method, passing in the current search phrase and an empty results model.  From there, it’s up to you to collect data from whatever source you are targeting, and start populating that results model.

You can handle the URI

Unity knows how to handle common URIs in your results, such as file:// and http:// URIs.  But often your lens isn’t going to be dealing with results that map directly to a file or website.  For those cases, you need to hook into DBus again to handle the URI of a selected result item, and return a specifically constructed GObject response.  With Singlet, all you need to do is define a handle_uri method on your Lens, and it will take care of hooking it into DBus for you.  Singlet also provides a couple of helper methods for your return value: either hide_dash_response to hide the dash after you’ve handled the URI, or update_dash_response if you want to leave it open.
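The decision logic of such a handler might look like the sketch below.  This is not Singlet's real helper API; the scheme name and return values are invented for illustration:

```python
# Hypothetical URI dispatch for a dictionary-style lens.
def handle_uri(uri):
    if uri.startswith(('http://', 'file://')):
        # Unity already knows how to open these; nothing to do here.
        return ('pass-through', None)
    scheme, _, word = uri.partition('://')
    if scheme == 'define':
        # We handled the result ourselves, so the dash can close
        # (the equivalent of hide_dash_response).
        return ('hide-dash', word)
    # Unknown custom scheme: leave the dash open (update_dash_response).
    return ('update-dash', None)
```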

Make it go

Once you’ve defined your lens, you need to be able to initialize it and run it, again using a combination of DBus and GObject.  Singlet hides all of this behind the run_lens function in singlet.utils, which you should call at the bottom of your lens file as shown in the above snippet.

Lord of the files

There’s more to getting your Lens working than just the code: you also need a .lens file describing your lens to Unity, and a .service file telling DBus about it.  Singlet helps you out here too, by providing command line helpers for generating and installing these files.  Suppose the code snippet above was in a file called test.py; once it’s written you can run “python test.py make” and it will write test.lens and unity-test-lens.service into your current directory.  The data in these files comes from <your_lens>._meta, including the name, dbus information, description and icon.  After running make you can run “sudo python test.py install”, which will copy your code and config files to where they need to be for Unity to pick them up and run them.
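As a rough sketch of what “make” generates from the lens's _meta, something like the following would do; the exact keys and DBus naming scheme here are my assumptions, not Singlet's verified output:

```python
# Hypothetical .lens file generator; key names are illustrative.
LENS_TEMPLATE = """[Lens]
DBusName=net.launchpad.lens.{name}
DBusPath=/net/launchpad/lens/{name}
Name={title}
Icon={icon}
Description={description}
"""

def make_lens_file(meta):
    # Fill the template from the lens's meta-data dictionary.
    return LENS_TEMPLATE.format(**meta)

lens_file = make_lens_file({
    'name': 'test', 'title': 'Test Lens',
    'icon': 'test.svg', 'description': 'A demo lens'})
```

The point is simply that everything Unity needs to discover the lens can be derived from the Meta class, which is why Singlet can generate these files for you.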

More to come

You can get the current Singlet code by branching lp:singlet.  I will be working on getting it built and available via PyPi and a PPA in the near future, but for now just having it on your PYTHONPATH is enough to get started using it.  Just be aware that if you make/install a Singlet lens, you need to make the Singlet package available on Unity’s PYTHONPATH as well or it won’t be able to run.  I’ve already converted my Dictionary Lens to use Singlet, and will work on others while I grow the collection of Singlet base classes.  If anybody has a common yet simple use case they would like me to target, please leave a description in the comments.

Read more
Michael Hall

Unity certainly has been getting a lot of attention in the past year.  Love it or hate it, everybody seems to have something to say, whether it’s about the Launcher, application indicators, or the window control buttons being on the left.  But with all the talk about Unity, good and bad, one very unique aspect that hasn’t been getting nearly enough attention are Lenses.

Lenses are a central part of the Unity desktop, and anybody who’s used it will be familiar with the default Application and File lenses, maybe even the Music lens.  But there’s so much more to this technology than you might think.  In fact, David Callé has recently been spearheading an effort to build out a large number of small but incredibly useful lenses.  I first took notice of David’s work when he released his Book lens which, being a huge ebook fan, really brought home the usefulness of Unity Lenses for me.

More recently, David has been writing Scopes for the One Hundred Scopes project.  A Scope is the program that feeds results into a Lens.  While the Lens defines the categories and filters, it’s the Scopes that do the heavy lifting of finding and organizing the data that will ultimately be displayed on your Dash.  If you follow David on Google+, chances are you’ve seen him posting screenshots of one scope after another as he writes them, often multiple of them per day.

Seeing how quickly he was able to write these, I decided to dive in and try it out myself.  You can write Lenses in a variety of languages, including Python, my language of choice.  I decided to start off with something relatively easy, and something that I’ve personally been missing for a while.  I used to use a Gnome2 applet called Deskbar, which let you type in a short search word or phrase and presented you with search results and various other options.  Included among those was the option to look up the word in gnome-dictionary, and I used this option on a startlingly frequent basis.  Unfortunately Deskbar fell out of favor and development even before the switch to Gnome 3, and I’d been lacking a quick way to look up words ever since.  So I decided that the Lens I wanted was one that would replace this missing functionality: a Dictionary lens.

My first task was to find out how to write a lens.  I checked the Ubuntu Wiki and the Unity portal, both of which offered a lot of technical information about writing lenses, but unfortunately not very much that I found helpful for someone just starting off.  In fact, I had to get a rather large amount of one-on-one help from David Callé before I could even get the most basic functionality working.

Lenses and Scopes all communicate with each other and with the Unity Dash via DBus, and for anybody not familiar with DBus this makes for a very steep learning curve.  On top of that, writing it in Python means you’ll be relying on GObject Introspection (GI), which is a very nice way of making APIs written in one language automatically available to another, but it also means you’re going to be using the lowest common denominator when it comes to language features.  I found that learning to work with these two technologies accounted for 90% or more of the time it took me to write my Dictionary lens.  Before I write another Lens or Scope, I plan on wrapping much of the DBus and GI boilerplate and wiring behind a simple, reusable set of Python classes.  I hope this will help developers, both newbies and seasoned Unity hackers, in writing simple Scopes by allowing them to focus 90% of their time on writing the code that does the actual searching.

But by the end of the day I had a working Dictionary lens.  It uses your local spellcheck dictionary, via python-enchant, both to confirm whether or not the word you typed is spelled correctly and to offer a list of suggested alternatives.  I also dug through the gnome-dictionary code and found that it was pulling its definitions from an online database using an open protocol.  Using python-dictclient I was able to query the same database, and include the start of a word’s definition in the Lens itself.
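python-enchant does the real spell checking in the lens, but its check/suggest behaviour can be approximated with only the standard library for illustration (the word list and function names here are mine):

```python
import difflib

# A stand-in dictionary; enchant would consult the system wordlist.
WORDS = ['lens', 'lenses', 'launcher', 'dash', 'scope']

def check(word):
    # enchant's Dict.check() analog: is the word spelled correctly?
    return word in WORDS

def suggest(word):
    # enchant's Dict.suggest() analog: close matches as alternatives.
    return difflib.get_close_matches(word, WORDS, n=3)
```

A misspelled query like "lns" fails check() but still yields "lens" among the suggestions, which is exactly the interaction the lens presents in the Dash.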

This lens turned out to be more complex than I had originally envisioned, not just for the features listed above, but also because I needed to override what happened when you clicked on an item.  When you build up your results, you have to give each item a unique URI, which is often in the form or an http:// or file:// URL that Gnome knows how to handle.  But for results that were just words, I needed to do the handling myself, which meant more DBus wiring to have the mouse click event call a local function.  From there I was able to copy the word to the Gnome clipboard or launch gnome-dictionary for viewing the full definition.

After seeing that first Lens running in my Dash, I felt an urge to try another.  Since I already had all of the DBus and GI code, I wouldn’t have to mess with any of that and could focus just on the functionality I wanted.  Jorge Castro has been trying to get me to write a lens for the LoCo Teams Portal (formerly LoCo Directory) since the concept was first introduced to Unity under the name “Places”.  Since LTP already offers a REST/JSON API, this turned out to be remarkably simple to do.  Between the existing DBus/GI code copied from my previous lens, and an existing Python client library for the LTP, I was able to get a working lens in only a couple of hours.
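To give a feel for why a REST/JSON API makes this so easy, here is a purely illustrative snippet parsing the kind of response such an API might return; the field names are assumptions, not the real LTP schema:

```python
import json

# A canned response standing in for an HTTP GET against the API.
payload = '''{"objects": [
    {"lp_name": "ubuntu-florida", "name": "Ubuntu Florida",
     "mugshot_url": "http://example.com/florida.png"}]}'''

teams = json.loads(payload)['objects']
# Filtering results against the search phrase is about all the
# scope has to do once the data is this structured.
matches = [t for t in teams if 'florida' in t['lp_name']]
```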

For item icons, you can use either a local icon, or a URL to a remote image.  For this lens, I used the team’s mugshot URL that LTP pulls from the team’s Launchpad information.  When you search, it’ll show matching Teams, as well as Events and Meetings for those teams, and any Event or Meeting that also matches your search criteria.  I’ve also added the ability for this lens to lookup your Launchpad username (by checking for it in your bazaar.conf) and defaulting it to display the LoCo Teams you are a member of, as well as their upcoming Events and Meetings.

Both the Dictionary Lens and the LTP Lens lump the Lens and Scope code into the same Python class, but it doesn’t have to be this way.  You can write a Scope for someone else’s Lens, and vice versa.  In fact, I plan on separating the LTP lens into a general purpose Community Lens, with an LTP Scope feeding it results about LoCo Teams.  From there, others can write scopes pulling in other community information to be made available on the Dash.  This will also be my prototype for a Python-friendly wrapper around all of the DBus and GI work that scope writers probably don’t need to know about anyway.

Read more
Michael Hall

As promised in my previous post about my Unity Phone Mockups, here’s a look at the TV mockups I’ve been playing with over the last week, along with the reasons behind some of the designs.  Links to the source files for these mockups, as well as the mockups by other community contributors, can be found on this wiki page.

These aren’t as innovative as my phone mockups, partly because I didn’t have much of a reference to start from like I did with iOS and Android for phones.  I have a DVR from my cable company, but no HTPC or other media center software.

The TV frame and choice of movie (Big Buck Bunny) came from Alan Bell’s template.  I added a remote control to that, and added buttons as I found a need for them.  In the mockups, I’ve highlighted the buttons you would use to interact with the given screen.

The basic concept was to overlay the Unity components when the user presses the Circle-of-Friends button on the remote, but to otherwise leave whatever they are watching full screen.  On this mockup I used applications in the Launcher, and pressing the CoF would put the focus on them the same way Alt-F1 does on the desktop.  You could then navigate through them using the arrow buttons on the remote, pressing OK to make your selection.

In this alternate mockup, I replaced the applications with Unity lenses, which I think makes more sense for a TV, since you will more often than not be interested in finding content, not selecting which application to run.  Luckily Unity already supports both, so this is just a matter of what the default initial configuration should be, after that the user can put whatever they want in the Launcher.

For opening the dash I envision being able to double-tap the CoF button (or selecting a Lens).  In fact, I would like to have this on desktop Unity, since I want quick access to the Launcher more often than I want the Dash.  Once the Dash is open, you would use the remote’s arrow buttons to navigate through the items listed, again pressing OK to make your selection.

For accessing the Dash’s filters, I added the “context menu” button to the remote which, when pressed while on the Dash screen, will open the filter sidebar, letting you navigate through the filtering options using the remote’s arrows.  Pressing the context menu button again would collapse the filter sidebar and return arrow navigation to the results table.

Likewise I added a Search button to the remote to send focus to the Dash’s search field and activate the on-screen keyboard.  Text input from a remote like this will be cumbersome (unless we come up with a monstrosity of a remote like this), so we’ll want to avoid the need for this as much as possible.  But since the Dash is highly search-focused, I felt that there needed to be a mockup for this aspect.

This is a screenshot of Shotwell taken from my laptop and scaled to fit a resolution it might be at on a TV screen.  I did this to demonstrate how an existing desktop app might look and work without modification on a 10 foot interface.  Here again I added a new button to the remote, which is a kind of next/last panel or, more accurately, the tab key on a desktop keyboard.  This lets you navigate through widgets and panels on a traditional application like you can using the tab key on a desktop.

I think with some small enhancements to GTK and Qt, we can allow application developers to make navigating apps this way easier.  Alternately, a new library similar to uTouch could act as an interface between remote control input and applications.

Someone pointed me to YouTube’s LeanBack interface, an HTML5 webapp designed for 10 foot displays.  This is a very promising way to deliver applications and content to internet-connected TVs, especially if we ship with a fully functional, HTML5-supporting web browser optimized for control with a remote.

Finally I wanted to show off Media Explorer, a media center application built on the same Gnome technologies as the rest of the Ubuntu desktop.


Just like I did in my last post, I’d like to solicit discussion and feedback on these mockups, and also ask what other scenarios/mockups you would like to see.  You can either leave comments here, or join the live conversation in #ubuntu-tv on Freenode.

Read more
Michael Hall

I’ve been expanding on my earlier Unity TV mockups lately, which I will post more about later, but I’ve also been working on some Unity Phone mockups, and I wanted to put a couple of ideas out for wider discussion and feedback.

Launcher Position

OMG!Ubuntu! highlighted some other mockups for Unity on a Phone where the designer kept the launcher on the left hand side, as it is on the desktop.  While this is keeping with the look of desktop Unity, I didn’t think it was in line with the reasons for it.  The primary reason for the launcher’s position is to use up screen width (which is abundant on desktops) and free up screen height (which is scarce on desktops).

Keeping that in mind, I came up with a design that keeps the launcher on the narrowest screen edge, regardless of orientation.  When holding the phone in portrait mode, the launcher sits along the bottom, the way it traditionally does on Android interfaces.  When rotated to landscape, the launcher sits along the left edge, the way it does in Unity on the desktop.  This way the launcher is only ever taking up space in the dimension you have the most to spare, while providing a familiar interface in both modes.
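The rule is simple enough to express as a one-liner; this is a toy encoding of the idea, not anything from Unity's code:

```python
# Pick the launcher edge from the screen dimensions (pixels):
# the launcher always sits on the narrowest edge.
def launcher_edge(width, height):
    return 'bottom' if height >= width else 'left'
```

So a portrait phone (480x800) puts the launcher along the bottom, and the same phone rotated to landscape (800x480) moves it to the left edge.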

The top panel, however, always stays along the top edge of the screen.  This matches Android in portrait mode, and desktop Unity in landscape.  This is partly because it contains the time, which would otherwise have to be rotated or left at 90 degrees in landscape, but also because having the panel at the top gives consistency in the next feature I looked at.

Accessing Indicators

There is a video on YouTube showing a new Acer Iconia W500 running Ubuntu 11.10, and watching the user try to land a touch event on an indicator icon was particularly painful; even on a tablet the indicator icons are far too small to be easy touch targets.  How much worse would it be when they’re on a screen a quarter of the size?

Instead I thought about the way Android lets you drag down the top bar to access your list of notifications, and how easy a gesture that is.  Applying that to Unity, I came up with the idea of a drag-down gesture activating the indicator menus, much like you can by pressing F10 on desktop Unity.  Only instead of popup menus, these would use a full-screen overlay, again like Android does for notifications.

Unlike Android, however, you would be able to access all of your indicators from here, not just notifications.  Swiping left or right will move you to adjacent indicators (just like left and right keyboard buttons do on desktop Unity).  So you would have quick, easy, always available access to your networking, sound (including playback controls) and calendar.


Like my earlier TV mockups, these are all done using the Pencil mockup tool.  You can see the full exported version here, and download the source files here.  If you are interested in the discussions going on, join the #ubuntu-phone channel on Freenode.  Please let me know what you think of these mockups, and what other areas of Unity on a phone you would like to see designs for.  And remember, none of these are official designs, they’re just what I’ve been doing for fun in the evenings.

Read more
Michael Hall

Inspired by Mark Shuttleworth’s recent post about Alan Bell’s Unity TV mockups, I’ve decided to try my hand at some.  Alan did his using Pencil, which is an awesome tool for UI mockups that I wrote about previously, so it was easy enough for me to get started.  Here is my first one, a mockup where just the Launcher and Unity panel are showing (no Dash):

You can see other people’s mockups, and add your own on the Ubuntu Wiki.  Instructions for getting setup with Pencil can be found in the mailing list archive.

Read more
Michael Hall

Are you both an Ubuntu user and a bibliophile?  Want to keep your ebooks synced between all your connected devices, including bookmarks and reading position?   If so, join us for this UDS session Thursday, Nov 3rd, where we’ll be talking about how to add that functionality to Ubuntu One.

Read more
Christian Giordano

Getting physical


With the arrival on the market of products like the Nintendo Wii, Apple iPhone and Microsoft Kinect, developers have finally started realizing that there are several ways a person can control a computer besides keyboards, mice and touch screens. These days there are many alternatives, all based on hardware sensors, and the main difference between them is how much they depend on software computation. In fact, solutions based on computer vision (like Microsoft Kinect) rely on state-of-the-art software to analyze pictures captured by one or more cameras.
If you are interested in the technical side of it, I recommend having a look at the following projects: Arduino, Processing and OpenFrameworks.

Usage with Ubuntu

During a small exploration we did internally a few months ago, we thought about how Ubuntu could behave if it were more aware of its physical context: not only detecting the tilt of the device (like iPhone apps do) but also analysing the user’s presence.
This wasn’t really a new concept for me; in 2006 I experimented with a proximity-sensitive billboard idea. I reckon there is value in adapting the content of the screen based on the distance of whoever is watching it.

We came up with a few scenarios which are far from fully developed or specified; hopefully they will open some discussions or, even better, help start some initiatives.

Lean back fullscreen

If the user moves further from the screen while a video is playing in the focused window, the video will automatically go fullscreen.

Fullscreen notifications

If the user is not in front of the screen, notifications could be shown fullscreen so the user can still read them from a different location.
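The two scenarios above amount to picking a presentation mode from the viewer's estimated distance; here is a toy decision rule with invented thresholds, purely to make the idea concrete:

```python
# Hypothetical distance-aware mode selection; 2.0m is an arbitrary
# "leaned back" threshold, not a measured value.
def presentation_mode(distance_m, video_playing):
    if distance_m > 2.0 and video_playing:
        return 'fullscreen-video'        # lean-back fullscreen
    if distance_m > 2.0:
        return 'fullscreen-notifications'  # readable from afar
    return 'normal'                      # user is at the desk
```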

Windows parallax

Since this is the year of 3D screens, we couldn’t omit a parallax effect for the windows. A gesture from the user could also trigger the launcher appearance (see prototype below).
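At its core the parallax effect is a small bit of arithmetic: shift each window layer in proportion to the viewer's head offset and the layer's depth. This toy version (the strength constant is arbitrary) captures the idea:

```python
# head_x: viewer's horizontal head position, normalized to -1..1.
# depth:  how "near" the layer is; nearer layers shift more.
def parallax_offset(head_x, depth, strength=40):
    # Shift opposite the head movement so near layers appear to
    # slide past far ones, in pixels.
    return int(-head_x * depth * strength)
```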


With only a few hours available, I mocked up something very quickly in Processing using a face-recognition (computer-vision) library.
Although it could be hard to detect the horizontal position of the user’s head without a camera, we are in no way prescribing the required technology; the proximity could in fact be detected with infrared or ultrasound sensors.

Parallax and fullscreen interaction via webcam from Canonical Design on Vimeo.

Read more