Canonical Voices

What Ted Gould talks about

Tracking Usage

One of the long-standing goals of Unity has been to provide an application-focused presentation of the desktop. Under X11 this proves tricky, as anyone can connect to X without necessarily saying which application they're associated with. So we wrote BAMF, which does a pretty good job of matching windows to applications, but it could never be perfect because there simply wasn't enough information available. When we started to rethink the world assuming a non-X11 display server, we knew there was one thing we really wanted: to never, ever need something like BAMF again.

This meant designing complete tracking of an application, from startup to shutdown, before it ever starts creating windows in the display server. We were then able to use the same mechanisms to create a consistent and secure environment for applications. This is good for both developers and users, as an application starts in a predictable way every time. We also set up the per-application AppArmor confinement that the application lives in.

Enough backstory; what's really important to this blog post is that we also get a reliable event when an application starts and stops. So I wrote a little tool that takes those events out of the log and presents them as usage data. It is cleverly called:

$ ubuntu-app-usage

And it presents a list of all the applications that you've used on the system along with how long you've used them. How long do you spend messing around on the web? Now you know. You're welcome.

It's not perfect in that it counts all the time you've used the device; it'd be nice to query just the last week or the last year as well. Perhaps even a percentage of time. I might add those little things in the future; if you're interested you can beat me to it.

Read more

HUD shown over terminal app with commands visible

Most expert users know how powerful the command line is on their Ubuntu system, but one common criticism is that the commands themselves are hard to discover and their exact syntax is hard to remember. To help a little bit with this I've created a small patch to the Ubuntu Terminal which adds entries into the HUD so that they can be searched by how people might think of the feature. Hopefully this will provide a way to introduce people to the command line, and provide experienced users with some commands that they might not have known about on their Ubuntu Phone. Let's look at one of the commands I added:

UnityActions.Action {
  // name shown for this entry in the HUD
  text: i18n.tr("Networking Status")
  // alternate search terms, separated by semicolons
  keywords: i18n.tr("Wireless;Ethernet;Access Points")
  // interrupt anything running (Ctrl-C) and run nm-tool
  onTriggered: ksession.sendText("\x03\nnm-tool\n")
}

This command quite simply prints out the status of the networking on the device. But some folks probably don't think of it as networking; they just want to search for the wireless status. By using the HUD keywords feature we're able to add a list of other possible search strings for the command. Now someone can type "wireless status" into the HUD and figure out the command that they need. This is a powerful way to discover new functionality. Plus (and this is really important) these can all be translated into the user's local language.

It is tradition in my family to spend this weekend looking for brightly colored eggs that have been hidden. If you update your terminal application I hope you'll be able to enjoy the same tradition this weekend.

Read more

One of the goals of this cycle is to decrease application startup times on the Ubuntu phone images. Part of my work there was to look at the time taken by Upstart App Launch in initializing the environment for the application. One of the tricky parts of measuring the performance of initialization is that it contains several small utilities and scripts that span multiple Upstart jobs. It's hard to get a probe inside the system to determine what is going on without seriously disrupting it.

For measuring everything together I decided to use LTTng which loads a small kernel module that records tracepoints submitted by userspace programs using the userspace tracer library. This works really well for Upstart App Launch because we can add tracepoints to each tool, and see the aggregate results.

Adding the tracepoints was pretty straightforward (even though it was my first time doing it). Then I used Thomas Voß's DBus-to-LTTng bridge, though I had to add signal support.
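To give a flavor of what that looks like, here's a minimal sketch of an LTTng-UST tracepoint provider. The provider and event names (upstart_app_launch, app_start) are made up for this example rather than taken from the real code:

#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER upstart_app_launch

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./tracepoints.h"

#if !defined(_TRACEPOINTS_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _TRACEPOINTS_H

#include <lttng/tracepoint.h>

/* One event recording which application id is being launched */
TRACEPOINT_EVENT(
    upstart_app_launch,
    app_start,
    TP_ARGS(const char *, app_id),
    TP_FIELDS(
        ctf_string(app_id, app_id)
    )
)

#endif /* _TRACEPOINTS_H */

#include <lttng/tracepoint-event.h>

The tool then calls tracepoint(upstart_app_launch, app_start, app_id); at the point it wants marked, and the userspace tracer library takes care of the rest.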

To set up your Ubuntu Touch device to get some results you'll need to make the image writable and add a couple of packages:

$ sudo touch /userdata/.writable_image
$ sudo reboot
# Let it reboot
$ sudo apt-get update
$ sudo apt-get install lttng-modules-dkms lttng-tools
$ sudo reboot
# Rebooting again, shouldn't need to, but eh, let's be sure

You then need to set the Upstart App Launch environment variable so that it registers with LTTng:

$ initctl set-env --global LTTNG_UST_REGISTER_TIMEOUT=-1

Then you need to set up an LTTng session to run your test. (NOTE: this configuration allows all events through, but you can easily add event filters if that makes sense for your task.)

$ lttng create browser-start
$ lttng enable-event -u -a
$ lttng start

To get the Upstart starting events from DBus into LTTng:

$ dbus-monitor --profile sender=com.ubuntu.Upstart,member=EventEmitted,arg0=starting | ./dbus_monitor_lttng_bridge 

And at last we can run our test, in this case starting the web browser once from not running and once to change URLs:

$ url-dispatcher http://ubuntu.com
# wait for start
$ url-dispatcher http://canonical.com

And then shut things down:

$ lttng stop
$ lttng destroy browser-start

This then creates a set of traces in your home directory. I pulled them over to my laptop to look at them, though you could analyze them on the device. For complex traces there are more sophisticated tools available, but for what I needed babeltrace was enough. All of this contributed to a set of results that we are now using to optimize upstart-app-launch to make applications start faster!
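If you just want a quick textual dump of a trace, babeltrace on its own will do it; assuming the default output location, something like:

$ babeltrace ~/lttng-traces/browser-start-*/

(lttng names the session directory with a timestamp suffix, hence the glob.)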

Read more

Emergent Complexity

When you look at Upstart it comes off as very simple. Almost too much so. It's just events, and jobs that consume those events, and not much else. You're so used to configuring every last detail of a system to ensure that it's tightly tuned for every possible scenario that surely you need more tools than just events! But when you let go of that preconception and really start to understand it, you realize that events are not only enough, they're exactly what you need.

One might have thought – as at first I certainly did – that if the rules for a program were simple then this would mean that its behavior must also be correspondingly simple. For our everyday experience in building things tends to give us the intuition that creating complexity is somehow difficult, and requires rules or plans that are themselves complex. But the pivotal discovery that I made some eighteen years ago is that in the world of programs such intuition is not even close to correct. — Stephen Wolfram, A New Kind of Science.

And so we take these simple jobs that we've built and we start to build a system. Each just waits on a set of events, and some emit events as they go along, and the system starts to take form. But where are our initialization phases and guarantees and complex dependencies? (Writing code to solve those is fun!) They still exist, but as an emergent behavior of the system. Let's look at a Graphviz diagram of the Ubuntu Saucy system init:


(Graphviz | Full SVG)

Without zooming in, just looking at the shapes that emerge, you start to see a natural grouping of the jobs. There are stages of the boot. There are types of jobs that are gathered together. There'd be even more if initctl2dot could break down the runlevel job into its various values. What we see is the complex boot of a modern operating system broken down into pieces for analysis. What we see is a model of the behavior of the system, and that model is the only place the complexity actually exists; it is the emergent behavior of the system. And that, that is why Upstart can be so simple and yet be powerful enough to boot a modern Linux system.
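If you'd like to generate a diagram like this for your own system, it should be roughly the two commands below; I'm going from memory on the flags, so check initctl2dot --help:

$ initctl2dot -o upstart.dot
$ dot -Tsvg upstart.dot > upstart.svg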

Read more

I love BAMF, may it die a peaceful death. BAMF was always stuck trying to solve an unsolvable problem: recreating information and associations that had been lost through the X11 protocol and general fuzziness about what should happen. BAMF then had to handle a wide variety of corner cases and try to bring things back to sanity. And I loved BAMF because I didn't have to do that myself; BAMF did it, and all of its clients just got sanity. We knew when starting on a new display server that we didn't want another BAMF.

With Mir there is a closer tie between applications as a whole and their windows. When an application asks for a session it specifies who it is, then Unity can make sure it understands who it is, and gets a chance to veto the connection. This means that Unity can check on the status of who the app says it is before it gets any windows and can track that directly throughout the application session. To do this we're using what we call the "Application ID", which for most apps you have today is the name of their desktop file (e.g., "inkscape", "gedit").

Let's look at an overly verbose message sequence diagram to see how this works. Note: all of the columns on here aren't separate processes, the diagram is made to explain this idea, not to represent the system architecture.


(svg | msc)

What we can see here is that because Upstart is doing process tracking, and making sure it knows the state of the application, Unity has assurances of the application's name and existence. It can then work with Mir to block applications that don't have proper configurations and can't be matched well. 100% matching, by design.

Read more

One of the design goals of Unity was to have an application-centric user experience. Components like the Launcher consolidate all of an application's windows into a single icon instead of a set like the GNOME 2 panel. Nothing else in Ubuntu thinks about applications in this way, making it a difficult user experience to create. X11 worries about windows. DBus worries about connections. The kernel focuses on PIDs. None of these were focused on applications, just parts of applications. We created the BAMF Application Matching Framework (BAMF) to try and consolidate these, and while it has done a heroic job, its task is simply impossible. We need to push this concept lower into the stack.

First we looked at the display server and started thinking about how it could be more application centric. That effort resulted in Mir. Mir gets connections from applications and manages their associated windows together. They can have multiple windows, but they always get tracked back to the application that created them. Unity can then always associate and manage them together, as an application, without any trickery.

Application confinement also provides another piece of this puzzle. It creates a unified set of security policies for the application, independent of how many submodules or processes it has. One cache directory and one set of settings and policies follow the application around. AppArmor provides a consistent and flexible way of managing the policies along with the security that we need to keep users safe.
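As a rough illustration of what that confinement looks like, here's a simplified, hypothetical AppArmor profile; the application id and paths are invented, and the real policy is considerably more involved:

# hypothetical profile sketch, not the policy we actually ship
profile com.example.myapp {
  #include <abstractions/base>

  # the application's single cache and settings locations
  owner @{HOME}/.cache/com.example.myapp/** rw,
  owner @{HOME}/.config/com.example.myapp/** rw,

  # anything not listed here is denied by default
}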

To start looking at the process management aspect of this I began talking to James Hunt about using Upstart, our process manager in Ubuntu. Working together we came up with a small Upstart user session job that can start, stop, and track applications. I've pushed the first versions of that to a test repository in Launchpad. What this script provides is the simple semantics of doing:

$ start application APP_ID=gedit
$ stop application APP_ID=gedit
to manage the application. Of course, the application lifecycle is also important, but Upstart provides us a guaranteed way of making sure the application stops at the end of the session.
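To give a feel for the mechanics, a stripped-down version of such a job could look something like the sketch below. This is hypothetical; the real job in the test repository does considerably more (environment setup, confinement, and so on):

description "User Application"
# one instance of this job exists per application id
instance $APP_ID

# applications never outlive the session
stop on desktop-end

script
	# hypothetical launcher: run the Exec line from the desktop file
	exec $(sed -n 's/^Exec=//p' "/usr/share/applications/$APP_ID.desktop" | head -1)
end script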

Upstart can also help us to guarantee application uniqueness. If you try and start an application twice you get this:

$ start application APP_ID=inkscape
application (inkscape) start/running, process 30878
$ start application APP_ID=inkscape
start: Job is already running: application (inkscape)

This way we can ensure that a single icon on the launcher associates to a set of processes, managed by the process manager itself. In the past, libraries like libunique have accomplished this using DBus name registration, which, for the most part, works. But DBus registration relies on well-behaved applications that essentially guarantee their own uniqueness. By using Upstart we can have misbehaving applications and still guarantee their uniqueness for Unity to show the user.

We're just getting started on getting this setup and working. The schedule isn't yet final for vUDS next week, but I imagine we'll get a session for it. Come and join in and help us define this feature if it interests you.

Read more

For a while I've had a little project for debugging the desktop. Basically it starts tracking all of the DBus events at user session startup so that you can figure out what's going on. This is especially an issue for indicators, which are started at login, where it can sometimes be hard to track what is happening.

Previously it was pretty hard to inject into the startup of the session. Getting in the middle of building a very long command line was risky and pretty fragile; I'm not proud of what I had to do. Now that I have an Upstart user session running, I took the opportunity to port this debugging script over to Upstart.

Now I have this one simple configuration file that can be dropped in /usr/share/upstart/sessions and gets started immediately after dbus:

description "Bustle Boot Log"
author "Ted Gould "

start on started dbus
stop on desktop-end

script
	rm -f ~/.cache/bustle-boot-log/boot-log.bustle
	mkdir -p ~/.cache/bustle-boot-log/
	timeout -s INT 30 bustle-pcap --session ~/.cache/bustle-boot-log/boot-log.bustle &
end script

The beauty of this is that I can inject this small script and have Upstart figure out all the startup mess. It also has minimal impact on the natural desktop boot, which is critical for testing. Simple things to make debugging easier.

Read more

I've started to prototype and lay the foundations for the indicators to use the Upstart User Sessions. It's an exciting change to our desktop foundations, and while it's still very fresh, I think it's important to start understanding what it can do for us. For right now you're going to need a patch to Unity and a patch to indicator-network to even get anything working, not recommended for trying at home.

Previously, indicators worked by having the panel service load a small module containing the indicator-specific UI. That plugin also took on the responsibility of restarting the indicator backend, respawning it if it crashed. While this works and has created a robust desktop (most people don't notice when their indicator backends crash), it has had some downsides. For one, it makes it difficult to build and test new backends, as you pretty much have to restart Unity to stop the previous service from getting respawned. Also, all the debugging messages end up under the DBus process in ~/.xsession-errors because we were using DBus activation to start them.

With Upstart user sessions we're now getting a lot more power and flexibility in managing the jobs in the user session, so it makes sense for indicators to start using it to control their backend services. This comes with a set of advantages.

The first one is that there is better developer control of the state of the process. It's really easy to start and stop the service:

$ stop indicator-network
$ start indicator-network
and the ever exciting:
$ restart indicator-network
All of these ensure that the same commands are run each time in a recreatable way, plus they give the user and/or developer full control.

Upstart also takes the output of each process and puts it into its own log file. So for our example here there is a ~/.cache/upstart/indicator-network.log that contains all of the junk that the backend spits out. While this is nice just for making xsession-errors cleaner, it also means that we can have a really nice apport hook to pick up that file. Hopefully this will lead to easier debugging of every indicator backend bug, because they'll have more focused information on what the issue is. You can also file general bugs with ubuntu-bug indicator-network and get that same information attached.

In the future we'll also be able to do fine tuned job control using external events. So we could have the indicator network backend not start if you don't have any networking devices, but startup as soon as you plug in that USB cellular modem. We're not there yet, but I'm excited that we'll be able to reduce the memory and CPU footprint on devices that don't have all the features of higher end devices, scaling as the features are required.

Those that know me know that I love diagrams and visualizations, so I'll quickly say that I'm excited about being able to map our desktop startup using initctl2dot. This gives a Graphviz visualization of startup and how things interact. I expect this to be a critical debugging tool in the future.

What's next? Getting all the indicators over to the brand new Saucy world. We also want to get application indicators using a similar scheme and get a fast, responsive desktop. Hope to have a blog post or two on that in the near future.

Read more

Going to sleep last night I started thinking about inflation, which meant that I got to sleep later than I'd wished, but it also led to some interesting thoughts about where the US economy currently is. Popularly, inflation is considered bad, or something that needs to be controlled through monetary policy. And control it we have: in the US we've seen record low inflation rates, to the point where we've lowered interest rates to where they're practically a useless lever on the economy. It seems to me that we need higher inflation.

Inflation powers the creative destruction that makes capitalism work as an economic system. For value to increase relative to inflation we must reprice things, and they must continue to have more value to the people in the economy. If weak products just stick around, with their current value intact, they effectively stagnate, as they don't get ground under the treadmill of inflation. This is the evolutionary gauntlet that destroys those who are not fit to compete. Perhaps capitalism is driven less by an invisible hand and more by an invisible treadmill.

We can, as a country, control inflation and increase it artificially through government policy. Programs like farm and transportation subsidies ensure that food prices don't rise. Starting to remove many of these would force farmers to charge more, which in turn would cause derivatives like meat and dairy to raise prices, effectively pushing inflation into play.

The economy is an ecosystem of stored value, consumers, and producers. By eliminating inflation as a mechanism inside that ecosystem we've allowed the balance to shift and created a system that is off balance. We should adopt legislative policies to increase inflation.

Read more

Introducing HUD 2.0

As you might have noticed in the tablet information, we've put some time and effort into imagining the next steps for the HUD. For the last few months I've been leading a team to bring that into reality on the tablet.

Screenshot of HUD

Managing complexity

One of the problems facing application developers on a device like a tablet is adding functionality without making the entire interface feel cluttered. We even made the problem harder by emphasizing a content focused strategy in our SDK. Where do you put controls? There are various tools including toolbars, but none of them scale to even a moderately complex application. With the power of today's devices ever increasing, it's clear that tablet applications are going to become more complex.

We could just tell application authors not to do it: save that complexity for your desktop UI. But that wouldn't be convergence, that'd be creating silos for your applications to live in.

By using HUD we allow applications to expose rich functionality that is available to the user via search. Users can search through the actions exposed by the application to find the functionality that they need. We combine that with historical usage and recently used items to take into account what functionality the user uses in the application. It's the rich functionality of your application, customized for the individual user.

Designed for HUD

One of the things we realized early in the HUD 2.0 efforts is that we can no longer just passively take data from applications and make things better. We needed to go beyond menu sucking to having applications actively targeting the HUD. We're building HUD functionality directly into the SDK making it easily available for application developers to add actions that are visible in the HUD.

By providing a way for applications to export both actions and their descriptions to the HUD directly we can make that interaction much richer. We can do things like get the keywords programmatically so that they're included next to the original item definitions. This also allows us to define actions that have additional properties that can be adjusted with UI elements; we're calling these parameterized actions.
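To make that concrete, a parameterized action might be declared something like the sketch below. Fair warning: this is an illustrative guess at the shape of the API; the type names here (HUD.Action, HUD.SliderParameter) are not the final SDK names:

HUD.Action {
    text: i18n.tr("Exposure")
    keywords: i18n.tr("Brightness;Light Level")

    // a slider of integer percentages, the parameter type
    // supported today
    HUD.SliderParameter {
        text: i18n.tr("Percentage")
        minimumValue: 0
        maximumValue: 100
        // camera.setExposure is a stand-in for whatever your
        // application does with the value
        onValueChanged: camera.setExposure(value)
    }
}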

Screenshot of HUD using a parameterized action

Parameterized actions provide a way for applications to create a small autogenerated UI for simple settings and items that can be quick to edit. Let's be clear: autogenerated UIs aren't the best UIs, but done right they can be attractive and effective. We aim to do it right. Hopefully your application has a primary UI that is beautiful and tailored for its specific task and is then supplemented with additional settings and actions in the HUD; parameterized actions are no different there.

Currently in the code we only have support for sliders of integer percentages. That's pretty limiting. We plan on expanding that to most of the base widgets in the toolkit.

Voice

While talking to yourself makes you seem crazy, talking to your tablet is just cool. With the HUD we realized that we had a relatively small data set, and so it would be possible to get reasonable voice recognition using the resources available on the device. That makes for a great way to interact with an application: keyboards are chunky on any handheld device (but needed for when you're supposed to be paying attention to the person talking), and voice makes interacting much more fluid.

Screenshot of HUD doing voice

We built the voice feature around two different open source voice engines: Pocket Sphinx and Julius. While we started with Pocket Sphinx, we weren't entirely happy with its performance, and found Julius to start faster and provide better results. Unfortunately Julius is licensed under the 4-clause BSD license, putting it in multiverse and making it so that we can't link to it in the Ubuntu archive version of HUD. We're looking at ways to let people who do want to install it from multiverse easily use Julius, but what we'd really like is to make the Pocket Sphinx support really great. It's something we'd love help with. We're not voice experts, but some of you might be; let's make the distributable free software solution the best solution.

Enhancing Search

When we did user testing of the first version of the HUD, one of the biggest problems users had was composing a search in the terms used in the applications. It turns out users search for "Send E-mail" instead of "Compose New Message". I'm sure there are even some people who want to "Clear History" while others want to "Delete" it. To help this situation we've introduced keywords that can be added as a sidecar file for legacy applications, and defined directly for libhud-exported actions. These can then be searched as well, increasing the ability of application authors to provide different ways to express the same action.

Issues

One of the issues that the HUD has in general is discoverability. How do I know that this cool new app I downloaded can do color balancing? Does every app need to run a Super Bowl ad to make consumers aware of its features? We've got some ideas, but come and share yours on the unity-design mailing list.

Shut up and take my app

While the source is published, we still don't have beautiful documentation and developer help out there yet. We're working on it. You're welcome to look at the source code, or just hang tight, I'll have another blog post soon.

Read more

Cloud 2.0

I'm getting ahead of the curve and calling it: there will be a Cloud 2.0. I know, shocking, but I think that there are some interesting changes afoot that will lead to a change in cloud deployments. Enough to justify the 2.0 tag. If you haven't read it yet, you should probably read my 2GB is enough for anyone post, as it sets the stage for what's going on here.

The key part of that post, as it relates to this one, is that Internet companies will start having commercial relationships with telcos. They'll be paying for bandwidth, and they'll want to optimize their own costs. But before we can talk about 2.0 and how that optimization can happen, let's first define what 1.0 is.

Cloud 1.0

There are a million definitions of what "cloud" is, and I'm not going to attempt to clear up all the confusion or provide a comprehensive definition. That'd result in failure. But because of the confusion, I feel that we need a common understanding to work from.

I'm going to go ahead and talk about Cloud 1.0 as the move from servers you own to servers you rent, whose real physical location you have no idea about. Sure, you know that Amazon has different zones, but most people couldn't give you the mailing address for them. This is a transition from a world where you knew your hardware and did a lot of planning around it. If you thought you were going to have 10x as many customers next month, you were already behind on ordering and deployment. Today, there's no reason to care about that except to optimize your costs, which is something most startups aren't immediately concerned about. This is a good time to start a company whose main costs would have been servers.

From "where?" to "how many?"

At some point, when your server setup becomes super fluid, you start losing track of how many servers you have. "Did we set up a second load balancer?" "Oh, I guess someone should figure that out." Today, while setting up that second server takes little time (it might be as easy as juju add-unit), you still might not be entirely managing the exact numbers there.
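(For the curious, with Juju adding that second load balancer really is one command, assuming a haproxy charm is already fronting your app:)

$ juju add-unit haproxy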

In the future I think it is fair to say that there will be software managing the numbers for you: instead of allocating servers, you'll specify a quality of service that you want to maintain (and this will involve costs as well; maybe you can skip a daily backup during the high shopping season). That change will mean that the only things that actually know how many servers you have are your management software and the person billing you for the time. But you can still find out how many servers are out there, and you're still paying for individual machines. We can call this Cloud 1.5 in this discussion. It changes how you think about your setup, but in reality your architecture is roughly the same.

Who cares how many servers

The problem even with that setup is that every customer of your service is still going across the Internet to get there. The edges of the network are getting faster at a more rapid rate than the backbone. This is most evident in the cell phone case, where LTE speeds mean rapid access to the cell tower is commonly available, but getting from the cell tower onward is a more difficult proposition. Cellular providers and high speed home networking providers are trying to combat this with bandwidth limits, but that's not a great solution for anyone. It does, however, lead to a unique relationship between Internet companies and cellular providers.

How does this affect clouds? The Internet services and the cellular network providers already have a relationship built around avoiding bandwidth caps; now they're both interested in reducing bandwidth costs. What would reduce bandwidth costs the most? Running the servers directly in the cell tower. Why not? With cloud platforms already having Internet companies running in virtualized environments, there's no reason those couldn't be expanded to the towers themselves.

It makes sense to me that these would be deployed on demand based on usage. There will always be a limitation of space, and if Facebook is popular in Texas but MySpace in California, you'd want to use resources only where it makes sense. Which means that, at some level, deployment needs to be controlled by the cellular operator from images provided by the company offering the service. Which means that, as a company providing a service on the Internet, you might not even know how many of your servers are running. And that, not knowing how many servers you're actually running, that is Cloud 2.0.

Read more

UDS Raring Key Sessions

It's a bit after UDS Raring, and I'll blame not writing this entry on my blog being broken. It's fixed now, and I wanted to take the opportunity to talk about a couple of themes from UDS Raring that I'm excited about but that haven't gotten a lot of press generally.

Upstart in the user session. There is a bunch of work going on in 13.04 to start building the basis for a totally Upstart-based user session. Hopefully in 13.04 it'll slip in slightly underneath, with some things changing, so the big change can happen safely in 13.10 and then be solid for 14.04. What this means in real user-visible terms is that we can start to kill some of the long-running processes that just wait for events. Upstart gives us the ability to have more sophisticated job starting and stopping, and event listeners, so that only the parts of the system you're using are running. The rest of the system can lay there... waiting... not sucking up resources.

Application containment (1, 2, 3, 4). There has always been an assumption in the Unix world that what happens in user space stays in user space, and that's okay, the system is secure. This assumption has mostly worked out for us, as the number of applications running in user space has been fairly limited, and largely trusted. With things like the ability to easily publish applications in the Software Center, that barrier is getting lower, which is a really good thing, but it means we need to rethink security in that context. This work has been going on for a few UDSes, but at UDS-R I felt like it took a real turn toward being a workable plan with a solution.

So while a few other big things happened at UDS this time around, these are the ones that I'm most excited about. I think they position Ubuntu well for a life on a variety of devices.

Read more

Desktop in the Cloud

A feature that my squad worked on for the Ubuntu 12.10 release is the remote greeter. In a nutshell, this is support in the greeter to launch into a full-screen remote login under a guest user on the local machine. This means that you don't need to authenticate to the local machine (you're using the guest account) and you can quickly get access to your remote machine, which can be anywhere on the Internet.

One of the problems we realized pretty quickly was that remembering the hostnames for all of your machines when they're "on a cloud" somewhere was going to make this feature much less useful. So we talked with some of our friends on the server side of Canonical and asked them to help. They've created a small service tied into Ubuntu SSO that will store a list of servers for you. We tied this back in so you only need to remember your Ubuntu SSO login, which will fetch the list of servers, and then you can select which one you'd like to log in to.

I realize now you want to play with this feature, and we can help you out there too. What we've done is create a Juju charm called xrdp-desktop that will create an Ubuntu desktop that you can connect to via RDP. You can find instructions on deploying it in its README but you probably want to make sure you set up Juju first. After you've got it running you can add it to your server list and then you have it anywhere there is an Ubuntu machine.
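Assuming you've already bootstrapped a Juju environment, deployment is roughly the following; the charm's README remains the authoritative set of steps:

$ juju deploy xrdp-desktop
$ juju status    # find the unit's public address for your RDP client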

Now you can quickly try out Kubuntu or get real "private browsing" on a disposable machine or perhaps remember what the defaults were before you customized everything. Perhaps you want to open that attachment to see if you really are the niece or nephew of a Nigerian prince. This is the tip of the iceberg for how this feature could be used. It only supports RDP right now. What we've spent our time on is ensuring that there's a reasonable framework for adding new protocols and ways to use this feature in the future. I'm excited to see what people will do with it!

Read more

Updated GPG Key

Finally, I'm getting around to updating my GPG key. Here are the new details for those who are interested, otherwise wait for an update coming soon to a keyserver near you!

pub   4096R/33E6185C 2012-08-07
      Key fingerprint = 46C2 E0AE 5B56 39B4 DCE1  454D 9E28 586D 33E6 185C
uid   Ted Gould <ted@gould.cx>

The new one is signed by the old, so I should still be in your web of trust, just a little further away.

Read more

For a while we've been using dbus-test-runner in various DBus-related projects to create a clean DBus session bus for our test suites. This also makes it so that we can test on headless systems that don't have a standard DBus configuration like what is available on most developers' desktop systems. This release we even got it into the Ubuntu archives so we could run our test suites on package builds. As our testing has gotten more mature and we've increased the number of builds on the Ubuntu QA Jenkins server, we need better reporting, something like what gtester or Google Test can provide, which is difficult with an external utility.

To handle this we've taken dbus-test-runner, turned it entirely inside out, and created libdbustest. This allows the DBus service used in the test setup to be managed by the test framework itself, which means you can have a DBus session per test, or share them, or whatever you need. You've got choices that can match what you're trying to test. I created a small example using gtester that is part of the dbus-test-runner test suite. I expect that we'll be able to port more of the various ubuntu-menu-bar projects' test suites in the coming weeks.

For those faithful users of the dbus-test-runner command line utility, no worries: it still exists and just uses libdbustest. I expect no regressions, as it passes the original test suite and even maintains the code coverage numbers that we worked on in an earlier post. It still has a place in some testing; I expect its usage to remain as another way to test applications and interfaces.
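For reference, a typical invocation of the command line utility, running a test binary against its own private session bus, looks something like this (I'm writing the flags from memory, so double-check against --help):

$ dbus-test-runner --task ./test-my-service --task-name my-service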

At this point I've merged the basic ideas behind what we're trying to achieve, but there's still a lot of time to work on the API and make it perfect. If you've got requirements or ideas please share them and let's make this useful for everyone. We also need documentation and GObject Introspection support so this little guy can be used for Python and JavaScript testing as well. There's a lot of work to do, but I'm excited about where this project is going.

Read more

Over the history of human society we can see great increases in productivity through unlocking the potential of greater and greater segments of the human population. Today, more people can contribute creatively to the betterment of the world than ever before. While more work is to be done there, an eventual maximum is possible. What happens if all humans are working together for the betterment of humanity? Where will our next gains in productivity come from? At Canonical we're thinking beyond this eventual limit, and that's one of the reasons that we introduced the HUD. The HUD makes your cat more productive than ever before.

Cat launching nukes

Extensive research has been done on how cats interact with keyboards; it can be seen in chat rooms all across the Internet. From "awsd" to "jijkl", these appear nonsensical to everyone else, but they are the cat trying to use a computer effectively. Using the HUD matching algorithms, the cat is able to select "Launch Nukes" from the menu using these simple key combinations.

There is certainly more work to be done. For instance, it's impossible for a cat to unlock the screen at the start of a workday. This is why we're investing in kitty facial recognition at the screen saver lock screen. We're also looking at laser pointer based mouse control: by swiping the "virtual pointer" the cat is able to control the pointer on screen. Lastly, we're working on a remote control helicopter based catnip delivery system to take advantage of the eventual disposable income of working cats.

We hope that you are as excited about these potential advancements in Earth's overall productivity as we are. By thinking beyond the human race we hope to find advancements that help humans as well.

Cat pic CC-BY-NC-SA by Sophie

Read more

In order to reduce the bus factor a bit, I'm documenting how I do releases for the indicators. This is not a complex process, but it's not obvious either. This post will involve lots of Bazaar, Launchpad and Ubuntu. If that's not what you want, skip to the next post in your reader.

Understanding Ubuntu Distributed Development

I think the first thing to understand is what our goals are in making a set of releases. A release is a set of artifacts marking a point in the continual development of the software. But that point is marked differently by different groups of people. Largely this is mirrored by the "trunks" of Ubuntu Distributed Development. It's important to realize that Bazaar doesn't really have the concept of a "trunk" in it. We name a branch in Launchpad to make it easier to communicate, but Bazaar doesn't treat that branch any differently than any other. So each of the different audiences has its own trunk, and releases there mean different things. The trunks we're using are:

  • Upstream Trunk. Traditionally this is what software developers think of as trunk: the place where the artifacts that are edited by the developers live.
  • Upstream Import Trunk. This is a branch that is maintained by the UDD tools and represents the tarball releases. The tarballs are slightly different than upstream in that they contain additional files generated by the build.
  • Upstream packaging. A packaging branch that is mostly a pure representation of the upstream software, perhaps more up-to-date than distro. This allows us to do test builds and maintain updates for things like our Jenkins builds, independent of having to ship the change to every Ubuntu user.
  • Ubuntu packaging. A branch maintained by the Ubuntu Desktop team that represents what is sent to Ubuntu users.

As we move through the release process we'll go through these branches. Some will end up slightly obscured, but I'll try to point them out along the way.

Releasing Upstream

Building and shipping the upstream tarballs is mostly straightforward, but I want to take the time to be explicit and note a couple of tricks that are useful along the way. For our example here I'm going to release indicator-messages version 1.2.3. First grab the trunk:

$ bzr branch lp:indicator-messages trunk

Then you can edit configure.ac to update the version and commit that. It's important to commit before distcheck, since our ChangeLog is built dynamically at dist time and you want the top of it to be the version number that is released.

$ vi configure.ac
$ bzr commit -m 1.2.3

Note, we're not tagging yet, and that's for good reason! Tags are a pain to move, and as releases go there's a reasonable chance that distcheck will fail. So let's check first:

$ ./autogen.sh
$ make
$ make distcheck

Distcheck will build the tarball, uncompress it and ensure it builds and can run the test suite. Most of the time this works so you can just:

$ bzr tag 1.2.3
$ bzr push :parent

But sometimes it doesn't, and you'll need to fix it. I'm going to leave fixing it as an exercise for the reader; what I want to talk about is how that should look on the branch. You don't want your fix to occur after you set the version in the history. So you can step back by doing:

$ bzr uncommit
$ bzr shelve

This saves your changes, so that when you want to come back to releasing again:

$ bzr unshelve
$ bzr commit -m 1.2.3

And you're back to where we left off. We need to put the tarball into Launchpad, and fortunately there's a handy script for that:

$ lp-project-upload indicator-messages 1.2.3 indicator-messages-1.2.3.tar.gz 1.2.4

An interesting thing here is the "1.2.4": where does that come from? The script is creating the next milestone in Launchpad for you, so that we can start assigning Fix Committed bugs to it and they're already there come release time. You'll also need to deal with the bugs that are Fix Committed on the 1.2.3 release; you can do that with this script if there are a bunch, but if there aren't too many just do it by hand.

Building a test package

We've built and released a tarball, it passes all the tests, and all the code's been reviewed, but until you install it on your machine you're not putting any skin in the game. Let's really believe in that release. First we need to get the upstream packaging branch. Go ahead and put that in a directory next to your trunk branch. In this example we're assuming the Ubuntu release is precise, because that's current right now.

$ bzr branch lp:~indicator-applet-developers/ubuntu/precise/indicator-messages/upstream upstream

You should at this point make sure that this branch is in sync with the Ubuntu Desktop branch. Frequently they'll make updates to the packaging or cherry-pick revisions they're interested in from the developers. This isn't a necessary step to make everything work, but it makes the merge easier for them, so you should do it.

$ bzr merge lp:~ubuntu-desktop/indicator-messages/ubuntu
$ bzr commit -m "Sync from Ubuntu Desktop" 

We're now going to use the UDD toolchain to pull in the tarball and merge it into the packaging. I'm purposely providing two parameters here that are optional. The first is the tag that should be pulled from; chances are that if you just edited trunk the revision there would be exactly the same, but let's not take that chance. The second is the version, where I'm just being more explicit about what we're doing.

$ bzr merge-upstream indicator-messages-1.2.3.tar.gz lp:indicator-messages -r tag:1.2.3 --version 1.2.3

At this point the tarball has been imported and merged into the current branch. One innocuous warning that you'll see in this step is about criss-cross merges. This usually happens when the desktop team cherry-picks a branch. I have argued that this shouldn't be a warning; largely I regard it as Bazaar saying "if there are conflicts they're your fault," and rarely are there ever conflicts from this.

We're now at the point where we start switching from the Bazaar-based tools to the Debian ones. We need to describe what's changed in the Debian changelog by using:

$ dch -a

There will already be a section made for you by the import, but you should continue to describe the upstream release with what has changed, specifically which bugs have been fixed. Also, I always change the version number from "1.2.3-0ubuntu1" to "1.2.3-0ubuntu1~ppa1". This is so that when I install the packages locally they'll get overridden by the ones that are built in the Ubuntu builders. It also creates a unique namespace that I can increment if I mess up. You can then commit with the message you put into the changelog with:

$ debcommit

A couple of quick things that you can check at this point. Debian packages have a way to include patches; we should have merged those upstream, and if you leave them in place they won't apply when building. So you can just remove them and note as much in the changelog:

$ rm -rf debian/patches
$ dch -a
$ debcommit

You can also check that you're shipping exactly the tarball that you built, plus only the Debian packaging. All of the Debian packaging should be in the debian/ directory, so we can look to see if anything changed outside that directory:

$ bzr diff -r tag:upstream-1.2.3 | lsdiff

The astute reader will notice that I just used a tag that I didn't build. Hmm. This is where the Upstream Import branch comes into play. When we did the merge-upstream above we actually edited two branches. The first is the import branch where the clean tarball was placed. Then it was pulled into our packaging branch from that revision. I find it handy to look at the revision history at this point

$ bzr visualize

Now let's build ourselves a package. The first thing you want to do is set up your environment if you haven't done any packages before. Usually if it builds I also tag it as a version in the revision history so that I can keep track of what I've installed. So to ensure I don't mess that up I chain the commands

$ bzr builddeb && dch -r && debcommit -r

You should now have packages in the directory above the current directory that you can install.

$ sudo dpkg -i *1.2.3-0ubuntu1~ppa1*deb

Now you can test on your machine if things are generally working.

Pushing downstream

If everything looks good you should push that branch back up and then propose a merge into the Ubuntu Desktop branch. This tells the Ubuntu Desktop team that we've got a package that we think works and that they should consider for use in Ubuntu. They'll run their own set of acceptance tests, and I'm sure they'll find you if those fail.

Read more

It has become popular (at least in the US) to put bandwidth caps on customers. Technologists are upset; they've seen Star Trek and they know that everyone is going to start video conferencing constantly any time now. Those uses would easily go over the paltry limits that network providers have set up. Real consumers won't care; what will actually happen is that the economics of connecting to the Internet will change.

We can start to see this with AT&T's recent foray into allowing phone apps to pay for the bandwidth that their app consumes. This would mean that a user could grab the new app from their favorite sports team and not worry that the video highlights will knock them over their cap; the bandwidth is "free." The reality is that the way it's paid for has changed. Instead of being amalgamated over all the users, averaged, and sold at a fixed rate for all you can use, it's being paid for out of the money you're paying the application provider. If it's $10/mo to Netflix, they're giving AT&T a kickback of 10% of that to ensure you don't worry about using it.

Consumers will love this! Why? Because what will in turn happen is that the cost of the bandwidth to them will go down. Do you want a gigabit of bandwidth to your house? Sure you do, especially if it's only $20/mo. What other restrictions would you accept to get that much bandwidth so cheap? I'm guessing if most people didn't feel the pain of the restrictions, they'd all take that trade. And this is where the "gain" to consumers lies. Most of them aren't doing BitTorrent; they're using Hulu and Google and YouTube, all people who can give the DSL provider money to subsidize that bandwidth, whether it be from subscriptions or advertising.

Network neutrality in this scenario sadly becomes a moot point. The provider isn't giving any preference to the routing, or deciding which packets to put on which line; they're just adjusting how they bill the customer. You can rest assured that no one (at least in the US) would regulate how the ISP bills the customer. But the restrictions would be such that consumers would choose (no matter how manipulated that choice is) one site over another based on its cost, which is effectively a price negotiated by their ISP. Again, the ISP is manipulating and choosing which sites the user is most likely to see and use.

It seems doubtful to me that all bandwidth will go to this model, only consumer-level home and mobile bandwidth. Companies and universities are unlikely to see significant change in how they purchase their own bandwidth, as there isn't as much direct billing there. But I think it might affect how people can work from home. I imagine this will result in the ability to purchase unlimited bandwidth to a single domain for VPN usage.

There is, of course, a lot more to say about this. It's an interesting pivot in how we see the Internet structured today, but was probably inevitable. We continued to squeeze the companies that owned the last miles to our houses, asking for more bandwidth for cheaper. And then we had the gall to actually use it! They're trying to figure out how to recoup those costs, and consumers aren't willing to pay more, so they have to hide it.


Read more

One of the prongs of our strategy to increase the quality of software throughout our development in Product Strategy was to introduce static analysis to our code where possible. After evaluating several tools we decided that Coverity was the best fit, and we started figuring out how to make it work on our code bases. After a fair amount of work pulling things together, I'm happy to say we're scanning projects and processing the bugs in production now. You can see the open bugs and find out about what data is there.

While there were various technical issues, I think one of the more interesting and difficult ones was social. How can we introduce a tool whose distribution is restricted to a community without restrictions? It would have been very easy to create two classes of developers: those in Canonical with access to the tool, and those outside who wouldn't have "all the info" about what was going on. We didn't want to do that; it's not how we work.

What we did was build a tool that would take the bugs out of the Coverity Integration Manager (CIM) and put them into Launchpad. So every time a commit happens on trunk a Coverity Scan is performed, the issues are put into CIM, and then Launchpad is updated. This includes both creating new defects in Launchpad as well as closing ones that were fixed. We also take the annotations of what branches Coverity took throughout the code and create an annotated source file and attach it to the bug.

Our sync tool is open source, and while it's hard to test, as you'd need a license for Coverity to do so, we're happy to take patches. We want all the information needed to work on a bug to be in the Launchpad bug. If there's something you think we need to add, come talk to us; it's a conversation we're interested in.

In the end we hope that we've created an environment that allows for Coverity to be used by everyone in our development community, on largely equal footing. Currently we're only licensed to scan Unity and the other projects it uses directly. I'm excited to see how we can use this new tool to improve the quality of PS projects as we continue to expand it to scan more projects we're licensed for, along with hopefully expanding its coverage as it shows value.


Read more

Procmail vs. Launchpad

Launchpad users know that it can send quite a bit of e-mail. Okay, a LOT of e-mail. There has been effort on the Launchpad side to add controls for the amount of Launchpad e-mail you get. But for some of us, even getting just the mail that we need results in a fair amount of Launchpad mail. In playing with my procmail config for Launchpad mail I stumbled on this little feature that I love, and thought I'd share, as it's cleaned up my mail a lot. The secret rule is:

:0:
* ^X-launchpad-bug:.*product=\/[a-z0-9-]+
.Launchpad.Bugs.${MATCH}/

Quite simply, that rule takes the project specifier on a bug mail and uses it as the name of the folder that the mail is filed into. This means each project gets its own mailbox, no matter what. So even as you add groups or subscribe to new bugs, you just get new mailboxes. Works for me. Hope it helps other folks too.


Read more