Canonical Voices

Posts tagged with 'featured'

Robin Winslow

Last weekend I went to my first PyCon, my second conference in a fortnight.

The conference runs from Friday to Monday, with 3 days of talks followed by one day of “sprints”, which is basically a hack day.

PyCon has a code of conduct to discourage any form of othering:

Happily, PyCon UK is a diverse community who maintain a reputation as a friendly, welcoming and dynamic group.

We trust that attendees will treat each other in a way that reflects the widely held view that diversity and friendliness are strengths of our community to be celebrated and fostered.

And for me, the conference lived up to this, with a very friendly feel and a lot of diversity in its attendees. The friendly and informal atmosphere was impressive for such a large event, with more than 450 people.

Unfortunately, the Monday sprint day was cut short by the discovery of an unexploded bomb.

Many keynotes, without much Python

There were a lot of “keynote” talks, with 2 on Friday, and one each on Saturday and Sunday. And interestingly none of them were really about Python, instead covering future technology, space travel and the psychology of power and impostor syndrome.

But of course there were plenty of Python talks throughout the rest of each day – you can read about them in my other post. And I think it was a good decision to have more abstract keynotes. It shows that the Python community really is more of a general community than just a special interest group.

Van Lindberg on data economics, Marx and the Internet of Things

In the opening keynote on Friday morning, the PSF chairman showed that total computing power is almost doubling every year, and that by 2020, the total processing power in portable devices will exceed that in PCs and servers.

He then used the fact that data can’t travel faster than 11.8 inches per nanosecond to argue that we will see a fundamental shift in the economics of data processing.

The big-data models of today’s tech giants will be challenged as it starts to be quicker and make more economic sense to process data at source, rather than transfer it to distant servers to be processed. Centralised servers will be relegated to mere aggregators of pre-processed data.

He likened this to Marx’s workers seizing the means of production, in a movement which will empower users, as our portable Things start to hold the real information and choose who to share it with.

I really hope he’s right, and that the centralised data companies are doomed to fail and be replaced by the Internet of Autonomous Things, because the world of centralised data is not an equal world.

Does Python have a future on small processors? Isn’t it too inefficient?

In a world where all the interesting software is running on light-weight portable devices, processing efficiency becomes important once again. Van used this to argue that efforts to run Python effectively on low-powered devices, like MicroPython, will be essential for Python as a language to survive.

Daniele Procida: All I really want is power

The second keynote came just after lunch on Friday, when Daniele Procida, organiser of DjangoCon Europe, openly admitted that what he really wanted out of life was power. He put forward the somewhat controversial idea that power and usefulness are the same thing, and that ideas without power are useless.

He made the very good point that power only comes to those who ask for it, or fight for it. And that if we want power not to be abused, we really need to talk about it a whole lot more, even though it makes people uncomfortable (try asking someone their salary). We should acknowledge who has the power, and what power we have, and watch where the power goes.

He suggested that, while in politics or industry power is very much a rival good, in open source it is entirely a non-rival good. The way you grab power in the open source community is by doing good for the community, by helping out. And so by wielding power you are actually increasing power for those around you.

I don’t agree with him on this final point. I think power can be and is hoarded and abused in the open source community as well. A lot of people use their power in the community to edge out others, or make others feel small, or to soak up influence through talks and presentations and then exert their will over the will of others. I am certainly somewhat guilty of this. Which is why we should definitely watch the power, especially our own power, to see what effect it’s having.

The takeaway maxim from this for me is that we should always make every effort to share power, as opposed to jealously guarding it. It’s not that sharing power in the open source community is inevitable or necessarily comes naturally, but at least in the open source community sharing power genuinely can help you gain respect, where I fear the same isn’t so true of politics or industry.

Dr Simon Sheridan: Landing on a comet: From planning to reality

Simon Sheridan was an incredibly humble and unassuming man, given his towering achievements. He is a world-class space scientist who was part of the European Space Agency team that helped to land Rosetta’s Philae probe on comet 67P.

Most of what he mentioned was basically covered in the news, but it was wonderful to hear it from his perspective.

Naomi Ceder: Confessions of a True Impostor

When, a short way into her Sunday morning keynote, Naomi Ceder asked the room:

How many of you would say that you have in some way or another suffered from imposter syndrome along with me?

Almost everybody put their hands up. This is why I think this was such an important talk.

She didn’t talk about this per se, but contributing to the open source community is hard. No-one talks about it much, but I certainly feel there’s a lot of pressure. Because of its very nature, your contributions will be open, to be seen by anyone, to be criticised by anyone. And let’s face it, your contributions are never going to be perfect. And the rules of the game aren’t written down anywhere, so the chances of being ridiculed seem pretty high. Open source may be a benevolent idea, but it’s damned scary to take part in.

I believe this is why less than 2% of open source contributors are female, compared with more like 25-30% women in software development in general. And, as with impostor syndrome, the same trend is true of other marginalised groups. It’s not surprising to me that people who are used to being criticised and discriminated against wouldn’t subject themselves to that willingly.

And, as Naomi’s question showed, it is not just marginalised people who feel this pressure, it’s all of us. And it’s a problem. As we know, confidence is no indicator of actual ability, meaning that many many talented people may be too scared to contribute to open source.

As Naomi pointed out, impostor syndrome is a socially created condition – when people are expected to do badly, they do badly. In fact I completely agree with her suggestion that the existing Wikipedia definition of impostor syndrome (at the time of writing) could be more sensitively phrased to define it as a “social condition” rather than a “psychological phenomenon”, as well as avoiding singling out women.

While Naomi chose to focus in her talk on how we personally can try to mitigate feelings of being an impostor, I think the really important message here is one for the community. It’s not our fault that open source is scary, that’s just the nature of openness. But we have to make it more welcoming. The success of the open source movement really does depend on it being diverse and accepting.

What I think is really interesting is that stereotype threat can be mitigated by reminding people of their values, of what’s important to them. And this is what I hope will save open source. The more we express our principles and passion for open source, the more we express our values, the easier it is to counter negative feelings, to be welcoming, to stop feeling like impostors.

A great conference

Overall, the conference was exhausting, but I’m very grateful that I got to attend. It was inspiring and informative, and a great example of how to maintain a great community.

If you want you can now go and read about the other talks.

Robin Winslow

The weekend before last, I went to PyCon UK 2015.

I already wrote about the keynotes, which were more abstract. Here I’m going to talk about the other talks I saw, which were generally more technical or at least had more to do with Python.


The talks I saw covered a whole range of topics – from testing, through documentation and ways to achieve simplicity, to leadership.

The talks

Following are slightly more in-depth summaries of the talks I thought were interesting.


Leadership of Technical Teams – Owen Campbell

There were two key points I took away from this talk. The first was Owen’s suggestion that leaders should take every opportunity to practice leading. Find opportunities in your personal life to lead teams of all sorts.

The second point was more complex. He suggested that all leaders exist on two spectra:

  • Amount of control: hands-off to dictatorial
  • Knowledge of the field: novice to expert

The less you know about a field the more hands-off you should be. And conversely, if you’re the only one who knows what you’re talking about, you should probably be more of a dictator.

He cautioned, though, that people tend to misestimate their ability, particularly when it comes to process (e.g. agile), where people think they know more than they do. No-one is really an expert on process.

He suggested that leading technical teams is particularly challenging because you slide up and down the knowledge scale on a minute-to-minute basis sometimes, so you have to learn to be authoritative one moment and then permissive the next, as appropriate.

Document all the things – Kristian Glass

Kristian spoke about the importance, and difficulty, of good documentation.
Here are some particular points he made:

  • Document why a step is necessary, as well as what it is
  • Remember that error messages are documentation
  • Try pair documentation – novice sitting with expert
  • Checklists are great
  • Stop answering questions face-to-face. Always write it down instead.
  • Github pages are better than wikis (PRs, better tracking)

One of Kristian’s main points was that writing documentation goes against the grain: the person with the knowledge can’t see why it’s important, and the novice can’t write the documentation.

He suggested pair documentation as a solution, which sounds like a good idea, but I was also wondering if a StackOverflow model might work, where users submit questions and the team treats them like bugs, staying on top of answering them. This answer base would then become the documentation.


Asking About Gender – the Whats, Whys and Hows – Claire Gowler

Claire spoke about how so many online forms expect people to be either simply “male” or “female”, when the truth can be much more complicated.

My main takeaway from this was the basic point that forms very often ask for much more information than they need, and make too many assumptions about their users. When it comes to asking someone’s name, try radically reducing the complexity by just having one text field called “name”. Or better yet, don’t even ask their name if you don’t need it.

I think this feeds into the whole field of simplicity very nicely. A great many apps try to do much more than they need to, and ask for much more information than they need. Thinking about how little you know about your user can help you realise what you actually don’t need to know about your user.

Finding more bugs with less work – David R. MacIver

David MacIver is the author of the Hypothesis testing library.

Hypothesis is a Python library for creating unit tests which are simpler to write and more powerful when run, finding edge cases in your code you wouldn’t have thought to look for. It is stable, powerful and easy to add to any existing test suite.

When we write tests normally, we choose the input cases ourselves, and we often end up being really kind to our code. E.g.:
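Something like this minimal sketch, with a hypothetical mean() function (not an example from the talk):

    import unittest

    def mean(numbers):
        return sum(numbers) / len(numbers)

    class TestMean(unittest.TestCase):
        def test_mean(self):
            # Hand-picked, friendly input - edge cases never get exercised
            self.assertEqual(mean([1, 2, 3]), 2)

    if __name__ == "__main__":
        unittest.main()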

What Hypothesis does is help us test with a much wider and more challenging range of values. E.g.:
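Continuing the hypothetical mean() example, a Hypothesis test generates the inputs itself, and can famously turn up cases (such as floating-point overflow) that hand-picked values never would:

    from hypothesis import given
    from hypothesis import strategies as st

    def mean(numbers):
        return sum(numbers) / len(numbers)

    @given(st.lists(st.floats(allow_nan=False, allow_infinity=False), min_size=1))
    def test_mean_is_within_bounds(numbers):
        # Hypothesis explores a wide range of lists, including nasty edge cases
        assert min(numbers) <= mean(numbers) <= max(numbers)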

There are many cases where Hypothesis won’t be much use, but it’s certainly good to have in your toolkit.


Simplicity Is A Feature – Cory Benfield

Cory presented simplicity as the opposite of complexity – that is, the fewer options something gives you, the more simple and straightforward it is.

“Simplicity is about defaults”

To present as simple an interface as possible, the important thing is to have as many sensible defaults as possible, so the user has to make hardly any choices.

Cory was heavily involved in the Python Requests library, and presented it as an example of how to achieve apparent simplicity in a complex tool.

“Simple things should be simple, complex things should be possible”

He suggested thinking of an “onion model”, where your application has layers, so everything is customisable at one of the layers, but the outermost layer is as simple as possible. He suggested that 3 layers is a good number:

  • Layer 1: Low-level – everything is customisable, even things that are just for weird edge-cases.
  • Layer 2: Features – a nicer, but still customisable interface for all the core features.
  • Layer 3: Simplicity – hardly any mandatory options, sensible defaults
    • People should always find this first
    • Support 80% of users 80% of the time
    • In the face of ambiguity do the right thing

He also mentioned that he likes README-driven development, which seems like an interesting approach.

How (not) to argue – a recipe for more productive tech conversations – Harry Percival

I think this one could be particularly useful for me.

Harry spoke about how many people (including him) have a very strong need to be right. Especially men. Especially those who went to boarding school. And software development tends to be full of these people.

Collaboration is particularly important in open source, and strongly disagreeing with people rarely leads to consensus, in fact it’s more likely to achieve the opposite. So it’s important that we learn how to get along.

He suggests various strategies to try out, for getting along with people better:

  • Try simply giving in, do it someone else’s way once in a while (hard to do graciously)
  • Socratic dialogue: Ask someone to explain their solution to you in simple terms
  • Dogfooding – try out your idea before arguing for its strength
  • Bide your time: Wait for the moment to see how it goes
  • Expose yourself to other social situations, where arguments are less acceptable

All of this comes down to stepping back, waiting and exercising humility. All of which are easier said than done, but all of which are very valuable if I could only manage it.

FIDO – The dog ate my password – Alex Willmer

After covering fairly common ground of how and why passwords suck, Alex introduced the FIDO alliance.

The FIDO alliance’s goal is to standardise authentication methods and hopefully replace passwords. They have created two standards for device-based authentication to try to replace passwords:

  • UAF: First-factor passwordless biometric authentication
  • U2F: Second-factor device authentication

Browsers are just starting to support U2F, whereas support for UAF is further off. Keep an eye out.

Data Visualisation with Python and Javascript – crafting a data-visualisation for the web – Kyran Dale

Kyran spoke about visualising data, and demoed using Scrapy and Pandas to retrieve Nobel laureate data from Wikipedia, using Flask to serve it as a RESTful API, and then using D3 to create an interactive browser-based visualisation.

Robin Winslow

We recently introduced Vanilla framework, a light-weight styling framework which is intended to replace the old Guidelines framework as the basis for our Ubuntu and Canonical branded sites and others.

One of the reasons we created Vanilla was because we ran into significant problems trying to use Guidelines across multiple different sites because of the way it was made. In this article I’m going to explain how we structured Vanilla to hopefully overcome these problems.

You may wish to skip the rationale and go straight to “Overall structure” or “How to use the framework”.

Who’s it for?

We in Canonical’s design team will definitely be using Vanilla, and we also hope that other teams within Canonical can start to use it (as they did with Guidelines before it).

But most importantly, it would be fantastic if Vanilla offers a solid enough styling basis that members of the wider community feel comfortable using it as well. Guidelines was never really safe for the community at large to use with confidence.

This is why we’ve made an effort to structure Vanilla in such a way that any or all of it can be used with confidence by anyone.

Limitations of Guidelines

Guidelines was initially intended to solve exactly one problem – to be a single resource containing all the styling for This would mean that we could update Guidelines whenever we needed to update’s styling, and those changes would propagate across all our other Ubuntu-branded sites (e.g. or

So we simply structured the markup of these sites in the same way, and then created a single hosted CSS file, and linked to it from all the sites that needed Ubuntu styling.

As time went on, two large problems with this solution emerged:

  • As over 10 sites were linking to the same CSS file, updating that file became very cumbersome, as we’d have to test the changes on every site first.
  • As the different sites became more individual over time, we found we were having to override the base stylesheet more and more, leading to overly complex and confusing local styling.

This second problem was only exacerbated when we started using Guidelines as the basis for Canonical-branded sites (e.g. as well, which had a significantly different look.

Architecture goals for Vanilla

Learning from our experiences with Guidelines, we planned to solve a few specific problems with Vanilla:

  • Website projects could include only the CSS code they actually needed, so they don’t have to override lots of unnecessary CSS.
  • We could release new changes to the framework without worrying about breaking existing sites, allowing us to iterate quickly.
  • Other projects could still easily copy the styles we use on our sites with minimal work

To solve these problems, we decided on the following goals:

  • Create a basic framework (Vanilla) which only contains the common elements shared across all our sites.

    • This framework should be written in a modular way, so it’s easy to include only the parts you need
  • Extend the basic framework in “theme” projects (e.g. ubuntu-vanilla-theme) which will apply specific styling (colours etc.) for that specific brand.

    • These themes should also only contain code which needs to be shared. Site-specific styling should be kept local to the project
  • Still provide hosted compiled CSS for sites to hotlink to if they like, but force them to link to a specific version (e.g. vanilla-framework-version-0.0.15.css) rather than “latest” so that we can release a new version without worry.

Sass modularisation

This modular structure would be impossible in pure CSS. CSS itself offers no mechanism for encapsulation. Fortunately, our team has been using Sass to write our CSS for a while now, and Sass offers some important mechanisms that help us modularise our code. So what we decided to create is actually a Sass mixin library (like Bourbon for example) using the following mechanisms:

Default variables

Setting global variables is essential for the framework, so we can keep consistent settings (e.g. font colours, padding etc.). Variables can also be declared with the !default flag. This allows the framework’s settings to be overridden when extending the framework:
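    // A sketch of the pattern (variable names are illustrative):
    // inside the framework, settings are declared with !default
    $font-color: #333 !default;

    // a theme assigns the variable *before* importing the framework,
    // so the framework's default is ignored
    $font-color: #111;
    @import "vanilla-framework";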

We’ve used this pattern in each of the Vanilla themes we’ve created.

Separating concerns into separate files

Sass’s @import feature allows us to encapsulate our code into files. This not only keeps our code tidier, but it means that anyone hoping to include some parts of our framework can choose which files they want:
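    // Import just the modules you need (file names are illustrative)
    @import "settings";
    @import "grid";
    @import "typography";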

Keeping everything in a mixin

When a Sass file is imported, any loose CSS is compiled directly to the output. But anything declared inside a @mixin will not be output unless you call the mixin.

Therefore, we set a goal of ensuring that all parts of our library can be imported without any CSS being output, so that you can import the whole module but just choose what you want output into your compiled CSS:
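    // Importing this file outputs no CSS at all...
    @mixin vf-buttons {
      button {
        background-color: $button-color;
      }
    }

    // ...until the project explicitly includes the parts it wants
    @include vf-buttons;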


To avoid conflicts with any local Sass setup, we decided to namespace all our mixins with the vf- prefix – e.g. vf-grid or vf-header.

Overall structure

Using the aforementioned techniques, we created one base framework, Vanilla Framework, which contains (at the time of writing) 19 separate “modules” (vf-buttons, vf-grid etc.). You can see the latest release of the framework on the project’s homepage, and see the framework in action on the demo page.

The framework can be customised by overriding any of the global settings inside your local Sass, as described above.

We then extended this basic framework with three branded themes which we will use across our sites:

You can of course create your own themes by extending the framework in the same way.

NPM modules

To make it easy to include Vanilla Framework in our projects, we needed to pick a package manager to use for installing it and tracking versions. We experimented with Bower, but in the end we decided to use the Node package manager. So now anyone can install and use any of the following packages:
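For example, to install the base framework or the Ubuntu theme mentioned above (these are the two package names mentioned in this post; the full list is on the project homepage):

    npm install vanilla-framework
    npm install ubuntu-vanilla-theme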

Hotlinks for compiled CSS

Although for in-depth usage of our framework we recommend that you install and extend it locally, we also provide hosted compiled CSS files, both minified and unminified, for the Vanilla framework itself and all Vanilla themes, which you can hotlink to if you like.

To find the links to the latest compiled CSS files, please visit the project homepage.

How to use the framework

The simplest way to use the framework is to hotlink to it. To do this, simply link to the latest version (minified or unminified) directly in your HTML:
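    <!-- Illustrative URL, using the versioned filename pattern described
         above - the real links are on the project homepage -->
    <link rel="stylesheet" href="https://assets.example/vanilla-framework-version-0.0.15.css" />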

However, if you want to take full advantage of the framework’s modular nature, you’ll probably want to install it directly in your project.

To do this, add the latest version of vanilla-framework to your project’s package.json as follows:
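    {
      "dependencies": {
        "vanilla-framework": "0.0.15"
      }
    }

(Here 0.0.15 stands in for whatever the latest version is.)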

Then, after you’ve npm installed, include the framework from the node_modules folder:
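    // The exact path inside the package may differ - check its layout
    @import "node_modules/vanilla-framework/scss/build";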

The future

We will continue to develop Vanilla Framework, with version 0.1.0 just around the corner. You can track our progress over on the project homepage and on Github.

In the near future we’ll switch and over to using it, and when we do we’ll definitely blog about it.

Peter Mahnke

Ubuntu is a big Open Source project and there are a lot of websites in our community. The web team at Canonical literally doesn’t even know how many sites there are. We have heard there are over 200 subdomains alone, but we know that there are many more that are owned by local groups and teams outside that single domain.

Traditionally most of our work has been on and, but over the years we have designed, often built, and occasionally been responsible for the content of a series of other key sites. And we have often attempted to provide on-brand versions of wiki and WordPress templates.

As the number of sites grew, we got tired of re-creating grids, templates, CSS all the time.

Enter guidelines

To resolve these issues, we created the Ubuntu web guidelines. Instead of sites of cobbled-together CSS and a borrowed grid, guidelines gave us something far more formalised and systematic: a grid, typography, core styles and patterns, all in line with our beautiful Ubuntu brand guidelines. We were not only able to maintain a whole set of sites from a single hosted set of CSS files, but others could borrow and use it easily. We even transitioned the guidelines to be responsive without breaking our sites. You can read more in our series of posts, Making responsive.

Exit guidelines

Around two years ago, the web team started supporting the design and development of some of Canonical’s cloud apps, including Juju, MAAS, and the Canonical OpenStack Autopilot installer. These apps have a different look and feel from, and they often have special requirements – for example, MAAS is likely to be run in data centres without internet access to things like fonts, images or CSS – that the guidelines did not natively support.

We looked at how to best adapt the guidelines to work with these web apps. We looked at how we were already making work – essentially overriding the Ubuntu-branded guidelines – and decided to change the entire approach.

Enter Vanilla

For Vanilla, we wanted to start over, but not have to rewrite everything. So our quick list of project goals was:

  • Minimise the changes to our existing HTML
  • Create a core theme that distilled the guidelines to their basic Ubuntu-ness
  • Make everything more modular, easy to add or remove components
  • Make it easy for anyone to create themes for each new project that could borrow from other themes
  • Create themes for Ubuntu and Canonical websites
  • Remove our reliance on JavaScript
  • Make it work stand-alone
  • Make it easy to build, develop and update
  • Invite other people both inside and outside Canonical to start using the framework

The future

So now we are close to releasing the first version of Vanilla. and will be moved over the coming months. Then we will look at moving other projects, like MAAS and Landscape, to the framework.

Please keep reading these posts – you can see Ant’s first post, Introducing Vanilla. And take a look at the project on GitHub and let us know what you think.

Benjamin Keyser

In the converged world of Unity-8, applications will work on small mobile screens, tablets and desktop monitors (with a mouse and keyboard attached) as if by magic. To achieve this transformation for your own app with little to no extra work required when considering the UI, simply design using grid units for a few predetermined virtual screen targets. Combined with Ubuntu off-the-shelf UI components built with convergence in mind, most of the hard work is done, freeing developers and designers to focus on what’s most important to their users.

What’s a grid unit? And why 40, 50, or 90 of them?

A grid unit (GU) is a virtual measure of screen space that’s independent of device hardware details like pixels or aspect ratio: those complexities are mapped under the covers by Ubuntu. Instead, by targeting just three ‘fixed’ virtual GU portrait widths—40, 50, and 90 GU— you’re guaranteed to be addressing the largest number of devices, including the desktop, to a high degree of design quality and consistency where relative spacing and content sizing just works.

The 40, 50, and 90 GU dimensions correspond to smaller smartphones, larger smartphones/phablets, and tablets respectively in portrait mode. These particular panel-widths weren’t chosen arbitrarily: they were selected by analyzing the most popular device specs on the market and picking the portrait dimensions that would embrace the largest number of possibilities most successfully, including for the desktop (more on that later).

For example, compact phones such as the BQ Aquarius E4.5 are best suited to the 40 GU-wide virtual portrait screen, offering the right balance of content to screen real estate for palm-sized viewing. For larger phones with more screen space such as the Meizu MX4, the 50 GU layout is most fitting, allowing more room for content. Finally, for edge-to-edge tablet portrait layouts for the N7 or N10, the 90 GU layout works best.

Try this exercise

Having trouble envisioning the system in action? Close your eyes and imagine two-dimensional graph paper divided into squares that can adapt according to just three simple rules:

  • It can only be 40, 50, or 90 whole units along the short edge but the long edge can be variable
  • The long edge (in landscape mode or on the desktop) will be the whole number of GUs that carves out the maximum area rectangle that will fit within any given device’s physical screen in landscape mode based on the physical dimension of the GU determined from portrait mode (in pixels)
  • The last rule is simple but key: the squares of the graph paper must always be square—the graph paper, just to push the image a bit too far—is made of something more like graphene than polypropylene (no squeezed or stretched GUs allowed.)


There is one additional factor that can impact the final available screen area, but it’s a bit of a technical convolution. The under-the-covers pixels to grid unit mapping can’t include fractional pixels (this may seem like an obvious point, admittedly). But at the end of the day, the user sees the largest possible version of the 40, 50, or 90 GU wide virtual screen that’s possible on any given device. That means that all you have to do as a designer or developer is plan for the virtual dimensions we’ve been talking about, and you’re assured your user is getting the best possible rendering.
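As a rough worked example of these rules (a sketch of the arithmetic, not Ubuntu’s actual implementation):

    # Map a physical portrait screen onto a 40/50/90 GU virtual screen
    def virtual_screen(width_px, height_px, gu_width):
        px_per_gu = width_px // gu_width       # whole pixels per GU, rounded down
        long_edge_gu = height_px // px_per_gu  # whole GUs along the long edge
        return px_per_gu, long_edge_gu

    # e.g. a 540x960px compact phone targeting the 40 GU layout:
    # 540 // 40 = 13px per GU, and 960 // 13 = 73 GU along the long edge
    print(virtual_screen(540, 960, 40))  # -> (13, 73)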

Though the system may seem abstract at first, the benefits are easy to understand from a developer or designer standpoint: it’s far more predictable and simpler to design for layouts that follow rules than to try to account for a universe of idiosyncratic device possibilities. In addition, by using these layouts as the foundation, the convergence goal is much more easily achieved.

What about landscape & desktop? Use building blocks

By assembling these key portrait views together, it’s far easier to achieve landscape and desktop layouts than ever before. For example, if your app lends itself to a two panel layout, simply join together 40 and 50 GU phone layouts (that you’ve already designed) to achieve a landscape layout (or even a portrait tablet layout!)

Similarly, switching from portrait to landscape mode on tablet—also a desktop-friendly layout—could be as simple as joining a 40 GU layout and a 90 GU layout for a total of 130 GU, which fits nicely within both 16:9 and 16:10 tablet landscape screens as well as on any desktop monitor.

Since landscape and desktop layouts are the least predictable due to device variations and manual stretching by users, you can designate one of your panel layouts to be of flexible width, filling the available space using one of these strategies:

  • Center the layout in the available space
  • Stretch or squeeze the layout to fit the available space
  • Combine these two, depending on the individual components within the layout

More complex layouts can also be achieved by joining three or more portrait layouts, too. For example, three 40 GU layouts can be joined side by side, which happen to fit perfectly into a 4:3 landscape tablet screen.

Columns, too

To help developers even further with one of the most common layouts—columnar or grid types—we’re adding a capability that maintains column-to-content size relationships across devices and the desktop the same way that type sizes are specified. This makes it very simple to achieve the proper content readability and density regardless of the device. For example, by specifying a “medium” sized column filled with “small” type, these relative relationships can be preserved throughout the converged-device experience without having to manually dig into pixel measurements.

The column capability can also adapt responsively to extra wide, variable landscape layouts, such as 16:10 aspect ratio tablets or manually stretched desktop layouts. This means that as more space becomes available as a user stretches the corners of the app window on the desktop, additional columns can be added on cue, providing more room for content.

Putting it all together across all form factors

By making screen dimensions virtual, we can minimize the vagaries of individual hardware specs that can frustrate device-convergent thinking and help developers focus more on their user’s needs. A combination of snap-together layouts, automated column layouts, and adaptive UI toolkit components like the header, list component, and bottom edge component help ensure users will experience a consistent, elegant journey from mobile to desktop and back again.




Robin Winslow

Despite some reservations, it looks like HTTP/2 is very definitely the future of the Internet.

Speed improvements

HTTP/2 may not be the perfect standard, but it will bring with it many long-awaited speed improvements to internet communication:

  • Sending of many different resources in the first response
  • Multiplexing requests to prevent blocking
  • Header compression
  • Keep connections alive
  • Bi-directional communication

Changes in long-held performance practices

I read a very informative post today (via Web Operations Weekly) which laid out all the ways this will change some deeply embedded performance principles for front-end developers – practices like concatenating assets and serving them from multiple domains.

Each of these practices is a hack which makes website setups more complex and more opaque, with the goal of speeding up front-end performance by working around limitations in HTTP. Fortunately, these somewhat ugly practices are no longer necessary with HTTP/2.

Importantly, Matt Wilcox points out that in an HTTP/2 world, these practices might actually slow down your website, for the following reasons:

  • If you serve concatenated CSS, Javascript or image files, it’s likely you’re sending more content than you strictly need to for each page
  • Serving assets from different domains prevents HTTP/2 from reusing existing connections, forcing it to open extra ones

But not yet…

This is all very exciting, but note that we can’t and shouldn’t start changing our practices yet. Even server-side support for HTTP/2 is still patchy, with nginx only promising full support by the end of 2015 (with Microsoft’s IIS, surprisingly, putting other servers to shame).

But of course the main limiting factor will, as usual, be browsers:

  • Firefox leads the way, with support since version 36
  • Chrome has support for spdy4 (the precursor to HTTP/2), but it isn’t enabled by default yet
  • Internet Explorer 11 supports HTTP/2 only in Windows 10 beta

As usual, the main limiting factor will be waiting for the market share of older versions of Internet Explorer to drop off. Braver organisations may want to push adoption of good technology by optimising for HTTP/2 early, accepting a deliberately slower experience for people on older browsers in favour of more up-to-date ones.

If you want to get really clever, you could serve a different website structure based on the user agent string, but this would really be a pain to implement and I doubt many people would want to do this.

Even with the most progressive strategy, I doubt anyone will be brave enough to drop decent HTTP/1 performance until at least 2016, as this is when nginx support should land; Windows 10 and therefore IE 11 will have had some time to gain traction and of course Internet Explorer market share in general will have continued to drop in favour of Chrome and Firefox.

TL;DR: We front-end developers should be ready to change our ways, but we don’t need to worry about it just yet.

Robin Winslow

In the design team we keep some projects in Launchpad (as canonical-webmonkeys) and some projects in Github (as UbuntuDesign), meaning we work in both Bazaar and Git.

The need to synchronise Github to Launchpad

Some of our Github projects need to be also stored in Launchpad, as some of our systems only have access to Launchpad repositories.

Initially we were converting these projects manually at regular intervals, but this quickly became too cumbersome.

The Bazaar synchroniser

To manage this we created a simple web-service project to synchronise Git projects to Bazaar. This script basically automates the techniques described in our previous article to pull down the Github repository, convert it to Bazaar and push it up to Launchpad at a specified location.

It’s a simple Python WSGI app which can be run directly or through a server that understands WSGI like gunicorn.

Setting up the server

Here’s a guide to setting up our bzr-sync project on a server somewhere to sync Github to Launchpad.

System dependencies

Install necessary system dependencies:
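    # Exact package names are an assumption - you need Bazaar, Git,
    # and pip to install the Python dependencies
    sudo apt-get install bzr git python-pip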

User permissions

First off, you’ll have to make sure you set up a user on whichever server is to run this service which has read access to your Github projects and write access to your Launchpad projects:
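    # As the user that will run the service (IDs are placeholders):
    ssh-keygen -t rsa                      # create an SSH key pair if needed
    bzr launchpad-login your-launchpad-id
    # then register ~/.ssh/ with both Github and Launchpad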

Cloning the project

Then you should clone the project and install dependencies. We placed it at /srv/bzr-sync but you can put it anywhere:
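    # The repository URL and requirements file are assumptions from the project name
    sudo git clone /srv/bzr-sync
    cd /srv/bzr-sync
    sudo pip install -r requirements.txt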

Preparing gunicorn

We should serve this over HTTPS, so our auth_token will remain secret. This means you’ll need an SSL certificate keyfile and certfile. You should get one from a certificate authority, but for testing you could just generate a self-signed certificate.

Put your certificate files somewhere accessible (like /srv/bzr-sync/certs/), and then test out running your server with gunicorn:
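    # The WSGI module name (app:application) is an assumption - check the README
    gunicorn app:application \
        --bind \
        --certfile /srv/bzr-sync/certs/cert.pem \
        --keyfile /srv/bzr-sync/certs/key.pem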

Try out the sync server

You should now be able to synchronise a Github repository with Launchpad by pointing your browser at:
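    https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}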


You should be able to see the progress of the conversion as command-line output from the above gunicorn command.

Add upstart job

Rather than running the server directly, we can setup an upstart job to manage running the process. This way the bzr-sync service will restart if the server restarts.

Here’s an example of an upstart job, which we placed at /etc/init/bzr-sync.conf:
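    # A minimal sketch - job name, paths and gunicorn arguments as above,
    # adjusted to your own setup
    description "bzr-sync server"

    start on runlevel [2345]
    stop on runlevel [016]

    respawn
    console log
    chdir /srv/bzr-sync

    exec gunicorn app:application --bind --certfile /srv/bzr-sync/certs/cert.pem --keyfile /srv/bzr-sync/certs/key.pem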

You can now start the bzr-sync server as a service:
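    sudo service bzr-sync start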

And output will be logged to /var/log/upstart/bzr-sync.log.

Setting up Github projects

Now to use this sync server to automatically synchronise your Github projects to Launchpad, you simply need to add a post-commit webhook to ping a URL of the form:
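    https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}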


Creating a webhook

In your repository settings, select “Webhooks and Services”, then “Add webhook”, and enter the following information:

  • Payload URL: https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}
  • Content type: “application/json”
  • Secret: (leave blank)
  • Select Just the push event
  • Tick Active
Saving a webhook

NB: Notice the Disable SSL verification button. By default, the hook will only work if your server has a valid certificate. If you are testing with a self-signed one then you’ll need to disable this SSL verification.

Now whenever you commit to your Github repository, Github should ping the URL, and the server should synchronise your repository into Launchpad.

Robin Winslow

Here in the design team we use both Bazaar and Git to keep track of our projects’ history.

We quite often end up converting our projects from Bazaar to Git, or vice-versa. Here are some tips on how to do that.

To convert revision history between Git and Bazaar, we will use their respective fastimport features.

Install bzr-fastimport

In either case, you need the fastimport plugin for Bazaar, which installs both bzr fast-import and bzr fast-export:
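    sudo apt-get install bzr-fastimport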

Bazaar to Git

To convert a Bazaar branch to Git, open a Bazaar branch of your project and do the following:
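    # (Commands as described in Astrofloyd's blog, credited below)
    git init
    bzr fast-export --plain . | git fast-import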

Now you should have all the revision history for that Bazaar branch in Git:
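    git log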

(From Astrofloyd’s blog)


Git to Bazaar

Converting from Git to Bazaar is slightly different. Because Bazaar stores branches in sub-folders, while Git stores branches all in the same directory, when you convert a Git repository to Bazaar, it will create a directory tree for the branches:
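    # (The command form documented on the Bazaar wiki, credited below)
    git fast-export --all | bzr fast-import - bzr-repo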

bzr-repo will now contain a folder for each branch that was in your Git repository. You’re probably most interested in trunk, which will be at bzr-repo/trunk, or perhaps bzr-repo/trunk.remote:
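    cd bzr-repo/trunk
    bzr log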

(From the Bazaar wiki)


Keeping a project in both Git and Bazaar

You may wish to keep a project in both Git and Bazaar.


Create ignore files for both systems

As your project may be used in either Git or Bazaar, you should create practically duplicate .gitignore and .bzrignore files, the only difference being that the .bzrignore should ignore the .git directory, and the .gitignore should ignore the .bzr directory. You should also make sure you ignore the bzr-repo directory – e.g.:
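For example, two near-identical files, each ignoring the other system’s metadata plus the bzr-repo conversion directory:

    # .gitignore
    .bzr
    bzr-repo

    # .bzrignore
    .git
    bzr-repo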

And keep both ignore files in all versions of the project.

Only work in one repository

It is not practical to be doing your actual work in both systems, because converting from one to the other will overwrite any history in the destination repository. For this reason you need to choose to do all your work in either Git or Bazaar, and then regularly convert it to the other using the above conversion instructions.

Steph Wilson

Community members at the Sprint

Victor and Andrew are two inspiring Community developers that have devoted their spare time to contribute to the Ubuntu Touch Music App team. I sat down with them during the Washington Device Sprint in October where they told us how they drew inspiration from the Design Team, and what drives them to contribute to Ubuntu.

You can read more about Victor and Andrew through their blogs, where they post interesting articles on their work and personal projects.

From left to right: Riccardo, Andrew, Filippo and Victor

Hey guys, so when did you first get involved with Ubuntu?

Victor: “I started to contribute to the Ubuntu platform in March/April 2013 where I noticed there was no music app, so I started putting one together. It was pretty sketchy to start with, but it worked. I didn’t have a device to test it on so I mostly tested it using the platform on my desktop – so things were a bit hit and miss.

There was also another developer doing a music app, and at the time there was no core capability of playing music through an application for the proposed devices. Michael Hall (Open Source Software Developer) and Alan Pope (Engineering Manager) pulled Daniel Holm and me together, and we merged our code bases and started the music core app.

We didn’t have as much time as other applications, so we more or less sprinted like we are now to get things done. It was very spec-driven and specific, which was helpful, but sometimes it was hard to put together a full vision of what the designers wanted. So now we are redoing it from the feedback we have gathered, and it’s going pretty well. It’s a little more agile than it was previously, so we can do things faster, but it’s been fun the whole time. It’s nice to work on an application that people need and that gets visibility – I never get sick of hacking on it.”

Andrew: “I’m from North London, and I’m currently studying Software Engineering at Oxford Brookes University. I was working on my own music app where I just taught myself how to do things using my own framework; then I saw that these guys at Ubuntu had a similar problem to me, so I thought I’d provide a patch. It built up from there, and now here I am!”

Steph: “It’s amazing that someone can be in their bedroom writing codes and then suddenly your app is on a phone!”

Victor: “The other great thing about it is the Community Managers make it easy and apparent that you can contribute to different projects.”

Andrew: “Yeah someone just got in contact with me and asked me if I wanted to join the team and told me how open source projects work.”

What inspired you to contribute?

Victor: “A lot of my original inspiration was from what the Design Team had previously done. The previous iteration design spec was very large for the music app and it wasn’t as future driven, more just visually pleasing.”

Do you find it hard to implement some designs?

Victor: “We try to make it as close to the designs as we can, but obviously there are compromises. There were some very flow-driven things, such as sized cover art, that were hard to implement, but we can implement them now. It’s nice because they use the same patterns as other applications.”

Andrew: “Usually we just tell the designer that this is just not possible.”

What is it about open source that you like?

Victor: “I have been a user since 2006, but I have never been a large open source developer myself. It is hard to get involved with when you don’t know what you want to contribute to.”

Andrew: “Most applications are so developed already that you would have to learn the existing code base and develop on it, whereas if you start anew you know everything from the get-go. Seeing your application on the device and knowing it can be on other devices too is pretty exciting!”

How does it fit into your lifestyles?

Victor: “I’m a software engineer as well, so I write a lot of code. I hadn’t really done QML or Qt until I started doing these applications with the Ubuntu platform, so it has been a learning experience. I am learning something new from experienced people.”

Have you made any other applications for Ubuntu?

Victor: “I’ve made a few games like Piano Tiles, and another that’s kind of like a clone of that but in QML – it’s a simple app but a good time-waster haha.”

How much time does it take you to develop an app?

Victor: “It took me like a day. Andrew made a game last night! In 2 hours…”

Andrew: “Yeah we did! Loads of us at the sprint just got together in a room and made a few games.”

So you’re used to working remotely, does that put a barrier against things?

Andrew: “It sometimes delays things. However, you start to build this image of a person, so when you actually get to meet them you start to understand how they are and what makes them tick.”

Victor: “It depends on how personal it really needs to be. If you are collaborating together and it’s mostly writing code and coming up with ideas, it doesn’t necessarily need to be face-to-face. It is obviously nicer, but you also get the benefit if the other person is a night owl in a different country – sometimes our hours overlap, and we’re working in two different chunks of time.”

Andrew: “There’s usually someone on IRC to speak to, it’s like a 24 hour operation haha.”

What’s the vibe like in the Community at the moment?

Victor: “It’s a pretty small Community at the moment, with close ties. Everyone is receptive to feedback – I don’t think a larger Community would be as receptive.”

Steph: “Thanks for your time guys!”

Here’s a sneaky preview of the music app, more will be revealed soon:

Album detail

Landing page

Steph Wilson

Last week was a week of firsts for me: my first trip to America, my first Sprint and my first chili-dog.

Introducing myself as the new (and only) Editorial and Web Publisher, I dove head first into the world of developers, designers and Community members. It was a very absorbing week, which afterwards felt more like a marathon than a sprint.

After being grilled by Customs, we finally arrived at Tyson’s Corner, where 200 or so other developers, designers and Community members had gathered for the Devices Sprint. It was a great opportunity for me to see how people from each corner of the world contribute to Ubuntu and share their passion for open source. I found it especially interesting to see how designers and developers, given their different mindsets, collaborated with each other.

The highlight for me was talking to some of the Community guys, it was really interesting to talk to them about why and how they contribute from all corners of the world.

(From left to right: Riccardo, Andrew, Filippo and Victor)

(The main ballroom)

(Design Team dinner. From the left: TingTing, Andrew, John, Giorgio, Marcus, Olga, James, Florian, Bejan and Jouni)

I caught up with Olga and Giorgio to share their thoughts and experiences from the Sprint:

So how did the Sprint go for you guys?

Olga: “It was very busy and productive in terms of having face time with development, which was the main reason we went, as we don’t get to see them that often.

For myself personally, I have a better understanding of things in terms of what the issues are and what is needed, and also what can or cannot be done in certain ways. I was very pleased with the whole sprint. There was a lot of running around between meetings, and I tried to use the time in-between to catch up with people. On the other hand, Development also approached the Design Team for guidance, opinions and a general catch-up/chat, which was great!

Steph: “I agree, I found it especially productive in terms of getting the right people in the same room and working face-to-face, as it was a lot more productive than sharing a document or talking on IRC.”

Giorgio: “Working remotely with the engineers works well for certain tasks, but the Design Team sometimes needs to achieve a higher bandwidth through other means of communication, so these sprints every 3 months are incredibly useful.

What a Sprint allows us to do is to put a face to the name and start to understand each other’s needs, expectations and problems, as stuff gets lost in translation.

I agree with Olga, this Sprint was a massive opportunity to shift to much higher level of collaboration with the engineers.

What was your best moment?

Giorgio: “My best moment was when the engineers’ perception of the efforts of the Design Team changed. My goal is to better this collaboration process with each Sprint.”

Did anything come up that you didn’t expect?

Giorgio: “Gaming was an underground topic that came up during the Sprint. There was a nice workshop on Wednesday on it, which was really interesting.”

Steph: “Andrew, a Community developer I interviewed, actually made two games one evening during the Sprint!”

Olga: “They love what they do, they’re very passionate and care deeply.”

Do you feel as a whole the Design Team gave off a good vibe?

Giorgio: “We got a good vibe, but it’s still a work in progress: we need to raise our game and become even better. This has been a long process, as the design of the Platform and Apps wasn’t simply done overnight. However, we are now at a mature enough stage of the process that we can afford to engage with the Community more. We are all in this journey together.

Canonical has a very strong engineering nature, as it was founded by engineers and driven by them, and it has evolved because of this. As a result, over the last few years the design culture has begun to complement that. Now they expect a steer from the Design Team on a number of things, for example responsive design and convergence.

The Sprint was good, as we finally got more of a perception on what other parties expect from you. It’s like a relationship, you suddenly have a moment of clarity and enlightenment, where you start to see that you actually need to do that, and that will make the relationship better.”

Olga: “The other parties and the Development Team started to understand that initiating communication is not just the responsibility of the Design Team, but an engagement we all need to be involved in.”

In all it was a very productive week: everyone worked hard to push for the first release of the BQ phone, and the Design Team got some positive feedback and shout-outs :)

(Unicorn hard at work)

There was a bit of time for some sightseeing too…

It would have been rude not to see what the capital had to offer, so on the weekend before the sprint we checked out some of Washington’s iconic sceneries.

(The Washington Monument)

We saw most of the important government buildings, like the White House, the Washington Monument and Lincoln’s statue. Seeing them in the flesh was spectacular; however, I half expected a UFO to appear over the Monument like in ‘Independence Day’, and for Abraham Lincoln to suddenly get up off his chair like in the movie ‘Night at the Museum’ – unfortunately none of that happened.

(The White House)

D.C. isn’t as buzzing as London, but it definitely has a lot of character, embodying an array of thriving ethnic pockets that represent African, Asian and Latin American cultures, along with a lot of Italians. Washington is known for getting its sax on, so a few of the Design Team and I decided to check out the night scene and hit a local jazz club in Georgetown.

(Twins Jazz Club)

On the Sunday, we decided to leave the hustle and bustle of the city and venture out to the beautiful Great Falls Park, only 10–15 minutes from the hotel. The park is located in northern Fairfax County, along the banks of the Potomac River, and is an integral part of the George Washington Memorial Parkway. Its creeks and rapids made for some great selfie opportunities…

(Great Falls Park)

Luca Paulina

A few weeks ago we launched ‘Machine view’ for Juju, a feature designed to allow users to easily visualise and manage the machines running in their cloud environments. In this post I want to share with you some of the challenges we faced and the solutions we designed in the process of creating it.

A little bit about Juju…
For those of you that are unfamiliar with Juju, a brief introduction. Juju is a software tool that allows you to design, build and manage application services running in the cloud. You can use Juju through the command-line or via a GUI and our team is responsible for the user experience of Juju in the GUI.

First came ‘Service View’
In the past we have primarily focused on Juju’s ‘Service view’ – a virtual canvas that enables users to design and connect the components of their given cloud environment.


This view is fantastic for modelling the concepts and relationships that define an application environment. However, each component or service block can have anything from one unit to hundreds or thousands of units, depending on the scale of the environment – and before machine view, units meant machines.

The goal of machine view was to surface these units and enable users to visualise, manage and optimise their use of machines in the cloud.

‘Machine view’: design challenges
There were a number of challenges we needed to address in terms of layout and functionality:

  • The scalability of the solution
  • The glanceability of the data
  • The ability to customise and sort the information
  • The ability to easily place and move units
  • The ability to track changes
  • The ability to deploy easily to the cloud

I’ll briefly go into each one of these topics below.

Scalability: Environments can be made up of a couple of machines or thousands. This means that giving the user a clear, light and accessible layout was incredibly important – we had to make sure the design looked and worked great at both ends of the spectrum.

Machine view


Glanceability: Users need simple comparative information to help them choose the right machine at a glance. We designed and tested hundreds of different ways of displaying the same data and eventually ended up with an extremely cut-back listing which was clean and balanced.

A tour of the many incarnations and iterations of Machine view…

The ability to sort and customise: As it was possible and probable that users would scale environments to thousands of machines, we needed to provide the ability to sort and customise the views. Users can use the menus at the top of each column to hide information from view and customise the data they want visible at a glance. As users become more familiar with their machines they could turn off extra information for a denser view of their machines. Users are also given basic sorting options to help them find and explore their machines in different ways.


The ability to easily place and move units: Machine view is built around the concept of manual placement – the ability to co-locate (put more than one) item on a single machine, or to define specific types of machines for specific tasks (as opposed to automatic placement, where each unit is given a machine of a pre-determined specification). We wanted to enable users to create the most optimised machine configurations for their applications.

Drag and drop was a key interaction that we wanted to exploit for this interface because it significantly simplifies the process of manually placing units. The three-column layout aided the use of drag and drop: users can pick up units that need placing on the left-hand side and drag them to a machine in the middle column or a container in the third column. The headers also change to reveal drop zones, allowing users to create new machines and containers in one fluid action, keeping all of the primary interactions in view and accessible at all times.

Drag and drop in action on machine view

The ability to track changes: We also wanted to expose the changes being made throughout users’ environments as they went along, and allow them to commit batches of changes all at once. Deciding which changes to expose, and designing the uncommitted notification, was difficult: we had to make sure the notifications were not seen as repetitive, that they were identifiable, and that they could be used throughout the interface.



The ability to deploy easily to the cloud: Before machine view it was impossible to design an entire environment before sending it to the cloud. The deployment bar is a new, ever-present canvas element that rationalises all of the changes made into a neat listing; it is also where users can deploy or commit those changes. Look out for more information about the deployment bar in another post.

The change log exposed

The deployment summary

We hope that machine view will really help Juju users by increasing the level of control and flexibility they have over their cloud infrastructure.

This project wouldn’t have been possible without the diligent help of the Juju GUI development team. Please take a look and let us know what you think. Find out more about Juju and Machine View, or take it for a spin.

Read more
Giorgio Venturi

Canonical and Ubuntu at dConstruct

Brighton is not just a lovely seaside town, mostly known for being overcrowded in summer by Londoners in search of a bit of escapism, but also the home of a thriving community of designers, makers and entrepreneurs. Some of these people run dConstruct, a gathering where creative minds of all sorts converge every year to discuss important themes around digital innovation and culture.

When I found out that we were sponsoring the conference this year, I promptly jumped in to help my colleagues in the Phone, Web and Juju design teams. Our stand was situated in the foyer of the Brighton Dome, sporting the orange banner of Ubuntu and a number of origami unicorns.

The Ubuntu Stand

Origami Unicorns

We had an incredibly positive response from the attendees: our stand was teeming with Ubuntu enthusiasts who were really keen to check our progress with the phone. We had a few BQ phones on display where we showed the new features and designs.

Testing the phone

For us, it was a great occasion to gather fresh impressions of the user experience on the phone and across a variety of apps. After a few moments, people started to understand the edge interactions and began to swipe left and right, giving positive feedback on the responsiveness of the UI. Our pre-release models of BQ phones don’t have the final shell and still display softkeys; as a result, some people found this confusing. We took the opportunity to quickly design our own custom BQ phone using a bunch of Ubuntu stickers… and voilà, problem solved! ;)

Ubuntu phone - customised

Our ‘Make your Unicorn’ competition had a fantastic response. To celebrate the coming release of Utopic Unicorn and of the BQ phone, the maker of the best origami unicorn would be awarded a new phone. The crowd did not hesitate to tackle the complex paper-bending challenge and came up with a bunch of creative outcomes. We were very impressed to see how many people managed to complete the instructions, as I didn’t manage to get beyond step 15…

Ubuntu fans


Read more
Luca Paulina

Come and meet us at dConstruct

Ubuntu is sponsoring the dConstruct “Living with the network” event on the 5th of September at the Brighton Dome. Stop by for a chat with the team, grab some goodies and enter our competition for a chance to win an Ubuntu Phone.


Read more
Robin Winslow

On release day we can get up to 8,000 requests a second to ubuntu.com from people trying to download the new release. In fact, last October (13.10) was the first release day in a long time that the site didn’t crash under the load at some point during the day (huge credit to the infrastructure team). Ubuntu.com has been running on Drupal, but we’ve been gradually migrating it to a more bespoke Django-based system. In March we started work on migrating the download section in time for the release of Trusty Tahr. This was a prime opportunity to look for ways to reduce some of the load on the servers.

Choosing geolocated download mirrors is hard work for an application

When someone downloads Ubuntu from ubuntu.com (on a thank-you page), they are actually sent to one of the 300 or so mirror sites nearby.

To pick a mirror for the user, the application has to:

  1. Decide from the client’s IP address what country they’re in
  2. Get the list of mirrors and find the ones that are in their country
  3. Randomly pick them a mirror, while sending more people to mirrors with higher bandwidth

This process is by far the most intensive operation on the whole site, not because these tasks are particularly complicated in themselves, but because this needs to be done for each and every user – potentially 8,000 a second – whereas every other page on the site can be aggressively cached to prevent most requests from hitting the application itself.

For the site to be able to handle this load, we’d need to load-balance requests across perhaps 40 VMs.

Can everything be done client-side?

Our first thought was to embed the entire mirror list in the thank-you page and use JavaScript in the users’ browsers to select an appropriate mirror. This would drastically reduce the load on the application, because the download page would then be effectively static and cache-able like every other page.

The only way to reliably get the user’s location client-side is with the geolocation API, which is only supported by 85% of users’ browsers. Another slight issue is that the user has to give permission before they can be assigned a mirror, which would slightly hinder their experience.

This solution would inconvenience users just a bit too much. So we found a trade-off:

A mixed solution – Apache geolocation

mod_geoip2 for Apache can apply server rules based on a user’s location and is much faster than doing geolocation at the application level. This means that we can use Apache to send users to a country-specific version of the download page (e.g. the German desktop thank-you page) by adding &country=DE to the end of the URL.

These country-specific pages contain the list of mirrors for that country, and each one can now be cached, vastly reducing the load on the server. Client-side JavaScript randomly selects a mirror for the user, weighted by the bandwidth of each mirror, and kicks off their download, without the need for client-side geolocation support.
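As an illustration, a bandwidth-weighted random pick only takes a few lines of JavaScript. This is a minimal sketch of the technique, not the actual ubuntu.com code, and the shape of the mirror list is assumed:

// Minimal sketch of a bandwidth-weighted random pick (illustrative only).
// Assumed shape: mirrors = [{ url: 'http://…', bandwidth: 100 }, …]
function pickMirror(mirrors) {
    var total = 0;
    var i;
    for (i = 0; i < mirrors.length; i++) {
        total += mirrors[i].bandwidth;
    }
    // Pick a random point in the total bandwidth: mirrors with more
    // bandwidth cover a wider interval, so they are chosen more often.
    var r = Math.random() * total;
    for (i = 0; i < mirrors.length; i++) {
        r -= mirrors[i].bandwidth;
        if (r <= 0) {
            return mirrors[i].url;
        }
    }
    return mirrors[mirrors.length - 1].url; // guard against rounding error
}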

This solution was successfully implemented shortly before the release of Trusty Tahr.


Read more
Tom Macfarlane

Mobile Asia Expo 2014

Following the success of our new stand design at MWC earlier this year, we applied the same design principles to the Ubuntu stand at last month’s Mobile Asia Expo in Shanghai.


With increased floor space compared to last year, and a new stand location that was approachable from three key directions, we were faced with a few new design challenges:

  • How to effectively incorporate the existing 7m-wide banners into the new 8m-wide stand?
  • How to make the stand open and approachable from three sides, with optimum use of floor space, while maintaining as much storage space as possible?
  • How to maintain our strong brand presence after any necessary structural changes?

Proposed layout ideas



Final layout

The final design utilised the maximum floor space and incorporated the positioning of our bespoke demo pods, which proved successful at MWC, along with strong branding featuring our folded-paper background, large graphics showcasing app and scope designs, and a new aisle banner. The main stand banners were positioned in an alternating arrangement, aligned to the left and to the right above the stand.


Aisle banner




Read more
Inayaili de León Persson

Making ubuntu.com responsive: final thoughts

This post is part of the series ‘Making ubuntu.com responsive’.

There are several resources out there on how to create responsive websites, but they tend to go through the process in an ideal scenario, where the project starts from a blank slate.

That’s why we thought it would be nice to share the steps we took in converting our main website and framework, ubuntu.com, into a fully responsive site, with all the constraints that come from working on legacy code, with very little time available, while making sure that the site was kept up-to-date and responding to the needs of the business.

Before we started this process, the idea of converting ubuntu.com seemed like a herculean task. It was only possible because we divided the work into several stages and smaller projects, tightened the scope, and kept things simple.

We learned a lot throughout this past year or so, and there is a lot more we want to do. We’d love to hear about your experience of similar projects, suggestions on how we can improve, tools we should look into, books we should read, articles we should bookmark, and things we should try, so please do leave us your thoughts in the comments section.

Here is the list of all the posts in the series:

  1. Intro
  2. Setting the rules
  3. Making the rules a reality
  4. Pilot projects
  5. Lessons learned
  6. Scoping the work
  7. Approach to content
  8. Making our grid responsive
  9. Adapting our navigation to small screens
  10. Dealing with responsive images
  11. Updating font sizes and increasing readability
  12. Our Sass architecture
  13. Ensuring performance
  14. JavaScript considerations
  15. Testing on multiple devices

Note: I will be speaking about making ubuntu.com responsive at the Responsive Day Out conference in Brighton, on the 27th June. See you there!

Read more
Inayaili de León Persson

This post is part of the series ‘Making ubuntu.com responsive’.

When working on a responsive project you’ll have to test on multiple operating systems, browsers and devices, whether you’re using emulators or the real deal.

Testing on actual devices is preferable — it’s hard to emulate the feel of a device in your hand and the interactions and gestures you can do with it — and more enjoyable, but budget and practicality will never allow you to get hold of, and test on, all the devices people might use to access your site.

We followed a few very simple steps, which anyone can emulate, to decide which devices to test on.


You can quickly get a grasp of which operating systems, browsers and devices your visitors are using to get to your site just by looking at your analytics.

By doing this you can establish whether some of the more troublesome ones are worth investing time in. For example, if only 10 people accessed your site through Internet Explorer 6, perhaps you don’t need to provide a PNG fallback solution. But you might also get a few less pleasant surprises and find that a hard-to-cater-for browser is one of your customers’ favourites.

When we did our initial analysis we didn’t find any real surprises. However, due to the high volume of traffic that ubuntu.com sees every month, even a very small percentage represented a large number of people that we just couldn’t ignore. It was important to keep this in mind as we defined which browsers, operating systems and devices to test on, and which issues we’d fix where.

Browsers (between 11 February and 13 March 2014)

Browser              Percentage usage
Chrome               46.88%
Firefox              36.96%
Internet Explorer    7.54% (total)
    IE 11            41.15% (of IE usage)
    IE 8             22.96%
    IE 10            17.05%
    IE 9             14.24%
    IE 7             2.96%
    IE 6             1.56%
Safari               4.30%
Opera                1.68%
Android Browser      1.04%
Opera Mini           0.45%

Operating systems (between 11 February and 13 March 2014)

Operating system     Percentage usage
Windows              52.45% (total)
    Windows 7        60.81% (of Windows usage)
    Windows 8.1      14.31%
    Windows XP       13.3%
    Windows 8        8.84%
    Windows Vista    2.38%
Linux                35.4%
Macintosh            6.14%
Android              3.32% (total)
    Android 4.4.2    19.62% (of Android usage)
    Android 4.3      15.51%
    Android 4.1.2    15.39%
iOS                  1.76%

Mobile devices (between 12 May and 11 June 2014) — 5.41% of total visits

Device                   Percentage usage (of the 5.41%)
Apple iPad               17.33%
Apple iPhone             12.82%
Google Nexus 7           3.12%
LG Nexus 5               2.97%
Samsung Galaxy S III     2.01%
Google Nexus 4           2.01%
Samsung Galaxy S IV      1.17%
HTC M7 One               0.92%
Samsung Galaxy Note 2    0.88%
Not set                  16.66%

After analysing your numbers, you can also define which combinations (operating system and browser) to test on.

Go shopping

Based on the most popular devices people were using to access our site, we made a short list of the ones we wanted to buy first. We weren’t married to the analytics numbers though: the idea was to cover a range of screen sizes and operating systems and expand as we went along.

  • Nexus 7
  • Samsung Galaxy S III
  • Samsung Galaxy Note II

We opted not to splash out on an iPad or iPhone, as there are quite a few around the office (retina and non-retina), and the money we saved meant we could buy other, less common devices.

Part of our current device suite.

When we started to get a few bug reports from Android Gingerbread and Windows Phone users, we decided we needed phones with those operating systems installed. This was our second batch of purchases:

  • Samsung Galaxy Y
  • Kindle Fire HD (Amazon was having a sale at the time we made the list!)
  • Nokia Lumia 520
  • LG G2

And, last but not least, we use a Nexus 4 to test Ubuntu on phones.

We didn’t spend much in any of our shopping sprees, but have managed to slowly build an ever-growing device suite that we can test our sites on, which is invaluable when working on responsive projects.


Some operating systems and browsers are trickier to test on native devices. We have a BrowserStack account that we tend to use mostly to test on older Windows and Internet Explorer versions, although we also test Windows on virtual machines.

The BrowserStack website.


We have to confess we’re not using any special software that synchronises testing and interactions across devices. We haven’t really felt the need for that yet, but at some point we should experiment with a few tools, so we’d like to hear suggestions!

Browser support

We prefer to think of different levels (or ways) of access to the content rather than browser support. The overall rule is that everyone should be able to get to the content, and bugs that obstruct access are prioritised and fixed.

As much as possible, and depending on resources available at the time, we try to fix rendering issues in browsers and devices used by a higher percentage of visitors: degree of usage defines the degree of effort fixing rendering bugs.


Read the final post in this series: “Making ubuntu.com responsive: final thoughts”


Read more
Anthony Dillon


This post is part of the series ‘Making ubuntu.com responsive’.

The JavaScript used on ubuntu.com is very light. We limit its use to small functional elements of the web style guide, which act to enhance the user experience but are never required to deliver the content the user is there to consume.

At Canonical we use YUI as our JavaScript framework of choice. We have many years of experience using it for our websites and web apps, and therefore have a large knowledge base to fall back on. We have a single core.js, which contains a number of functions that are called on when required.

Below I will discuss some of the functions and workarounds we have provided in the web style guide.

Providing fallbacks

When considering our transition from PNGs to SVGs across the site, we provided a fallback for background images with Modernizr, resetting the background image with the .no-svg class on the body. Our approach to fallback replacement for images in the markup was a JavaScript snippet from CSS Tricks – SVG Fallbacks, which I converted to YUI:
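
A rough reconstruction of that snippet, assuming Modernizr is available as a global, looks something like this:

// Rough reconstruction, not the original snippet: when the browser
// lacks SVG support, swap each .svg image source for its .png version.
if (typeof Modernizr !== 'undefined' && !Modernizr.svg) {
    YUI().use('node', function (Y) {
        Y.all('img[src$=".svg"]').each(function (img) {
            // Same path and filename, but with a .png extension
            img.set('src', img.get('src').replace('.svg', '.png'));
        });
    });
}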

The snippet above checks if Modernizr exists in the namespace, then interrogates the Modernizr object for SVG support. If the browser does not support SVGs, we loop through each image with .svg in its src and replace the src with the same path and filename but a .png extension. This means every SVG needs to have a PNG version at the same location.

Navigation and fallback

The mobile navigation on ubuntu.com uses JavaScript to toggle the menu open and closed. We decided to use JavaScript because it’s well supported. We explored using :target as a pure CSS solution, but this selector isn’t supported in Internet Explorer 7, which represented a fair chunk of our visitors.

The navigation on ubuntu.com, on small screens.

For browsers that don’t support JavaScript we resort to displaying the “burger” icon, which acts as an in-page anchor to the footer which contains the site navigation.
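
For illustration, the toggle itself needs only a few lines of YUI. This is a sketch with hypothetical class names (.nav-toggle, .nav-primary), not our exact production code:

// Illustrative sketch of the menu toggle; class names are hypothetical.
YUI().use('node', 'event', function (Y) {
    Y.one('.nav-toggle').on('click', function (e) {
        e.preventDefault();
        // Opening and closing is just a class swap; the CSS does the rest
        Y.one('.nav-primary').toggleClass('active');
    });
});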

Equal height

As part of the guidelines project we needed a way of setting a number of elements to the same height. We would love to use flexbox to do this, but the browser support is not there yet. Therefore we developed a small JavaScript solution:
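
A sketch of that solution, under the same assumptions as the snippet above, might look like this (the exact production code may differ):

// Sketch of the equal-height helper described below; illustrative only.
YUI().use('node', function (Y) {
    Y.all('.equal-height').each(function (group) {
        var children = group.all('> div, > li');
        var tallest = 0;
        // Measure the tallest child…
        children.each(function (child) {
            tallest = Math.max(tallest, child.get('offsetHeight'));
        });
        // …then set every child to that height
        children.setStyle('height', tallest + 'px');
    });
});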

This function finds all elements with an .equal-height class, looks for their child divs or list items, measures the tallest one, and then sets all of the children to that height.

Using combined YUI

One of the obstacles we discovered while working on this project was that YUI loads the modules it requires from an http (non-secure) domain. This of course causes issues on any site that is hosted on a secure domain. We definitely didn’t want to restrict the use of the web style guide to non-secure sites, so we needed to combine all the required modules into a single YUI file.

To combine your own YUI, visit the YUI configurator, add the modules you require, and copy the code from the Output Console panel into your own hosted file.

Final thoughts

Obviously we had a fairly easy time making our JavaScript responsive, as we only use the minimum required as a general principle on our site. But by integrating tools like Modernizr into our workflow and keeping on top of CSS browser support, we can keep what we do lean and current.

Read the next post in this series: “Making ubuntu.com responsive: testing on multiple devices”


Read more
Anthony Dillon

This post is part of the series ‘Making ubuntu.com responsive’.

Performance has always been one of the top priorities when it came to building the responsive ubuntu.com. We started with a list of performance snags and worked to improve each one as much as possible in the time we had. Here is a quick run through of the points we collected and how we managed to improve them.

Asset caching

We now have a number of websites using our web style guide. Because of this, we needed to deliver assets on both http and secure https domains. We decided to build an asset server to support the guidelines and other sites that require asset hosting.

This gave us the ability to increase the far future expires (FFE) of each file. By doing so, the file is cached by the user’s browser and not re-downloaded on every visit, giving a much faster round trip. But because we still need to be able to update a single file, we cannot set the FFE too far in the future. We plan to resolve this with a new and improved assets system, which is currently under development.

The new asset system will have an internal frontend for uploading a binary file. This will provide a link to the asset with a six-character hexadecimal string attached to the file name.


The new system removes the ability to edit or update a file: you can only upload a new one and change the link in the markup. This guarantees that an asset at a given URL stays the same forever, so it can be cached indefinitely.
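
As a sketch of how such names could be generated — this is illustrative Node.js and an assumption on our part, since the post doesn’t specify how the six hexadecimal characters are derived:

// Illustrative only: derive a six-character hex prefix from the file
// contents, so the name (and URL) changes whenever the file changes.
var crypto = require('crypto');

function assetName(filename, contents) {
    var hash = crypto.createHash('sha1')
        .update(contents)
        .digest('hex')
        .slice(0, 6);
    return hash + '-' + filename;
}

// e.g. assetName('logo.svg', buffer) might return '3fa2c1-logo.svg'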

Minification and concatenation

We introduced a minification and concatenation step to the build of the web style guide. This saves precious bytes and reduces the number of requests performed by each page.

We use the sass ruby gem to generate minified and concatenated CSS in production. We also run the small amount of JavaScript we have through UglifyJS before delivering to production.

Compressed images

Images were the main issue when it came to performance.

We had a look at the file sizes of some of our key images (like the ones in the tablet section of the site) and were shocked to discover we hadn’t been treating our visitors’ bandwidth kindly.

After analysing a handful of images, we decided to go through our assets folder and flag the images that were over 100KB as a first pass.

One of the most time-consuming jobs in this project was converting every image we could to SVG. This meant recreating pictograms and illustrations as vectors from the earlier PNGs. Any images that could not be recreated as vector graphics were heavily compressed, which squeezed an alarming amount out of the original files.

We continued this for every image on the site; the total reduction across the site was 7.712MB.

Reduce required fonts

We currently load a large selection of the Ubuntu font family:

<link href='//fonts.googleapis.com/css?family=Ubuntu:400,300,300italic,400italic,700,700italic%7CUbuntu+Mono' rel='stylesheet' type='text/css' />

The designers are exploring current and planned usage patterns to discover which weights are unneeded. Since we moved our base font style from normal to light a few months ago, we rarely use the bold weight (700) anymore, resorting to normal (400) for highlighting text.

Once we determine which weights we can drop, we will be able to make significant savings, as seen below:

Reducing loaded fonts: before and after.

Using SVG

Taking the leap to SVGs over PNGs caused a number of issues. We decided to load SVGs as separate files, as opposed to using inline SVGs, to keep our markup clean and easy to read and update. This meant we needed to provide four differently coloured images for each pictogram.


We introduced Modernizr to give us an easy way to detect browsers that do not support SVGs and replace each image with a PNG of the same path and name.

Remove unnecessary enhancements

We explored a parallax effect for our site’s background with JavaScript. It worked well on normal-resolution screens but lagged on retina displays, so we decided not to ship it and set the background position to static instead — user experience is always paramount and trumps visual enhancements.

Future improvements

One of the things on our roadmap is to remove unused styles remaining in the stylesheets. There are a number of solutions for this, such as grunt-uncss.
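
We haven’t settled on a setup yet, but a grunt-uncss configuration might look something like this (the file paths are illustrative):

// Illustrative Gruntfile.js snippet for grunt-uncss; paths are made up.
module.exports = function (grunt) {
    grunt.loadNpmTasks('grunt-uncss');

    grunt.initConfig({
        uncss: {
            dist: {
                files: {
                    // Keep only the styles these pages actually use
                    'build/css/styles.clean.css': ['index.html', 'download.html']
                }
            }
        }
    });
};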


There is still a lot to do, but we have definitely broken the back of the work and are steering in the right direction. The aim is to push the site up to 90+ in page speed tests in the next wave of updates.

Read the next post in this series: “Making ubuntu.com responsive: JavaScript considerations”


Read more
Anthony Dillon

This post is part of the series ‘Making ubuntu.com responsive’.

When working to make the current web style guide responsive, we made some large updates to the core Sass, and decided to update the file and folder structure of our styles. I love reading about other people’s and organisations’ Sass architectures, so I thought it would be only right to share the structure that has evolved over time here at Canonical.

Let’s get right to it.

  • core.scss
  • core-constants.scss
  • core-grid.scss
  • core-mixins.scss
  • core-print.scss
  • core-templates.scss
  • patterns
    • patterns.scss
    • _arrows.scss
    • _blockquotes.scss
    • _boxes.scss
    • _buttons.scss
    • _contextual-footer.scss
    • _footer.scss
    • _forms.scss
    • _header.scss
    • _helpers.scss
    • _image-centered.scss
    • _inline-logos.scss
    • _lists.scss
    • _notifications.scss
    • _resource.scss
    • _rows.scss
    • _slider.scss
    • _structure.scss
    • _tabbed-content.scss
    • _tooltips.scss
    • _typography.scss
    • _vertical-divider.scss

I won’t describe each file, as some are self-explanatory, but let’s go through the core files to understand the structure.

core.scss contains the styling for core HTML elements, such as img, p, ul, etc. You could say it acts as a reset file customised to match our style.

core-constants.scss is home to all the variables used throughout: all the set colours used on the site, the base font size, and some extra grid variables used to extend the layout.

core-grid.scss holds all the responsive grid styles. This file mainly consists of generated code from Gridinator, which we extended with breakpoints to modify the layout as the viewport gets smaller. You can read more about how we did this in “Making ubuntu.com responsive: making our grid responsive”.

core-mixins.scss holds all the mixins used in our Sass.

core-templates.scss holds full-page styling classes. Without a template class applied to the <body> of a page you get the standard page style; if you add a template class, you get the styles appropriate for that template.

Web team front end working on the web style guide.

Divide and conquer

Patterns were originally all in one huge .scss file, which became difficult to maintain, so we decided to split the patterns file apart into a patterns folder. This allows us to find things and work in a much more modular way. It involved manually working through the file, moving each component’s styles into a new file and importing it back in the same position.

Naming conventions

Our mission when setting up the naming convention for our CSS was to make the markup as human readable as possible.

We decided early on to use an almost object-oriented inheritance system for large structural elements. For example, the class .row can be extended by adding the .row-enterprise class, which applies a dark aubergine background and modifies the elements inside to display correctly on a dark background.

We switch to a single-class approach for small modular components, such as lists. If you apply the class .list, the list items are styled with our simple Ubuntu list style. This can be modified by changing the class to .list-ubuntu or .list-canonical, which apply their corresponding brand-themed bullets to the items.

List styles.

The decision to use different systems arose from the desire to keep the markup clean and easy to skim-read by limiting the number of classes applied to each element. We could have continued with the inheritance system for smaller elements, but that would have led to two or more classes (.list and .list-canonical) on each element, which felt like overkill for every small component. For large structural elements such as rows, it’s easier to start with a .row class and add functionality and styling with extra classes.


We mainly use mixins to handle browser prefixes as we haven’t yet added a “prefixer” step to our build system.

A lot of our styles are quite specific and therefore would not benefit from being included as a mixin.

A note on Block, Element, Modifier syntax

We would like to have used the Block, Element, Modifier (BEM) syntax, as we think it is a good convention that is easy for people external to the project to understand and use. But we started this project back in 2013 with the syntax described above, which is now used on a number of sites across the Canonical/Ubuntu web estate, and the effort of converting every class name to follow the BEM convention would far outweigh the benefits it would return.


By splitting our bloated patterns file into multiple small modular files, we have made it much easier to maintain and to diagnose bugs within components. I would recommend that anyone in a similar situation finds the time to split their components into separate files sooner rather than later: the effort grows exponentially the longer it’s left.

Introducing linting to the production of the guidelines will keep our coding style consistent across the team and help readability for new team members.

Read the next post in this series: “Making ubuntu.com responsive: ensuring performance”


Read more