Canonical Voices

Posts tagged with 'development'

Anthony Dillon

Why we needed a new framework

Some time ago the web team at Canonical developed a CSS framework that we called ‘Guidelines’. Guidelines helped us maintain our online visual language across all our sites, and comprised a number of base and component Sass files which were combined and served as a monolithic CSS file from our asset server.

We began to use Guidelines as the baseline styles for a number of our sites: www.ubuntu.com, www.canonical.com, etc.

This worked well until we needed to update a component or base style. With each edit we had to check it wasn’t going to break any of the sites we knew used it, and hope it didn’t break the sites we were not aware of.

Another deciding factor for us was the feedback we started receiving as internal teams adopted Guidelines. We received a resounding request to break the components into modular parts so that teams could customise which ones they included. Another request we heard a lot was the ability to pull the Sass files locally for offline development while keeping the styling up to date.

Therefore, we set out to develop a new and improved build and delivery system, which led us to develop a whole new architecture and completely refactor the Sass infrastructure.

This gave birth to Vanilla: our new and improved CSS framework.

Building Vanilla

The first decision we made was to remove the “latest” version target, so sites could no longer link directly to the bleeding-edge version of the styles. Instead, sites should target a specific version of Vanilla and manually upgrade as new versions are released. This helps in two ways: shifting the testing and QA to the maintainers of each particular site allows for staggered updates rather than a sweeping update to all sites at once, and it allows us to modify current modules without affecting sites until they apply the update.

We knew that we needed to make the update process as easy as possible to help other teams keep their styles up to date. We decided against using Bower as our package manager and chose NPM to reduce the number of dependencies required to use Vanilla.

We knew we needed a build system and, as it was a greenfield project, the world was our oyster. Really it came down to Gulp vs Grunt. We had a quick discussion and decided to run with Gulp as we had more experience with it. Gulp had all the plugins we required and we all preferred the Gulp syntax instead of the Grunt spaghetti.

We had a number of JavaScript functions in Guidelines to add simple dynamic functionality to our sites, such as equal heights or tabbed content. The team decided we wanted to try to remove the JS dependency for Vanilla and make it a pure CSS framework. So we stepped through each function and, most importantly, worked out whether we required it at all. If we did, we tried to develop a CSS replacement with an acceptable degradation for less modern browsers. We managed to cover all required functions with CSS and removed some older functionality we no longer wanted.

Using Vanilla

Importing Vanilla

To start using Vanilla simply run $ npm install vanilla-framework --save in the root of your site. Then in your main stylesheet simply add:


@import "../path/to/node_modules/vanilla-framework/build/scss/build.scss";
@include vanilla;

The first line in the code above imports the main build file of the vanilla-framework. The second line then includes the framework’s styles, as it is entirely controlled with mixins, which will be explained in a future post.

Now that you have Vanilla imported correctly you should see some default styling applied to your site. To take full advantage of the framework, a small number of markup changes are required.

Markup amendments

There are a number of classes used by Vanilla to set up the site wrappers. Please refer to the source for our demo site.


Conclusion

This is still a work in progress, but we are close to releasing www.ubuntu.com and www.canonical.com based on Vanilla. Please do use Vanilla; any feedback would be very much appreciated.

For more information please visit the Vanilla project page.

Read more
Pierre Bertet


Following the article “To converge onto mobile, tablet, and desktop, think Grid Units”, here is a technical description of the way the Grid System behaves. We will go through the following concepts: a Grid Unit, a Layout, a Panel, and a Multi-Column Layout.

Grid Unit

A Grid Unit (GU) is a virtual subdivision of screen space. The actual size, in pixels, of one Grid Unit is assigned by the OS depending on the device’s screen size and density, freeing the developer from worrying about these device-specific details. For more description of the system and its benefits, please see this design blog posting.

Note: there are only three target short-edge sizes in the grid system: 40, 50, and 90 GU. A Grid Unit cannot contain a fractional number of pixels, so if the screen width does not divide evenly by the desired number of Grid Units (40, 50, or 90), the remainder becomes the side margins.

Grid Unit Calculation

The width of a single Grid Unit is calculated as follows:

  • The width of the short edge of the screen is divided by the desired number of grid units (integer division).
  • The remainder, if any, gives us the size of the margins.
  • The quotient gives us the size of one Grid Unit.

In pseudocode:

margins = total_width mod layout_grid_units
grid_width = total_width - margins
grid_unit_width = grid_width / layout_grid_units

Example with a 540×960 screen and a 50 GU Layout

Diagram: 540px total portrait width = 20px margin + 500px (50 GU) + 20px margin.

margins = 540 mod 50 = 40
grid_width = 540 - margins = 500
grid_unit = grid_width / 50 = 10

Example with a 1600×2560 screen and a 90 GU Layout

Diagram: 1600px total portrait width = 35px margin + 1530px (90 GU) + 35px margin.

margins = 1600 mod 90 = 70
grid_width = 1600 - margins = 1530
grid_unit = grid_width / 90 = 17

Layout

A Layout represents the desired number of Grid Units for the short edge of the screen. That number will be used to calculate the width of a single Grid Unit in pixels, using the method described in the Grid Units section. For touch devices, the available layouts are 40 GU, 50 GU (phones or phablets), and 90 GU (tablets).

Landscape Grid Units Count Calculation

The number of Grid Units in Landscape Orientation is calculated as follows:

  • The width of the long edge of the screen is divided by the width of a single grid unit (integer division).
  • The remainder, if any, gives us the size of the margins.
  • The quotient gives us the number of Grid Units in the Landscape Orientation.

In pseudocode:

margins = total_width mod grid_unit_width
grid_width = total_width - margins
grid_unit_count = grid_width / grid_unit_width

Example with a 540×960 screen, 50 GU Layout and 1 GU = 10px

Diagram: 960px total landscape width = 96 GU, with no margins.

margins = 960 mod 10 = 0
grid_width = 960 - margins = 960
grid_unit_count = grid_width / 10 = 96

Panel

A Panel is a group of Grid Units. The number of Grid Units in a Panel can be any of the Layout sizes (provided it fits within the total number of Grid Units available), or variable for the remaining part.

Examples

90 GU Layout

90 GU (portrait orientation) = 40 GU Panel + 50 GU Panel

147 GU Layout

147 GU (landscape orientation) = 40 GU Panel + 50 GU Panel + 57 GU Panel (variable)

Try more combinations using the Grid System Tool.

Multi-Column Layout

A Multi-Column Layout is a set of columns that can be defined inside of a panel. It contains the following properties:

  • Side margins (before the first column and after the last column)
  • Gutters (between two columns)
  • Columns

It can use from one to six columns. In 40, 50 and 90 GU Panels, the Multi-Column Layouts have been manually selected. For other widths, an algorithm tries to find the best candidate.

The margins and gutters tend to have a 2 GU width, but this can vary depending on the available possibilities.

Examples

3 Columns in a 50 GU Panel

50 GU = 2 GU margin + 14 GU column + 2 GU gutter + 14 GU column + 2 GU gutter + 14 GU column + 2 GU margin

3 Columns in a 60 GU Panel (variable)

60 GU = 2 GU margin + 18 GU column + 1 GU gutter + 18 GU column + 1 GU gutter + 18 GU column + 2 GU margin

Try more combinations using the Grid System Tool.

Read more
Benjamin Keyser

In the converged world of Unity-8, applications will work on small mobile screens, tablets and desktop monitors (with a mouse and keyboard attached) as if by magic. To achieve this transformation for your own app with little to no extra work required when considering the UI, simply design using grid units for a few predetermined virtual screen targets. Combined with Ubuntu off-the-shelf UI components built with convergence in mind, most of the hard work is done, freeing developers and designers to focus on what’s most important to their users.

What’s a grid unit? And why 40, 50, or 90 of them?

A grid unit (GU) is a virtual measure of screen space that’s independent of device hardware details like pixels or aspect ratio: those complexities are mapped under the covers by Ubuntu. Instead, by targeting just three ‘fixed’ virtual GU portrait widths—40, 50, and 90 GU— you’re guaranteed to be addressing the largest number of devices, including the desktop, to a high degree of design quality and consistency where relative spacing and content sizing just works.

The 40, 50, and 90 GU dimensions correspond to smaller smartphones, larger smartphones/phablets, and tablets respectively in portrait mode. These particular panel-widths weren’t chosen arbitrarily: they were selected by analyzing the most popular device specs on the market and picking the portrait dimensions that would embrace the largest number of possibilities most successfully, including for the desktop (more on that later).

For example, compact phones such as the BQ Aquarius E4.5 are best suited to the 40 GU-wide virtual portrait screen, offering the right balance of content to screen real estate for palm-sized viewing. For larger phones with more screen space such as the Meizu MX4, the 50 GU layout is most fitting, allowing more room for content. Finally, for edge-to-edge tablet portrait layouts for the N7 or N10, the 90 GU layout works best.

Try this exercise

Having trouble envisioning the system in action? Close your eyes and imagine a two-dimensional graph paper divided into squares that can adapt according to just three simple rules:

  • It can only be 40, 50, or 90 whole units along the short edge but the long edge can be variable
  • The long edge (in landscape mode or on the desktop) will be the whole number of GUs that carves out the maximum area rectangle that will fit within any given device’s physical screen in landscape mode based on the physical dimension of the GU determined from portrait mode (in pixels)
  • The last rule is simple but key: the squares of the graph paper must always be square—the graph paper, just to push the image a bit too far—is made of something more like graphene than polypropylene (no squeezed or stretched GUs allowed.)

Try it for yourself here: https://dl.dropboxusercontent.com/u/360991/canonical/grid-units/grid-units.html

There is one additional factor that can impact the final available screen area, but it’s a bit of a technical convolution. The under-the-covers pixels to grid unit mapping can’t include fractional pixels (this may seem like an obvious point, admittedly). But at the end of the day, the user sees the largest possible version of the 40, 50, or 90 GU wide virtual screen that’s possible on any given device. That means that all you have to do as a designer or developer is plan for the virtual dimensions we’ve been talking about, and you’re assured your user is getting the best possible rendering.

Though the system may seem abstract at first, its benefits are all too easy to understand from a developer or designer standpoint: it’s far more predictable and simpler to design for layouts that follow rules than to try to account for a universe of idiosyncratic device possibilities. In addition, by using these layouts as the foundation, the convergence goal is much more easily achieved.

What about landscape & desktop? Use building blocks

By assembling these key portrait views together, it’s far easier to achieve landscape and desktop layouts than ever before. For example, if your app lends itself to a two panel layout, simply join together 40 and 50 GU phone layouts (that you’ve already designed) to achieve a landscape layout (or even a portrait tablet layout!)

Similarly, switching from portrait to landscape mode on tablet—also a desktop-friendly layout—could be as simple as joining a 40 GU layout and a 90 GU layout for a total of 130 GU, which fits nicely within both 16:9 and 16:10 tablet landscape screens as well as on any desktop monitor.

Since landscape and desktop layouts are the least predictable due to device variations and manual stretching by users, you can designate one of your panel layouts to be of flexible width, filling the available space using one of these strategies:

  • Center the layout in the available space
  • Stretch or squeeze the layout to fit the available space
  • Combine these two, depending on the individual components within the layout

More complex layouts can be achieved by joining three or more portrait layouts. For example, three 40 GU layouts can be joined side by side, which happens to fit perfectly into a 4:3 landscape tablet screen.

Columns, too

To help developers even further with one of the most common layouts—columnar or grid types—we’re adding a capability that maintains column-to-content size relationships across devices and the desktop the same way that type sizes are specified. This makes it very simple to achieve the proper content readability and density regardless of the device. For example, by specifying a “medium” sized column filled with “small” type, these relative relationships can be preserved throughout the converged-device experience without having to manually dig into pixel measurements.

The column capability can also adapt responsively to extra wide, variable landscape layouts, such as 16:10 aspect ratio tablets or manually stretched desktop layouts. This means that as more space becomes available as a user stretches the corners of the app window on the desktop, additional columns can be added on cue, providing more room for content.

Putting it all together across all form factors

By making screen dimensions virtual, we can minimize the vagaries of individual hardware specs that can frustrate device-convergent thinking and help developers focus more on their user’s needs. A combination of snap-together layouts, automated column layouts, and adaptive UI toolkit components like the header, list component, and bottom edge component help ensure users will experience a consistent, elegant journey from mobile to desktop and back again.

 

 

 

Read more
Robin Winslow

Despite some reservations, it looks like HTTP/2 is very definitely the future of the Internet.

Speed improvements

HTTP/2 may not be the perfect standard, but it will bring with it many long-awaited speed improvements to internet communication:

  • Sending many different resources in the first response
  • Multiplexing requests to prevent blocking
  • Header compression
  • Keeping connections alive
  • Bi-directional communication

Changes in long-held performance practices

I read a very informative post today (via Web Operations Weekly) which laid out all the ways this will change some deeply embedded performance practices for front-end developers, such as concatenating assets and serving them from multiple domains.

Each of these practices is a hack that makes website setups more complex and more opaque, but with the goal of speeding up front-end performance by working around limitations in HTTP. Fortunately, these somewhat ugly practices are no longer necessary with HTTP/2.

Importantly, Matt Wilcox points out that in an HTTP/2 world, these practices might actually slow down your website, for the following reasons:

  • If you serve concatenated CSS, JavaScript or image files, it’s likely you’re sending more content than you strictly need to for each page
  • Serving assets from different domains prevents HTTP/2 from reusing existing connections, forcing it to open extra ones

But not yet…

This is all very exciting, but note that we can’t and shouldn’t start changing our practices yet. Even server-side support for HTTP/2 is still patchy, with nginx only promising full support by the end of 2015 (with Microsoft’s IIS, surprisingly, putting other servers to shame).

But of course the main limiting factor will, as usual, be browsers:

  • Firefox leads the way, with support since version 36
  • Chrome has support for spdy4 (the precursor to HTTP/2), but it isn’t enabled by default yet
  • Internet Explorer 11 supports HTTP/2 only in Windows 10 beta

As usual, the main limiting factor will be waiting for the market share of older versions of Internet Explorer to drop off. Braver organisations may want to be progressive by deliberately slowing down the experience for people on older browsers to speed it up for more up-to-date ones, and hence push adoption of good technology.

If you want to get really clever, you could serve a different website structure based on the user agent string, but this would really be a pain to implement and I doubt many people would want to do this.

Even with the most progressive strategy, I doubt anyone will be brave enough to drop decent HTTP/1 performance until at least 2016, as this is when nginx support should land; Windows 10 and therefore IE 11 will have had some time to gain traction and of course Internet Explorer market share in general will have continued to drop in favour of Chrome and Firefox.

TL;DR: We front-end developers should be ready to change our ways, but we don’t need to worry about it just yet.

Originally posted on robinwinslow.co.uk.

Read more
Jouni Helminen

Ubuntu community devs Andrew Hayzen and Victor Thompson chat with lead designer Jouni Helminen. Andrew and Victor have been working on open source projects for a couple of years and have done a great job on the Music application that is now rolling out on phone, tablet and desktop. In this chat they share their thoughts on open source, QML, app development, and tips on how to get started contributing and developing apps.

If you want to start writing apps for Ubuntu, it’s easy. Check out http://developer.ubuntu.com, get involved on Google+ Ubuntu App Dev – https://plus.google.com/communities/1… – or contact alan.pope@canonical.com – you are in good hands!

Check out the video interview here :)

Read more
Jussi Pakkanen

There are two different schools of thought when it comes to embedded development such as working on Ubuntu phone. The first group feels that compiling on the device is better and the other group prefers doing cross-builds. Both have their advantages and disadvantages.

One of the biggest disadvantages for compiling on-device is the fact that a reflash wipes all your state and you need to compile your project from scratch again. This is unfortunate if a fresh build takes 20 minutes (which is not that uncommon).

The question then becomes whether you can avoid this. It turns out that you can by doing a ccache transplant. Here’s how to do it.

First you install ccache and set up your project:

sudo apt install ccache
CC='ccache gcc' CXX='ccache g++' cmake <your flags>

and develop as normal. Just before reflashing do a backup:

tar czf ccachedir.tar.gz .ccache/             # on device
adb pull /home/phablet/ccachedir.tar.gz       # on desktop

Then flash and put your ccachedir back:

adb push ccachedir.tar.gz /home/phablet/ccachedir.tar.gz    # on desktop
tar xf ccachedir.tar.gz                                     # on device

Then set up your project with the same CMake stanza as above, run your build command and you are back to having a fully up-to-date build on your reflashed device.
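If you want to confirm the cache actually survived the transplant, ccache can print its statistics (this check is optional and not part of the original workflow):

ccache -s    # shows cache size and hit/miss counts; the size should match what you backed up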

A test done with a small-to-medium Qt5 project dropped the build time from 6 minutes to 30 seconds.

Read more
Robin Winslow

In the design team we keep some projects in Launchpad (as canonical-webmonkeys) and some projects in GitHub (as UbuntuDesign), meaning we work in both Bazaar and Git.

The need to synchronise Github to Launchpad

Some of our GitHub projects also need to be stored in Launchpad, as some of our systems only have access to Launchpad repositories.

Initially we were converting these projects manually at regular intervals, but this quickly became too cumbersome.

The Bazaar synchroniser

To manage this we created a simple web service to synchronise Git projects to Bazaar. This script basically automates the techniques described in our previous article: it pulls down the GitHub repository, converts it to Bazaar and pushes it up to Launchpad at a specified location.

It’s a simple Python WSGI app which can be run directly or through a server that understands WSGI like gunicorn.

Setting up the server

Here’s a guide to setting up our bzr-sync project on a server somewhere to sync Github to Launchpad.

System dependencies

Install necessary system dependencies:
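A plausible set of packages on an Ubuntu server would be something like the following (the exact package names are an assumption, not necessarily the author’s list):

# Python, pip, Git and Bazaar plus the fast-import plugin used for conversions
sudo apt-get install python python-pip git bzr bzr-fastimport
# gunicorn to serve the WSGI app
sudo pip install gunicorn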

User permissions

First off, you’ll have to make sure you set up a user on whichever server is to run this service, one which has read access to your GitHub projects and write access to your Launchpad projects:
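For example, you might give that user an SSH key registered with both GitHub and Launchpad and tell Bazaar who it is (the username and email below are placeholders):

# run as the user that will own the sync service
ssh-keygen -t rsa -C "bzr-sync@example.com"      # then add the public key to GitHub and Launchpad
bzr launchpad-login your-launchpad-username
bzr whoami "Web Team Bot <bzr-sync@example.com>"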

Cloning the project

Then you should clone the project and install dependencies. We placed it at /srv/bzr-sync but you can put it anywhere:
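Roughly, assuming the repository lives under the UbuntuDesign GitHub organisation and ships a requirements.txt (both assumptions):

sudo git clone https://github.com/ubuntudesign/bzr-sync.git /srv/bzr-sync
cd /srv/bzr-sync
sudo pip install -r requirements.txt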

Preparing gunicorn

We should serve this over HTTPS so our auth_token will remain secret. This means you’ll need an SSL certificate keyfile and certfile. You should get one from a certificate authority, but for testing you could just generate a self-signed certificate.

Put your certificate files somewhere accessible (like /srv/bzr-sync/certs/), and then test out running your server with gunicorn:
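Something along these lines should work, assuming the WSGI application is exposed as app:application (the module name, port and certificate paths are assumptions):

gunicorn app:application --bind 0.0.0.0:443 \
    --keyfile /srv/bzr-sync/certs/bzr-sync.key \
    --certfile /srv/bzr-sync/certs/bzr-sync.crt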

Try out the sync server

You should now be able to synchronise a Github repository with Launchpad by pointing your browser at:

https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}

You should be able to see the progress of the conversion as command-line output from the above gunicorn command.

Add upstart job

Rather than running the server directly, we can set up an upstart job to manage the process. This way the bzr-sync service will restart if the server restarts.

Here’s an example of an upstart job, which we placed at /etc/init/bzr-sync.conf:
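A minimal sketch of such a job, reusing the gunicorn command from above (all stanza values are assumptions, not the author’s original configuration):

sudo tee /etc/init/bzr-sync.conf <<'EOF'
description "bzr-sync: synchronise Git repositories to Bazaar branches"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
chdir /srv/bzr-sync
exec gunicorn app:application --bind 0.0.0.0:443 --keyfile /srv/bzr-sync/certs/bzr-sync.key --certfile /srv/bzr-sync/certs/bzr-sync.crt
EOF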

You can now start the bzr-sync server as a service:
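With upstart that is simply:

sudo start bzr-sync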

And output will be logged to /etc/upstart/bzr-sync.log.

Setting up Github projects

Now to use this sync server to automatically synchronise your Github projects to Launchpad, you simply need to add a post-commit webhook to ping a URL of the form:

https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}

Creating a webhook

In your repository settings, select “Webhooks and Services”, then “Add webhook”, and enter the following information:

  • Payload URL: https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}
  • Content type: “application/json”
  • Secret: -leave blank-
  • Select Just the push event
  • Tick Active
Saving a webhook

NB: Notice the Disable SSL verification button. By default, the hook will only work if your server has a valid certificate. If you are testing with a self-signed one then you’ll need to disable this SSL verification.

Now whenever you commit to your Github repository, Github should ping the URL, and the server should synchronise your repository into Launchpad.

Read more
Robin Winslow

Here in the design team we use both Bazaar and Git to keep track of our projects’ history.

We quite often end up converting our projects from Bazaar to Git or vice versa. Here are some tips on how to do that.

To convert revision history between Git and Bazaar, we will use their respective fastimport features.

Install bzr-fastimport

In either case, you need the fastimport plugin for Bazaar, which installs both bzr fast-import and bzr fast-export:
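On Ubuntu this should be a single package install (the package name below is how the plugin was shipped in the archive):

sudo apt-get install bzr-fastimport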

Bazaar to Git

To convert a Bazaar branch to Git, open a Bazaar branch of your project and do the following:
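One common recipe, run from inside the Bazaar branch, looks roughly like this (not necessarily the exact commands the author used):

git init
bzr fast-export --plain . | git fast-import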

Now you should have all the revision history for that Bazaar branch in Git:
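For example, a quick check that the history made it across:

git log --oneline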

(From Astrofloyd’s blog)

 

Git to Bazaar

Converting from Git to Bazaar is slightly different. Because Bazaar stores branches in sub-folders, while Git stores branches all in the same directory, when you convert a Git repository to Bazaar, it will create a directory tree for the branches:
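One common recipe, run from inside the Git repository and writing into a bzr-repo directory (the directory name the following text assumes), is roughly:

git fast-export --all | bzr fast-import - bzr-repo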

bzr-repo will now contain a folder for each branch that was in your Git repository. You’re probably most interested in trunk, which will be at bzr-repo/trunk, or perhaps bzr-repo/trunk.remote:
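From there you could, for instance, branch the trunk out into a standalone working tree (the target directory name is arbitrary):

bzr branch bzr-repo/trunk my-project-trunk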

(From the Bazaar wiki)

 

Keeping a project in both Git and Bazaar

You may wish to keep a project in both Git and Bazaar.

 

Create ignore files for both systems

As your project may be used in either Git or Bazaar, you should create practically duplicate .gitignore and .bzrignore files, the only difference being that the .bzrignore should ignore the .git directory, and the .gitignore should ignore the .bzr directory. You should also make sure you ignore the bzr-repo directory – e.g.:
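A sketch of the two ignore files, created from the shell (the exact contents will depend on your project):

cat > .gitignore <<'EOF'
.bzr
bzr-repo
EOF

cat > .bzrignore <<'EOF'
.git
bzr-repo
EOF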

And keep both ignore files in all versions of the project.

Only work in one repository

It is not practical to be doing your actual work in both systems, because converting from one to the other will overwrite any history in the destination repository. For this reason you need to choose to do all your work in either Git or Bazaar, and then regularly convert it to the other using the above conversion instructions.

Read more
Daniel Holbach

For some time we have had training materials available for learning how to write Ubuntu apps.  We’ve had a number of folks organising App Dev School events in their LoCo team. That’s brilliant!

What’s new now are training materials for developing scopes!

It’s actually not that hard. If you have a look at the workshop, you can prepare yourself quite easily for giving the session at a local event.

We are working on an updated developer site right now, so for the time being take a look at the following pages if you’re interested in running such a session yourself:

I would love to get feedback, so please let me know how the materials work out for you!

Read more
Jussi Pakkanen

Bug finding tools

In Canonical’s recent devices sprint I held a presentation on automatic bug detection tools. The slides are now available here and contain info on tools such as:

Enjoy.

 

Read more
Robin Winslow

On release day we can get up to 8,000 requests a second to ubuntu.com from people trying to download the new release. In fact, last October (13.10) was the first release day in a long time that the site didn’t crash under the load at some point during the day (huge credit to the infrastructure team).

Ubuntu.com has been running on Drupal, but we’ve been gradually migrating it to a more bespoke Django based system. In March we started work on migrating the download section in time for the release of Trusty Tahr. This was a prime opportunity to look for ways to reduce some of the load on the servers.

Choosing geolocated download mirrors is hard work for an application

When someone downloads Ubuntu from ubuntu.com (on a thank-you page), they are actually sent to a nearby mirror, one of the 300 or so mirror sites.

To pick a mirror for the user, the application has to:

  1. Decide from the client’s IP address what country they’re in
  2. Get the list of mirrors and find the ones that are in their country
  3. Randomly pick them a mirror, while sending more people to mirrors with higher bandwidth

This process is by far the most intensive operation on the whole site, not because these tasks are particularly complicated in themselves, but because it needs to be done for each and every user – potentially 8,000 a second – while every other page on the site can be aggressively cached to prevent most requests from hitting the application itself.

For the site to be able to handle this load, we’d need to load-balance requests across perhaps 40 VMs.

Can everything be done client-side?

Our first thought was to embed the entire mirror list in the thank-you page and use JavaScript in the users’ browsers to select an appropriate mirror. This would drastically reduce the load on the application, because the download page would then be effectively static and cache-able like every other page.

The only way to reliably get the user’s location client-side is with the geolocation API, which is only supported by 85% of users’ browsers. Another slight issue is that the user has to give permission before they could be assigned a mirror, which would slightly hinder their experience.

This solution would inconvenience users just a bit too much. So we found a trade-off:

A mixed solution – Apache geolocation

mod_geoip2 for Apache can apply server rules based on a user’s location and is much faster than doing geolocation at the application level. This means that we can use Apache to send users to a country-specific version of the download page (e.g. the British desktop thank-you page) by adding &country=GB to the end of the URL.

These country specific pages contain the list of mirrors for that country, and each one can now be cached, vastly reducing the load on the server. Client-side JavaScript randomly selects a mirror for the user, weighted by the bandwidth of each mirror, and kicks off their download, without the need for client-side geolocation support.

This solution was successfully implemented shortly before the release of Trusty Tahr.

(This article was also posted on robinwinslow.co.uk)

Read more
Jussi Pakkanen

A use case that pops up every now and then is to have a self-contained object that needs to be accessed from multiple threads. The problem appears when the object, as part of its usual operation, calls its own methods. This leads to tricky locking operations, a need to use a recursive mutex or something else that is non-optimal.

Another common approach is to use the pimpl idiom, which hides the contents of an object inside a hidden private object. There are ample details on the internet, but the basic setup of a pimpl’d class is the following. First of all we have the class header:

class Foo {
public:
    Foo();
    void func1();
    void func2();

private:
    class Private;
    std::unique_ptr<Private> p;
};

Then in the implementation file you have first the definition of the private class.

class Foo::Private {
public:
    Private();
    void func1() { ... };
    void func2() { ... };

private:
   void privateFunc() { ... };
   int x;
};

Followed by the definition of the main class.

Foo::Foo() : p(new Private) {
}

void Foo::func1() {
    p->func1();
}

void Foo::func2() {
    p->func2();
}

That is, Foo only calls the implementation bits in Foo::Private.

The main idea to realize is that Foo::Private can never call functions of Foo. Thus if we can isolate the locking bits inside Foo, the functionality inside Foo::Private becomes automatically thread safe. The way to accomplish this is simple. First you add a (public) std::mutex m to Foo::Private. Then you just change the functions of Foo to look like this:

void Foo::func1() {
    std::lock_guard<std::mutex> guard(p->m);
    p->func1();
}

void Foo::func2() {
    std::lock_guard<std::mutex> guard(p->m);
    p->func2();
}

This accomplishes many things nicely:

  • Lock guards make locks impossible to leak, no matter what happens
  • Foo::Private can pretend that it is single-threaded which usually makes implementation a lot easier

The main drawback of this approach is that the locking is coarse, which may be a problem when squeezing out ultimate performance. But usually you don’t need that.

Read more
Jussi Pakkanen

There are usually two different ways of doing something. The first is the correct way. The second is the easy way.

As an example of this, let’s look at using the functionality of the C++ standard library. The correct way is to use the fully qualified name, such as std::vector or std::chrono::milliseconds. The easy way is to have using namespace std; and then just use the class names directly.

The first way is the “correct” one as it prevents symbol clashes, among a bunch of other good reasons. The latter leads to all sorts of problems and for this reason many style guides prohibit its use.

But there is a catch. Software is written by humans and humans have a peculiar tendency.

They will always do the easy thing.

There is no possible way for you to prevent them from doing that, apart from standing behind their back and watching every letter they type.

Any sort of system that relies, in any way, on people doing the right thing rather than the easy thing is doomed to fail from the start. It. Will. Not. Work. And it can’t be made to work. Trying to force it to work leads only to massive shouting and bad blood.

What does this mean to you, the software developer?

It means that the only way your application/library/tool/whatever is going to succeed is if the correct thing to do is also the simplest thing to do. That is the only way to make people do the right thing consistently.

Read more
Anthony Dillon


This post is part of the series ‘Making ubuntu.com responsive‘.

The JavaScript used on ubuntu.com is very light. We limit its use to small functional elements of the web style guide, which act to enhance the user experience but are never required to deliver the content the user is there to consume.

At Canonical we use YUI as our JavaScript framework of choice. We have many years of experience using it for our websites and web apps, and therefore have a large knowledge base to fall back on. We have a single core.js which contains a number of functions that are called on when required.

Below I will discuss some of the functions and workarounds we have provided in the web style guide.

Providing fallbacks

When considering our transition from PNGs to SVGs across the site, we provided a fallback for background images with Modernizr, resetting the background image with the .no-svg class on the body. Our approach to a fallback replacement for images in markup was a JavaScript snippet from CSS Tricks – SVG Fallbacks – which I converted to YUI:

The snippet checks whether Modernizr exists in the namespace, then interrogates the Modernizr object for SVG support. If the browser does not support SVGs, we loop through each image with .svg contained in its src and replace the src with the same path and filename but a .png extension. This means all SVGs need to have a PNG version at the same location.

Navigation and fallback

The mobile navigation on ubuntu.com uses JavaScript to toggle the menu open and closed. We decided to use JavaScript because it’s well supported. We explored using :target as a pure CSS solution, but this selector isn’t supported in Internet Explorer 7, which represented a fair chunk of our visitors.

The navigation on ubuntu.com on small screens.

For browsers that don’t support JavaScript we resort to displaying the “burger” icon, which acts as an in-page anchor to the footer which contains the site navigation.

Equal height

As part of the guidelines project we needed a way of setting a number of elements to the same height. We would love to use the flexbox to do this but the browser support is not there yet. Therefore we developed a small JavaScript solution:

This function finds all elements with an .equal-height class. We then look for child divs or lis, measure the tallest one, and set all these children to that height.

Using combined YUI

One of the obstacles discovered when working on this project was that YUI will load modules from an http (non-secure) domain as the library requires them. This of course causes issues on any site that is hosted on a secure domain. We definitely didn’t want to restrict the use of the web style guide to non-secure sites, therefore we needed to combine all required modules into a single YUI file.

To combine your own YUI visit YUI configurator. Add the modules you require and copy the code from the Output Console panel into your own hosted file.

Final thoughts

Obviously we had a fairly easy time of making our JavaScript responsive, as we only use the minimum required as a general principle on our site. But by integrating tools like Modernizr into our workflow and keeping on top of CSS browser support, we can keep what we do lean and current.

Read the next post in this series: “Making ubuntu.com responsive: testing on multiple devices”

Reading list

Read more
Anthony Dillon

This post is part of the series ‘Making ubuntu.com responsive‘.

Performance has always been one of the top priorities when it came to building the responsive ubuntu.com. We started with a list of performance snags and worked to improve each one as much as possible in the time we had. Here is a quick run through of the points we collected and the way we managed to improve them.

Asset caching

We now have a number of websites using our web style guide. Because of this, we needed to deliver assets on both http and secure https domains. We decided to build an asset server to support the guidelines and other sites that require asset hosting.

This gave us the ability to increase the far future expires (FFE) of each file. By doing so the file is cached and not re-requested, which gives us a much faster round trip. But as we still need to be able to update a single file, we cannot set the FFE too far in the future. We plan to resolve this with a new and improved assets system, which is currently under development.

The new asset system will have an internal frontend to upload a binary file. This will provide a link to the asset with a 6 character hexadecimal string attached to the file name.

/ho686yst/favicon.ico

The new system restricts the ability to edit or update a file: you can only upload a new one and change the link in the markup. This guarantees that the asset stays the same forever.

Minification and concatenation

We introduced a minification and concatenation step to the build of the web style guide. This saves precious bytes and reduces the number of requests performed by each page.

We use the sass ruby gem to generate minified and concatenated CSS in production. We also run the small amount of JavaScript we have through UglifyJS before delivering to production.

Compressed images

Images were the main issue when it came to performance.

We had a look at the file sizes of some of our key images (like the ones in the tablet section of the site) and were shocked to discover we hadn’t been treating our visitors’ bandwidth kindly.

After analysing a handful of images, we decided to have a look into our assets folder and flag the images that were over 100 KB as a first go.

One of the most time-consuming jobs in this project was converting to SVG every image that could be converted. This meant recreating pictograms and illustrations as vectors from the earlier PNGs. Any images that could not be recreated as vector graphics were heavily compressed, which squeezed an alarming amount out of the original files.

We continued this for every image on the site. By doing so the total reduction across the site was 7.712MB.

Reduce required fonts

We currently load a large selection of the Ubuntu font.

<link href='//fonts.googleapis.com/css?family=Ubuntu:400,300,300italic,400italic,700,700italic%7CUbuntu+Mono' rel='stylesheet' type='text/css' />

The designers are exploring the patterns of the present and the ideal future to discover unneeded weights. Since the move from normal font weight to light a few months ago as our base font style, we rarely use the bold weight (700) anymore, resorting to normal (400) for highlighting text.

Once we determine which weights we can drop, we will be able to make significant savings, as seen below:

Reducing loaded fonts: before and after.

Using SVG

Taking the leap to SVGs over PNGs caused a number of issues. We decided to load SVG files rather than use inline SVGs to keep our markup clean and easy to read and update. This meant we needed to provide four differently coloured images for each pictogram.


We introduced Modernizr to give us an easy way to detect browsers that do not support SVGs and replace the image with PNGs of the same path and name.

Remove unnecessary enhancements

We explored a parallaxing effect for our site’s background with JavaScript. This worked well on normal resolution screens but lagged on retina displays, so we decided not to do it and set the background position to static instead — user experience is always paramount and trumps visual enhancements.

Future improvements

One of the things in our roadmap is to remove unused styles remaining in the stylesheets. There are a number of solutions for this such as grunt-uncss.

Conclusion

There is still a lot to do but we have definitely broken the back of the work needed to steer ubuntu.com in the right direction. The aim is to push the site up to 90+ in the page speed tests in the next wave of updates.

Read the next post in this series: “Making ubuntu.com responsive: JavaScript considerations”

Reading list

Read more
Anthony Dillon

This post is part of the series ‘Making ubuntu.com responsive‘.

When working to make the current web style guide responsive, we made some large updates to the core Sass. We decided to update the file and folder structure of our styles. I love reading about other people’s or organisations’ Sass architectures, so I thought it would be only right to share the structure that has evolved over time here at Canonical.

Let’s get right to it.

  • core.scss
  • core-constants.scss
  • core-grid.scss
  • core-mixins.scss
  • core-print.scss
  • core-templates.scss
  • patterns
    • patterns.scss
    • _arrows.scss
    • _blockquotes.scss
    • _boxes.scss
    • _buttons.scss
    • _contextual-footer.scss
    • _footer.scss
    • _forms.scss
    • _header.scss
    • _helpers.scss
    • _image-centered.scss
    • _inline-logos.scss
    • _lists.scss
    • _notifications.scss
    • _resource.scss
    • _rows.scss
    • _slider.scss
    • _structure.scss
    • _tabbed-content.scss
    • _tooltips.scss
    • _typography.scss
    • _vertical-divider.scss

I won’t describe each file as some are self-explanatory but let’s just go through the core files to understand the structure.

core.scss contains the core HTML element styling, such as img, p, ul, etc. You could say this acts as a reset file customised to match our style.

core-constants.scss is home to all the variables used throughout: all the set colours used on the site, the base font size and some extra grid variables used to extend the layout.

core-grid.scss holds all of the responsive grid styles. This file mainly consists of generated code from Gridinator, which we extended with breakpoints to modify the layout as the viewport gets smaller. You can read more about how we did this in “Making ubuntu.com responsive: making our grid responsive”.

core-mixins.scss holds all the mixins used in our Sass.

core-templates.scss is used to hold full-page styling classes. Without applying a template class to the <body> of a page you get a standard page style; if you add a template class, you will get the styles that are appropriate for that template.

Web team front end working on the web style guide.

Divide and conquer

Patterns were originally all in one huge Scss file, which became difficult to maintain, so we decided to split the patterns file apart into a patterns folder. This allows us to find things and work in a much more modular way. It involved manually working through the file, moving each component’s styles into a new file and importing it back in the same position.

Naming conventions

Our mission when setting up the naming convention for our CSS was to make the markup as human readable as possible.

We decided early on to use an almost object-oriented inheritance system for large structural elements. For example, the class .row can be extended by adding the .row-enterprise class, which applies a dark aubergine background and modifies the elements inside to display correctly on a dark background.

We switch to a single class approach for small modular components, such as lists. If you apply the class .list the list items are styled with our simple Ubuntu list style. This can be modified by changing the class to .list-ubuntu or .list-canonical, which apply their corresponding branding themed bullets to the items.

List styles.

The decision to use different systems arose from the desire to keep the markup clean and easy to skim read by limiting the classes applied to each element. We could have continued with the inheritance system for smaller elements but that would have led to two or more classes (.list and .list-canonical) for each element. We felt this was overkill for every small component. For large structural elements such as rows it’s easier to start with a .row class and add functionality and styling by adding classes.

Mixins

We mainly use mixins to handle browser prefixes as we haven’t yet added a “prefixer” step to our build system.

A lot of our styles are quite specific and therefore would not benefit from being included as a mixin.

A note on Block, Element, Modifier syntax

We would like to have used the Block, Element, Modifier (BEM) syntax as we think it is a good convention and easy for people external to the project to understand and use. But since we started this project back in 2013 with the syntax described above, which is now used on a number of sites across the Canonical/Ubuntu web real estate, the effort to convert every class name to follow the BEM naming convention would far outweigh the benefits it would return.

Conclusion

By splitting our bloated patterns file into multiple small modular files we have made it much easier to maintain and to diagnose bugs within components. I would recommend anyone in a similar situation to find the time to split the components into separate files sooner rather than later. The effort grows exponentially the longer it’s left.

Introducing linting to the production of the guidelines will keep our coding style consistent throughout the team and help readability for new members of the team.

Read the next post in this series: “Making ubuntu.com responsive: ensuring performance”

Reading list

Read more
Jussi Pakkanen

Code review is generally acknowledged to be one of the major tools in modern software development. The reasons for this are simple: it spreads knowledge of the code around the team, obvious bugs and design flaws are spotted early, which makes everyone happy, and so on.

But yet our code bases are full of horrible flaws, glaring security holes, unoptimizable algorithms and everything else that drives a grown man to sob uncontrollably. These are the sorts of things code review was designed to stop and prevent, so why are they there? Let’s examine this with a thought experiment.

Suppose you are working on a team. Your team member Joe has been given the task of implementing new functionality. For simplicity’s sake let us assume that the functionality is adding together two integers. Then off Joe goes and returns after a few days (possibly weeks) with something.

And I really mean Something.

Instead of coding the addition function, he has created an entirely new framework for arbitrary arithmetic operations. The reasoning for this is that it is “more general” because it can represent any mathematical operation (only addition is implemented, though, and trying to use any other operation fails silently with corrupt data). The core is implemented in a multithreaded async callback spaghetti hell that has a data race 93% of the time (the remaining 7% covers the one existing test case).

There is only one possible code review for this kind of an achievement.

Review result: Rejected
Comments: If there was a programmer's equivalent to chemical
castration, it would already have been administered to you.

In the ideal world that would be it. The real world has certain impurities as far as this thing goes. The first thing to note is that you have to keep working with whoever wrote the code for an unforeseeable amount of time. Aggravating your coworkers consistently is not a very nice thing to do and gives you the unfortunate label of “assh*ole”. Things get even worse if Joe is in any way your boss, because criticising his code may get you on the fast track to the basement or possibly the unemployment line. The plain fact, however, is that this piece of code must never be merged. It can’t be fixed by review comments. All of it must be thrown away and replaced with something sane.

At this point office politics enter the fray. Most corporations have deadlines to meet and products to ship. Trying to block the entry of this code (which implements a Feature, no less, or at least a fraction of one) makes you the bad guy. The code that Joe has written is an expense, and if there is one thing organisations do not want to hear it is the fact that they have just wasted a ton of effort. The Code is There and it Must Be Used this Instant! Expect to hear comments of the following kind:

  • Why are you being so negative?
  • The Product Vision requires addition of two numbers. Why are you working against the Vision?
  • Do you want to be the guy that single-handedly destroyed the entire product?
  • This piece of code adds a functionality we did not have before. It is imperative that we get it in now (the product is expected to ship in one year from now)!
  • There is no time to rewrite Joe’s work so we must merge this (even though reimplementing just the functionality would take less effort than even just fixing the obvious bugs)

This onslaught continues until eventually you give in, do the “team decision”, accept the merge and drink yourself unconscious fully aware of the fact that you have to fix all these bugs once someone starts using them (you are not allowed to rewrite it, you must fix the existing code, for that is Law). For some reason or another Joe seems to have magically been transferred somewhere else to work his magic.

For this simple reason code review does not work very well in most offices. If you only ever get comments about how to format your braces, this may be affecting you. In contrast code reviews work quite well in other circumstances. The first one of them is the Linux kernel.

The code that gets into the kernel is being watched over by lots of people. More importantly, it is being watched over by people who don’t work for you, your company or their subsidiaries. Linus Torvalds does not care one iota about your company’s quarterly sales goals, launch dates or corporate goals. The only thing he cares about is whether your code is any good. If it is not, it won’t get merged and there is nothing you can do about it. There is no middle manager you can appeal to or HR department you can set on someone. Unless you have proven yourself, the code reviewers will treat you like an enemy. Anything you do will be scrutinised, analysed, dissected and even outright rejected. This inter-corporate firewall is good because it ensures that terrible code is not merged (sometimes poor code falls through the cracks, though, but such is life). On the other hand this sort of thing causes massive flame wars every now and then.

This does not work in corporate environments, though, for the reasons listed. One way to make it work is to have a master code reviewer who does not care about what other people might think. Someone who can summarily reject awful code without a lengthy “let’s see if we can make it better” discussion. Someone who, when the sales people come to him demanding something be done half-assed, can tell them to buzz off. Someone who does not care about hurting people’s feelings.

In other words, a psychopath.

Like most things in life, having a psychopath in charge of your code has some downsides. Most of them flow from the fact that psychopaths are usually not very nice to work with. Also, one of the things that is worse than not having code review is having a psychopath master code reviewer who is incompetent or otherwise deluded. Unfortunately most psychopaths are of the latter kind.

So there you have it: the path to high quality code is paved with psychopaths and sworn enemies.

Read more
Jussi Pakkanen

Threads are a bit like fetishes: some people can’t get enough of them and other people just can’t see what the point is. This leads to eternal battles between “we need the power” and “this is too complex”. These have a tendency to never end well.

One inescapable fact about multithreaded and asynchronous programming is that it is hard. A rough estimate says that a multithreaded solution is between ten and 1000 times harder to design, write, debug and maintain than a single-threaded one. Clearly, this should not be done without heavy-duty performance needs. But how much is that?

Let’s do an experiment to find out. Let’s create a simple C++ network echo server, the source code of which can be downloaded here. It can serve an arbitrary number of clients but uses only one thread to do so. The implementation uses a simple epoll loop over the open connections.

For our test we use 10 clients that do 10 000 queries each. To reduce the effects of network latency, the clients run on the same machine. The test hardware is a Nexus 4 running the latest Ubuntu phone.

The test finishes in 11 seconds, which means that a single-threaded server can serve roughly 10 000 requests a second using basic ARM hardware. It should be noted that because the clients run on the same machine, they are stealing CPU time from the server. The service rates would be higher if the server process got its own processor. They would also be higher if compiler optimizations had been enabled, but who needs those, anyway.

The end result of all this is that unless you need massive amounts of queries per second or your backend is incredibly slow, multithreading probably won’t do you much good and you’ll be much better off doing everything single-threaded. You’ll spend a lot less time in a debugger and will generally be happier as well.

Even if you do need these, multithreading might still not be the way to go. There are other ways of parallelization, such as using multiple processes, which provide additional memory safety and error tolerance as well. This is not to say threads are bad. They are a wonderful tool for many different use cases. You should just be aware that sometimes the best way to use threads is not to use them at all.

Actually, make that “most times”.

Read more
David Murphy (schwuk)

Today I was adding tox and Travis-CI support to a Django project, and I ran into a problem: our project doesn’t have a setup.py. Of course I could have added one, but since by convention we don’t package our Django projects (Django applications are a different story) – instead we use virtualenv and pip requirements files – I wanted to see if I could make tox work without changing our project.

Turns out it is quite easy: just add the following three directives to your tox.ini.

In your [tox] section tell tox not to run setup.py:

skipsdist = True

In your [testenv] section make tox install your requirements (see here for more details):

deps = -r{toxinidir}/dev-requirements.txt

Finally, also in your [testenv] section, tell tox how to run your tests:

commands = python manage.py test

Now you can run tox, and your tests should run!

For reference, here is the complete (albeit minimal) tox.ini file I used:

[tox]
envlist = py27
skipsdist = True

[testenv]
deps = -r{toxinidir}/dev-requirements.txt
setenv =
    PYTHONPATH = {toxinidir}:{toxinidir}
commands = python manage.py test

Read more
Jussi Pakkanen

People often wonder why even the simplest of things seem to take long to implement. Often this is accompanied by uttering the phrase made famous by Jeremy Clarkson: how hard can it be.

Well let’s find out. As an example let’s look into a very simple case of creating a shared library that grabs a screenshot from a video file. The problem description is simplicity itself: open the file with GStreamer, seek to a random location and grab the pixels from the buffer. All in all, ten lines of code; it should take a few hours to implement, including unit tests.

Right?

Well, no. The very first problem is selecting a proper screenshot location. It can’t be in the latter half of the video, for instance. The simple reason for this is that it may then contain spoilers, and the mere task of displaying the image might ruin the video file for viewers. So let’s instead select some suitable point, like 2/7ths of the way into the video clip.

But in order to do that you need to first determine the length of the clip. Fortunately GStreamer provides functionality for this. Less fortunately some codec/muxer/platform/whatever combinations do not implement it. So now we have the problem of trying to determine a proper clip location for a file whose duration we don’t know. In order to save time and effort let’s just grab the screen shot at ten seconds in these cases.

The question now becomes what happens if the clip is less than ten seconds long? Then GStreamer would (probably) seek to the end of the file and grab a screenshot there. Videos often end in black, so this might lead to black thumbnails every now and then. Come to think of it, that 2/7ths location might accidentally land on a fade so it might be all black, too. What we need is an image analyzer that detects whether the chosen frame is “interesting” or not.

This rabbit hole goes down quite deep so let’s not go there and instead focus on the other part of the problem.

There are mutually incompatible versions of GStreamer currently in use: 0.10 and 1.0. These two cannot be in the same process at the same time due to interesting technical issues. No matter which we pick, some client application might be using the other one. So we can’t actually link against GStreamer; instead we need to factor this functionality out to a separate executable. We also need to change the system’s global security profile so that every app is allowed to execute this binary.

Having all this functionality we can just fork/exec the binary and wait for it to finish, right?

In theory yes, but multimedia codecs are tricky beasts, especially hardware accelerated ones on mobile platforms. They have a tendency to freeze at any time. So we need to write functionality that spawns the process, monitors its progress and then kills it if it is not making progress.

A question we have not asked is how does the helper process provide its output to the library? The simple solution is to write the image to a file in the file system. But the question then becomes where should it go? Different applications have different security policies and can access different parts of the file system, so we need a system state parser for that. Or we can do something fancier such as creating a socket pair connection between the library and the client executable and have the client push the results through that. Which means that process spawning just got more complicated and you need to define the serialization protocol for this ad-hoc network transfer.

I could go on but I think the point has been made abundantly clear.

Read more