Canonical Voices

Tristram Oaten

Publishing Vanilla

We’ve got a new CSS framework at Canonical, named Vanilla. My colleague Ant has a great write-up introducing Vanilla. Essentially it’s a CSS microframework powered by Sass. The build process consists of two steps, an open source build, and a private build.

Open Source Build

While there are inevitably components that need to be kept private (keys, tokens, etc.), being Canonical we want to keep as much of the build in the open as possible, in addition to the code. We wanted the build to be as automated and as close to CI/CD principles as possible. Here’s what happens:

Committing to our GitHub repository kicks off a Travis build that runs our gulp tests, which include sass-lint. We also use david-dm.org to make sure our npm dependencies are up to date. All of these have nice badges we can link to right from our GitHub page, so the first thing people see is the health of our project. I really like this: it keeps us honest, and it informs the community.

Not everything can be done with Travis, however, as publishing Vanilla to npm and updating our project page and demo site require some private credentials. For the confidential build, we use Jenkins (formerly Hudson), a Java-based build management system.

Private Build with Jenkins

Our Jenkins build does a few things:

  1. Increment the package.json version number
  2. npm publish (package)
  3. Build Sass with npm install
  4. Upload css to our assets server
  5. Update Sassdoc
  6. Update demo site with new CSS

Robin put this functionality together in a neat bash script: publish.sh.

We use this script in a Jenkins build that we kick off with one of a few parameters (point, minor or major) to indicate how the version in package.json should be bumped. This gives our devs push-button releases on the fly, with the same build, from bugfixes all the way up to stable releases (1.0.0).
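The script itself isn’t reproduced here, but the overall flow looks roughly like the following sketch (illustrative only, not the actual contents of publish.sh):

#!/bin/bash
# Illustrative sketch; see publish.sh in the repository for the real steps.
set -e

BUMP=$1  # "patch" (point), "minor" or "major", passed in from the Jenkins job

npm version "$BUMP"   # increment the version number in package.json
npm install           # install dependencies and build the Sass
npm publish           # publish the new package version to npm
# ...then upload the compiled CSS to the assets server, regenerate the
# Sassdoc and update the demo site.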

After less than 30 seconds, our demo site, which showcases framework elements and their usage, is updated. This demo is styled with the latest version of Vanilla, and also serves as documentation and a test of the CSS. We take advantage of GitHub’s HTML publishing feature, GitHub Pages. Anyone can grab – or even hotlink – the files on our release page.

The Future

It’d be nice for the regression test (which we currently just eyeball) to be automated, perhaps with a visual diff tool such as PhantomCSS or a bespoke solution with Selenium.

Wrap-up

Vanilla is ready to hack on, go get it here and tell us what you think! (And yes, you can get it in colours other than Ubuntu Orange)

Robin Winslow


I recently tried to set up OpenID for one of our sites to support authentication with login.ubuntu.com, and it took me much longer than I’d anticipated because our site is behind a reverse proxy.

My problem

I was trying to set up OpenID with the django-openid-auth plugin. Normally our sites don’t include absolute links (https://example.com/hello-world) back to themselves, because relative URLs (/hello-world) work perfectly well, so normally Django doesn’t need to know the domain name that it’s hosted at.

However, when authenticating with OpenID, our website needs to send the user off to login.ubuntu.com with a callback URL so that once they’re successfully authenticated they can be directed back to our site. This means that django-openid-auth needs to ask Django for an absolute URL to send off to the authenticator (e.g. https://example.com/openid/complete).

The problem with proxies

In our setup, the Django app is served with a light Gunicorn server behind an Apache front-end which handles HTTPS negotiation:

User <-> Apache <-> Gunicorn (Django)

(There’s actually an additional HAProxy load-balancer in between, which I thought was complicating matters, but it turns out HAProxy was just passing through requests absolutely untouched and so was irrelevant to the problem.)

Apache was set up as a reverse proxy to Django, meaning that the user only ever talks to Apache, and Apache goes off to get the response from Django itself, using Django’s local network IP address – e.g. 10.0.0.3.

It turns out this is the problem. Because Apache, and not the user directly, is making the request to Django, Django sees the request come in at http://10.0.0.3/openid/login rather than https://example.com/openid/login. This meant that django-openid-auth was generating and sending the wrong callback URL of http://10.0.0.3/openid/complete to login.ubuntu.com.

How Django generates absolute URLs

django-openid-auth uses HttpRequest.build_absolute_uri which in turn uses HttpRequest.get_host to retrieve the domain. get_host then normally uses the HTTP_HOST header to generate the URL, or if it doesn’t exist, it uses the request URL (e.g.: http://10.0.0.3/openid/login).

However, after inspecting the code for get_host I discovered that if and only if settings.USE_X_FORWARDED_HOST is True then Django will look for the X-Forwarded-Host header first to generate this URL. This is the key to the solution.
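As a rough illustration of the mechanism (the view below is hypothetical and heavily simplified; the real redirect is built inside django-openid-auth):

from django.http import HttpResponseRedirect

def begin_openid_login(request):
    # build_absolute_uri() delegates to HttpRequest.get_host(). Without
    # USE_X_FORWARDED_HOST, get_host() only sees the Host of Apache's
    # internal request, so behind the proxy this returns
    # "http://10.0.0.3/openid/complete" rather than
    # "https://example.com/openid/complete".
    return_to = request.build_absolute_uri('/openid/complete')
    return HttpResponseRedirect(
        'https://login.ubuntu.com/?openid.return_to=' + return_to)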

Solving the problem – Apache

In our Apache config, we were initially using mod_rewrite to forward requests to Django.

RewriteEngine On
RewriteRule ^/?(.*)$ http://10.0.0.3/$1 [P,L]

However, when proxying with this method Apache2 doesn’t send the X-Forwarded-Host header that we need. So we changed it to use mod_proxy:

ProxyPass / http://10.0.0.3/
ProxyPassReverse / http://10.0.0.3/

This then means that Apache will send three headers to Django: X-Forwarded-For, X-Forwarded-Host and X-Forwarded-Server, which will contain the information for the original request.

In our case the Apache frontend used HTTPS, whereas Django was only speaking plain HTTP, so we had to pass that through as well by manually setting Apache to send an X-Forwarded-Proto header to Django. Our eventual config changes looked like this:

<VirtualHost *:443>
    ...
    RequestHeader set X-Forwarded-Proto 'https' env=HTTPS

    ProxyPass / http://10.0.0.3/
    ProxyPassReverse / http://10.0.0.3/
    ...
</VirtualHost>

This meant that Apache now passes through all the information Django needs to properly build absolute URLs; we just needed to make Django parse it properly.

Solving the problem – Django

By default, Django ignores all X-Forwarded headers. As mentioned earlier, you can set get_host to read the X-Forwarded-Host header by setting USE_X_FORWARDED_HOST = True, but we also needed one more setting to get HTTPS to work. These are the settings we added to our Django settings.py:

# Setup support for proxy headers
USE_X_FORWARDED_HOST = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')

After changing all these settings, we now have Apache passing all the relevant information (X-Forwarded-Host, X-Forwarded-Proto) so that Django is now able to successfully generate absolute URLs, and django-openid-auth now works a charm.

Robin Winslow

We recently introduced Vanilla framework, a light-weight styling framework which is intended to replace the old Guidelines framework as the basis for our Ubuntu and Canonical branded sites and others.

One of the reasons we created Vanilla was because we ran into significant problems trying to use Guidelines across multiple different sites because of the way it was made. In this article I’m going to explain how we structured Vanilla to hopefully overcome these problems.

You may wish to skip the rationale and go straight to “Overall structure” or “How to use the framework”.

Who’s it for?

We in Canonical’s design team will definitely be using Vanilla, and we also hope that other teams within Canonical can start to use it (as they did with Guidelines before it).

But most importantly, it would be fantastic if Vanilla offers a solid enough styling basis that members of the wider community feel comfortable using it as well. Guidelines was never really safe for the community at large to use with confidence.

This is why we’ve made an effort to structure Vanilla in such a way that any or all of it can be used with confidence by anyone.

Limitations of Guidelines

Guidelines was initially intended to solve exactly one problem – to be a single resource containing all the styling for ubuntu.com. This would mean that we could update Guidelines whenever we needed to update ubuntu.com’s styling, and those changes would propagate across all our other Ubuntu-branded sites (e.g.: cn.ubuntu.com or developer.ubuntu.com).

So we simply structured the markup of these sites in the same way, and then created a single hosted CSS file, and linked to it from all the sites that needed Ubuntu styling.

As time went on, two large problems with this solution emerged:

  • As over 10 sites were linking to the same CSS file, updating that file became very cumbersome, as we’d have to test the changes on every site first.
  • As the different sites became more individual over time, we found we were having to override the base stylesheet more and more, leading to overly complex and confusing local styling.

This second problem was only exacerbated when we started using Guidelines as the basis for Canonical-branded sites (e.g.: canonical.com) as well, which had a significantly different look.

Architecture goals for Vanilla

Learning from our experiences with Guidelines, we planned to solve a few specific problems with Vanilla:

  • Website projects could include only the CSS code they actually needed, so they don’t have to override lots of unnecessary CSS.
  • We could release new changes to the framework without worrying about breaking existing sites, allowing us to iterate quickly.
  • Other projects could still easily copy the styles we use on our sites with minimal work

To solve these problems, we decided on the following goals:

  • Create a basic framework (Vanilla) which only contains the common elements shared across all our sites.

    • This framework should be written in a modular way, so it’s easy to include only the parts you need
  • Extend the basic framework in “theme” projects (e.g. ubuntu-vanilla-theme) which will apply specific styling (colours etc.) for that specific brand.

    • These themes should also only contain code which needs to be shared. Site-specific styling should be kept local to the project
  • Still provide hosted compiled CSS for sites to hotlink to if they like, but force them to link to a specific version (e.g. vanilla-framework-version-0.0.15.css) rather than “latest” so that we can release a new version without worry.

Sass modularisation

This modular structure would be impossible in pure CSS. CSS itself offers no mechanism for encapsulation. Fortunately, our team has been using Sass to write our CSS for a while now, and Sass offers some important mechanisms that help us modularise our code. So what we decided to create is actually a Sass mixin library (like Bourbon for example) using the following mechanisms:

Default variables

Setting global variables is essential for the framework, so we can keep consistent settings (e.g. font colours, padding etc.). Variables can also be declared with the !default flag. This allows the framework’s settings to be overridden when extending the framework:
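The embedded code example from the original post isn’t reproduced here; the pattern looks roughly like this ($brand-colour and $orange appear in Vanilla’s global variables, and the override value is purely illustrative):

// In the framework's global settings:
$brand-colour: $orange !default;

// In a theme, declared before the framework's settings are imported.
// Because the framework's value is only a default, the theme's value wins:
$brand-colour: #990000;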

We’ve used this pattern in each of the Vanilla themes we’ve created.

Separating concerns into separate files

Sass’s @import feature allows us to encapsulate our code into files. This not only keeps our code tidier, but it means that anyone hoping to include some parts of our framework can choose which files they want:
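The original embedded example is missing from this copy; a minimal sketch (the import paths are illustrative) would be:

// Pull in only the modules your project needs:
@import "vanilla-framework/scss/grid";
@import "vanilla-framework/scss/buttons";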

Keeping everything in a mixin

When a Sass file is imported any loose CSS is compiled directly to the output. But anything declared inside a @mixin will not be output unless you call the mixin.

Therefore, we set a goal of ensuring that all parts of our library can be imported without any CSS being output, so that you can import the whole module but just choose what you want output into your compiled CSS:
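A minimal sketch of that pattern (the import path is illustrative; vf-grid and vf-buttons are two of the framework’s modules):

// Importing the build file outputs no CSS on its own...
@import "vanilla-framework/scss/build";

// ...only the mixins you include end up in your compiled CSS:
@include vf-grid;
@include vf-buttons;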

Namespacing

To avoid conflicts with any local sass setup, we decided to namespace all our mixins with the vf- prefix – e.g. vf-grid or vf-header.

Overall structure

Using the aforementioned techniques, we created one base framework, Vanilla Framework, which contains (at the time of writing) 19 separate “modules” (vf-buttons, vf-grid etc.). You can see the latest release of the framework on the project’s homepage, and see the framework in action on the demo page.

The framework can be customised by overriding any of the global settings inside your local Sass, as described above.

We then extended this basic framework with three branded themes (Ubuntu, Canonical and Cloud) which we will use across our sites.

You can of course create your own themes by extending the framework in the same way.

NPM modules

To make it easy to include Vanilla Framework in our projects, we needed to pick a package manager to use for installing it and tracking versions. We experimented with Bower, but in the end we decided to use the Node package manager. So now anyone can install and use any of the following packages:
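The original list of package links isn’t reproduced here; installing them looks like this (the theme package name is assumed to match its project name):

npm install vanilla-framework --save
npm install ubuntu-vanilla-theme --save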

Hotlinks for compiled CSS

Although for in-depth usage of our framework we recommend that you install and extend it locally, we also provide hosted compiled CSS files, both minified and unminified, for the Vanilla framework itself and all Vanilla themes, which you can hotlink to if you like.

To find the links to the latest compiled CSS files, please visit the project homepage.

How to use the framework

The simplest way to use the framework is to hotlink to it. To do this, simply link to the latest version (minified or unminified) directly in your HTML:
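The example markup from the original post is missing in this copy; hotlinking is just a stylesheet link to a specific versioned file (the URL below is a placeholder, so grab the real links from the project homepage):

<link rel="stylesheet" href="https://example.com/path-to-hosted-css/vanilla-framework-version-0.0.15.min.css" />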

However, if you want to take full advantage of the framework’s modular nature, you’ll probably want to install it directly in your project.

To do this, add the latest version of vanilla-framework to your project’s package.json as follows:
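For example (the version number shown is illustrative; use the latest release):

{
  "dependencies": {
    "vanilla-framework": "0.0.15"
  }
}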

Then, after you’ve npm installed, include the framework from the node_modules folder:
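A sketch of the import (the path is relative to your main Sass file):

@import "../node_modules/vanilla-framework/build/scss/build";
@include vanilla;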

The future

We will continue to develop Vanilla Framework, with version 0.1.0 just around the corner. You can track our progress over on the project homepage and on Github.

In the near future we’ll switch over ubuntu.com and canonical.com to using it, and when we do we’ll definitely blog about it.

Richard McCartney

Converting the old Guidelines to Vanilla

How the previous guidelines worked

Guidelines is essentially a framework built by the Canonical web design team. The framework has an array of tools to make it easy to create Ubuntu-themed sites. Guidelines was a collaboration between developers and designers and followed a consistent look, which meant in-house teams and community websites could have a consistent brand feel.

It worked in one way: a large framework of modules, helpers and components which built the Ubuntu style for all our sites. This structure required a lot of overrides and workarounds for different projects, and added to the bloat that Guidelines had accumulated. Canonical and cloud sites required a large set of overrides to imprint their own visual requirements, which created a lot of duplication and overhead for each site.

There was no build system, and no way to update to the latest version other than using the hosted pre-compiled guidelines or pulling from our Bazaar repository. Not having any form of build step meant relying on a local Sass compiler or setting up a watcher for each project. We also had no viable way to check for linting errors or to establish a concrete coding standard.

The framework itself was a CSS framework ported into Sass, which didn’t utilise placeholders or mixins correctly and carried a bloated number of variables. Changing one colour, for example, or the size of an element, wasn’t as easy as passing set values to a mixin or changing a single variable.

Unlike Vanilla, where all preprocessor styles are now created via mixins, responsive changes in Guidelines were made in a large media query at the end of each document, and this again was repeated for our Canonical and Cloud styles.

Removing Ubuntu and Canonical from theme

Our first task in building Vanilla was cataloguing all elements which were ‘Ubuntu’ centric: anything which had a unique class, colour or style. Once these were identified, the team systematically took each section of Guidelines, removed the classes or variables and created new versions. Once this stage was complete, the team was able to look at refactoring and updating the code.

Clean-up and making it generic

We decided when starting this project to update how we write any new module or element. Linting was a big factor, and by using a build system like gulp we finally had the ability to adhere to a coding standard. This meant a lot of modules and elements had to be rewritten and improved upon: trimming down the Sass nesting, applying new techniques such as flexbox, and cleaning up duplicated styles.

But the main goal was to make it generic, extendable and easy. Not the simplest of tasks: it meant removing any custom modules or specific styles and classes, but also building the framework so it could be changed via a variable update or a value change within a mixin. We wanted a Vanilla theme to inherit another developer’s styles and have them cascade throughout the whole framework with ease. Setting the brand colour, for example, affects the whole framework and changes multiple modules and elements, but you are not restricted in the way the old Guidelines bottlenecked us.

Using Sass mixins

Mixins are a powerful part of Sass which we weren’t utilising. In Guidelines they were mostly used to create preprocessor polyfills, something which was annoying; gulp now removes that need. In Vanilla we use mixins to modularise the entire framework, giving flexibility over which parts of the framework a project requires.

The ability to easily turn a section of Vanilla on or off felt very powerful, and it was required: we wanted a developer to choose exactly what was needed for their project. This was the opposite of Guidelines, where you would receive the entire framework. In Vanilla, each of our elements and modules is encapsulated within a mixin, and some take values which affect them. For example, the buttons mixin:

@mixin vf-button($button-color, $button-bg, $border-color) {
  @extend %button-pattern;
  color: $button-color;
  background: $button-bg;
    
  @if $border-color != null {
    border: 1px solid $border-color;
  }
    
  &:hover {
    background: darken($button-bg, 6.2%);
      
    @if $button-bg == $transparent {
      text-decoration: underline;
    }
  }
}



The above code shows how this mixin isn’t attached to fixed styles or colours. When building a new Vanilla theme, a few variable changes will style any button to the project’s requirements. This is something we have replicated throughout the project, and it creates a far more modular framework.

Creating new themes

As I mentioned earlier, a few changes can set up a whole new theme in Vanilla, using it as a base and then adding or extending new styles. Changing the branding or a font family just requires overriding the default value: e.g. $brand-colour: $orange !default; is set in the global variables document. Amending this in another document and setting it to $brand-colour: #990000; will change any element affected by the brand colour, thus creating the beginning of a new theme.
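Put together, a new theme’s main stylesheet could look something like this sketch (the import path is illustrative):

// Override defaults before the framework's variables are imported:
$brand-colour: #990000;

@import "../node_modules/vanilla-framework/build/scss/build";
@include vanilla;  // every module that uses $brand-colour picks up the new value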

We can also take this approach per module mixin, including a module into a new class or element and then extending or adding to it. This means themes are not constricted to just using what is there, which gives more freedom. This method is particularly useful for the web team as we build themes for Ubuntu, Canonical and Cloud products.

An example of a live theme we have created is the Ubuntu Vanilla theme. This is an extension of the Vanilla framework and is set up to override any required variables to give it the Ubuntu brand. Diving into theme.scss shows all the elements used from Vanilla, but also Ubuntu-specific modules. These are used exclusively for the Ubuntu brand but are structured in the same manner as the Vanilla framework. This reduces the complexity of maintaining these themes, and developers can easily pick up what has been built or use it as a reference for building their own themes.

Steph Wilson

We have given our monochromatic icons a small facelift to make them more elegant, lighter and consistent across the platform by incorporating our Suru language and font style.

The rationale behind the new designs is similar to that of our old guidelines, where we have kept to our recurring font patterns but made them more streamlined and legible with lighter strokes, negative spaces, and minimal solid shapes.

What we have changed:

  • Reduced and standardized the stroke widths from 6 or 8 pixels to 4.
  • Fewer solid shapes and more outlines.
  • The curvature radius of rectangles and squares has been slightly reduced (e.g. the message icon) to make them less ‘clumsy’.
  • A few outlines are ‘broken’ (e.g. bookmark, slideshow, contact, copy, paste, delete) to give them more personality. This negative space can also represent a cast shadow.

Less solid shapes

(Before/after icon comparison)

Lighter strokes

(Before/after icon comparison)

Negative spaces

(Before/after icon comparison)

Font patterns

Oblique lines are slightly curved.

Arcs are not perfectly rounded but rather curved.

Uppercase letters use right or sharp angles.

Vertical lines have oblique upper terminations.

Nice soft curves.

Action

(Action icon set)

Devices

(Device icon set)

Indicators

(Indicator icon set)

Weather

(Weather icon set)

Peter Mahnke

Ubuntu is a big Open Source project and there are a lot of websites in our community. The web team at Canonical literally doesn’t even know how many sites there are. We have heard there are over 200 ubuntu.com subdomains alone, but we know that there are many more that are owned by local groups and teams outside that single ubuntu.com domain.

Traditionally most of our work has been on www.ubuntu.com and www.canonical.com, but over the years, we have designed, often built and occasionally are responsible for the content of a series of key sites like: insights.ubuntu.com, design.ubuntu.com, developer.ubuntu.com, design.canonical.com. And we have often attempted to provide on-brand versions of wiki and WordPress templates.

As the number of sites grew, we got tired of re-creating grids, templates, CSS all the time.

Enter guidelines

To resolve these issues, we created the Ubuntu web guidelines. Instead of sites of cobbled-together CSS and a borrowed grid, guidelines gave us something far more formalised and systematic: a grid, typography, core styles and patterns, all following our beautiful Ubuntu brand guidelines. We were not only able to maintain a whole set of sites from a single hosted set of CSS files, but others could borrow and use it easily. We even transitioned the guidelines to be responsive without breaking our sites. You can read more in our series of posts, Making ubuntu.com responsive.

Exit guidelines

Around two years ago, the web team started supporting the design and development of some of Canonical’s cloud apps, including Juju, MAAS, and the Canonical OpenStack Autopilot installer. These apps have a different look and feel from ubuntu.com, and they often have special requirements; for example, MAAS is likely to be run in data centres without internet access to things like hosted fonts, images or CSS, which the guidelines did not natively support.

We looked at how best to adapt the guidelines to work with these web apps. We looked at how we were already making www.canonical.com work, essentially by overriding the Ubuntu-branded guidelines, and decided to change the entire approach.

Enter Vanilla

For Vanilla, we wanted to start over, but not have to rewrite everything. So our quick list of project goals was:

  • Minimise the changes to our existing html
  • Create a core theme that distilled the guidelines to its basic Ubuntu-ness
  • Make everything more modular, easy to add or remove components
  • Make it easy for anyone to create themes for each new project that could borrow from other themes
  • Create themes for the Ubuntu and Canonical websites
  • Remove our reliance on JavaScript
  • Make it work stand-alone
  • Make it easy to build, develop and update
  • Invite other people both inside and outside Canonical to start using the framework

The future

So now we are close to releasing the first version of Vanilla. Canonical.com and ubuntu.com will be moved over the coming months. Then we will look at moving other projects, like MAAS, jujucharms.com and Landscape, to the framework.

Please keep reading these posts, you can see Ant’s first post, Introducing Vanilla. And take a look at the project on GitHub and let us know what you think.

Anthony Dillon

Why we needed a new framework

Some time ago the web team at Canonical developed a CSS framework that we called ‘Guidelines’. Guidelines helped us to maintain our online visual language across all our sites, and comprised a number of base and component Sass files which were combined and served as a monolithic CSS file on our asset server.

We began to use Guidelines as the baseline styles for a number of our sites; www.ubuntu.com, www.canonical.com, etc.

This worked well until we needed to update a component or base style. With each edit we had to check it wasn’t going to break any of the sites we knew used it, and hope it didn’t break the sites we were not aware of.

Another deciding factor for us was the feedback we started receiving as internal teams adopted Guidelines. We received a resounding request to break the components into modular parts, so teams could customise which ones they include. Another request we heard a lot was the ability to pull the Sass files locally for offline development while keeping the styling up to date.

Therefore, we set out to develop a new and improved build and delivery system, which led us to develop a whole new architecture and completely refactor the Sass infrastructure.

This gave birth to Vanilla, our new and improved CSS framework.

Building Vanilla

The first decision we made was to remove the “latest” version target, so sites could no longer directly link to the bleeding-edge version of the styles. Instead, sites should target a specific version of Vanilla and manually upgrade as new versions are released. This helps twofold: firstly, shifting testing and QA to the maintainers of each particular site allows for staggered updates rather than a sweeping update to all sites at once; secondly, it allows us to modify current modules without affecting sites until they apply the update.

We knew that we needed to make the update process as easy as possible to help other teams keep their styles up to date. We decided against using Bower as our package manager and chose NPM to reduce the number of dependencies required to use Vanilla.

We knew we needed a build system and, as it was a greenfield project, the world was our oyster. Really it came down to Gulp vs Grunt. We had a quick discussion and decided to run with Gulp as we had more experience with it. Gulp had all the plugins we required and we all preferred the Gulp syntax instead of the Grunt spaghetti.

We had a number of JavaScript functions in Guidelines to add simple dynamic functionality to our sites, such as equal heights or tabbed content. The team decided we wanted to try to remove the JS dependency for Vanilla and make it a pure CSS framework. So we stepped through each function and worked out, most importantly, whether we required it at all. If so, we tried to develop a CSS replacement with an acceptable degradation for less modern browsers. We managed to cover all the required functions with CSS and removed some older functionality we did not want any more.

Using Vanilla

Importing Vanilla

To start using Vanilla, simply run $ npm install vanilla-framework --save in the root of your site. Then in your main stylesheet simply add:


@import "../path/to/node_modules/vanilla-framework/build/scss/build.scss";
@include vanilla;

The first line in the code above imports the main build file of vanilla-framework. The second line then includes it, as the framework is entirely controlled with mixins, which will be explained in a future post.

Now that you have Vanilla imported correctly, you should see some default styling applied to your site. To take full advantage of the framework, a small number of markup changes are required.

Markup amendments

There are a number of classes used by Vanilla to set up the site wrappers. Please refer to the source for our demo site.
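As a purely hypothetical sketch (the real class names should be taken from the demo site’s source):

<!-- hypothetical wrapper structure; check the demo site source for the real classes -->
<div class="wrapper">
  <div class="inner-wrapper">
    <!-- page content -->
  </div>
</div>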


Conclusion

This is still a work-in-progress project, but we are close to releasing www.ubuntu.com and www.canonical.com based on Vanilla. Please do use Vanilla; any feedback would be very much appreciated.

For more information please visit the Vanilla project page.

Pierre Bertet


Following the article “To converge onto mobile, tablet, and desktop, think Grid Units”, here is a technical description of the way the Grid System behaves. We will go through the following concepts: the Grid Unit, the Layout, the Panel, and the Multi-Column Layout.

Grid Unit

A Grid Unit (GU) is a virtual subdivision of screen space. The actual size, in pixels, of one Grid Unit is assigned by the OS depending on the device’s screen size and density, freeing the developer from worrying about these device-specific details. For more description of the system and its benefits, please see this design blog posting.

Note: there are only three target short-side widths in the grid system: 40, 50, and 90 GU. A Grid Unit cannot contain a fractional number of pixels, so if the screen width cannot be divided exactly by the desired number of Grid Units (40, 50, or 90), the remainder becomes the side margins.

Grid Unit Calculation

The width of a single Grid Unit is calculated as follows:

  • The width of the short edge of the screen is divided by the desired number of grid units (integer division).
  • The remainder, if any, gives us the size of the margins.
  • The quotient gives us the size of one Grid Unit.

In pseudocode:

margins = total_width mod layout_grid_units
grid_width = total_width - margins
grid_unit_width = grid_width / layout_grid_units

Example with a 540×960 screen and a 50 GU Layout

540px (total portrait width) = 20px margin + 500px (50 GU) + 20px margin

margins = 540 mod 50 = 40
grid_width = 540 - margins = 500
grid_unit = grid_width / 50 = 10

Example with a 1600×2560 screen and a 90 GU Layout

1600px (total portrait width) = 35px margin + 1530px (90 GU) + 35px margin

margins = 1600 mod 90 = 70
grid_width = 1600 - margins = 1530
grid_unit = grid_width / 90 = 17

Layout

A Layout represents the desired number of Grid Units for the short edge of the screen. That number will be used to calculate the width of a single Grid Unit in pixels, using the method described in the Grid Units section. For touch devices, the available layouts are 40 GU, 50 GU (phones or phablets), and 90 GU (tablets).

Landscape Grid Units Count Calculation

The number of Grid Units in Landscape Orientation is calculated as follows:

  • The width of the long edge of the screen is divided by the width of a single grid unit (integer division).
  • The remainder, if any, gives us the size of the margins.
  • The quotient gives us the number of Grid Units in the Landscape Orientation.

In pseudocode:

margins = total_width mod grid_unit_width
grid_width = total_width - margins
grid_unit_count = grid_width / grid_unit_width

Example with a 540×960 screen, 50 GU Layout and 1 GU = 10px

960px (total landscape width) = 96 GU (no margins)

margins = 960 mod 10 = 0
grid_width = 960 - margins = 960
grid_unit_count = grid_width / 10 = 96

Panel

A Panel is a group of Grid Units. Its width can be any of the Layout sizes (provided it fits within the total number of Grid Units), or variable for the remaining part.

Examples

90 GU Layout

90 GU (portrait orientation) = 40 GU Panel + 50 GU Panel

147 GU Layout

147 GU (landscape orientation) = 40 GU Panel + 50 GU Panel + 57 GU Panel (variable)

Try more combinations using the Grid System Tool.

Multi-Column Layout

A Multi-Column Layout is a set of columns that can be defined inside of a panel. It contains the following properties:

  • Side margins (before the first column and after the last column)
  • Gutters (between two columns)
  • Columns

It can use from one to six columns. In 40, 50 and 90 GU Panels, the Multi-Column Layouts have been manually selected. For other widths, an algorithm tries to find the best candidate.

The margins and gutters tend to have a 2 GU width, but it can vary depending on the available possibilities.

Examples

3 Columns in a 50 GU Panel

50 GU = 2 GU margin + 14 GU column + 2 GU gutter + 14 GU column + 2 GU gutter + 14 GU column + 2 GU margin

3 Columns in a 60 GU Panel (variable)

60 GU = 2 GU margin + 18 GU column + 1 GU gutter + 18 GU column + 1 GU gutter + 18 GU column + 2 GU margin

Try more combinations using the Grid System Tool.

Steph Wilson

Last week the SDK team gathered in London for a sprint that focused on convergence, which consisted of pulling apart each component and discussing ways in which each would adapt to different form factors.

The SDK provides off-the-shelf UI components that make up our Ubuntu apps; however, now that we’re entering the world of Unity 8 convergence, some tweaking is needed to help them function and look visually pleasing on different screen sizes, such as desktop, tablet and other larger screens.


To help with converging your app, the Design Team have created a set of predefined grid layout screen targets: 40, 50 and 90 GU (grid units), which makes it a lot easier to visualise where to place components on different screen sizes.

Scheduled across the week were various sessions focusing on different components from the SDK such as list items, date and time pickers; together with patterns like the Bottom Edge and PageStack. Each session gathered developers, visual and UX designers, where they ran through how a component might look (visual), the usability (UX) and how it will be implemented (developer) on different form factors.

Here’s the mess they made…


Main topics covered:

– Multi-column layouts, panel behaviors and pagestack

– Header, Bottom Edge and edit mode

– Focus handling

– List item layouts

– Date and time pickers

– Drop-down menus

– Scrollbars

– Application menu

– Tooltips

 

Here are some of the highlights:

 

  • Experiments and explorations were discussed around how the Bottom Edge will look in a multi-column view, and how the content will appear when it is revealed in the Bottom Edge view. Also, design animations were explored around the ‘Hint’ and how they will appear on each panel in a multi-column layout.
  • Explorations on how each panel will behave, look and breakpoints of implementing on different grid units (40,50,90).
  • A lot of discussion was had around the Header; looking at how it will transform from a phone  layout to a multi-column view in a tablet or desktop. Currently the header holds up to four actions placed on the right, a title, and navigational functions on the left, with a separate header section underneath that acts as a navigation to different views within the app. The Design Team had created wireframes that explored how many headers would appear in a multi-column layout, together with how the actions and header section would fit in.
  • Different list item layouts were explored, looking at how many actions, titles and summaries can be placed in different scenarios. Together with a potentially new context/popover menu to accompany the leading, trailing and default options.
  • The Design Team experimented with a new animation that happens during a focused state on the desktop.
  • The new system exposes all the features of a component, so developers are able to customise and style it more conveniently.

Overall the convergence sprint was a success, with both the SDK and Design Team working in unison to reach decisions and listing priorities for the coming months. Each agreed that this method of working was very beneficial, as it brought together the designers and developers to really focus on the user and developer needs.

 

They enjoyed some downtime too…

Arrival dinner at Byron Burgers

Out in Soho

Wine tasting in the office (not a regular occurrence)

 

Benjamin Keyser

In the converged world of Unity-8, applications will work on small mobile screens, tablets and desktop monitors (with a mouse and keyboard attached) as if by magic. To achieve this transformation for your own app with little to no extra work required when considering the UI, simply design using grid units for a few predetermined virtual screen targets. Combined with Ubuntu off-the-shelf UI components built with convergence in mind, most of the hard work is done, freeing developers and designers to focus on what’s most important to their users.

What’s a grid unit? And why 40, 50, or 90 of them?

A grid unit (GU) is a virtual measure of screen space that’s independent of device hardware details like pixels or aspect ratio: those complexities are mapped under the covers by Ubuntu. Instead, by targeting just three ‘fixed’ virtual GU portrait widths—40, 50, and 90 GU— you’re guaranteed to be addressing the largest number of devices, including the desktop, to a high degree of design quality and consistency where relative spacing and content sizing just works.

The 40, 50, and 90 GU dimensions correspond to smaller smartphones, larger smartphones/phablets, and tablets respectively in portrait mode. These particular panel-widths weren’t chosen arbitrarily: they were selected by analyzing the most popular device specs on the market and picking the portrait dimensions that would embrace the largest number of possibilities most successfully, including for the desktop (more on that later).

For example, compact phones such as the BQ Aquarius E4.5 are best suited to the 40 GU-wide virtual portrait screen, offering the right balance of content to screen real estate for palm-sized viewing. For larger phones with more screen space such as the Meizu MX4, the 50 GU layout is most fitting, allowing more room for content. Finally, for edge-to-edge tablet portrait layouts for the N7 or N10, the 90 GU layout works best.

Try this exercise

Having trouble envisioning the system in action? Close your eyes and imagine a two-dimensional graph paper divided into squares that can adapt according to just three simple rules:

  • It can only be 40, 50, or 90 whole units along the short edge but the long edge can be variable
  • The long edge (in landscape mode or on the desktop) will be the whole number of GUs that carves out the maximum area rectangle that will fit within any given device’s physical screen in landscape mode based on the physical dimension of the GU determined from portrait mode (in pixels)
  • The last rule is simple but key: the squares of the graph paper must always be square—the graph paper, just to push the image a bit too far—is made of something more like graphene than polypropylene (no squeezed or stretched GUs allowed.)

Try it for yourself here: https://dl.dropboxusercontent.com/u/360991/canonical/grid-units/grid-units.html

There is one additional factor that can impact the final available screen area, but it’s a bit of a technical convolution. The under-the-covers pixels to grid unit mapping can’t include fractional pixels (this may seem like an obvious point, admittedly). But at the end of the day, the user sees the largest possible version of the 40, 50, or 90 GU wide virtual screen that’s possible on any given device. That means that all you have to do as a designer or developer is plan for the virtual dimensions we’ve been talking about, and you’re assured your user is getting the best possible rendering.

Though the system may seem abstract at first, its benefits are all too easy to understand from a developer or designer standpoint: it’s far more predictable and simpler to design for layouts that follow rules than to try to account for a universe of idiosyncratic device possibilities. In addition, by using these layouts as the foundation, the convergence goal is much more easily achieved.

What about landscape & desktop? Use building blocks

By assembling these key portrait views together, it’s far easier to achieve landscape and desktop layouts than ever before. For example, if your app lends itself to a two panel layout, simply join together 40 and 50 GU phone layouts (that you’ve already designed) to achieve a landscape layout (or even a portrait tablet layout!)

Similarly, switching from portrait to landscape mode on tablet—also a desktop-friendly layout—could be as simple as joining a 40 GU layout and a 90 GU layout for a total of 130 GU, which fits nicely within both 16:9 and 16:10 tablet landscape screens as well as on any desktop monitor.

Since landscape and desktop layouts are the least predictable due to device variations and manual stretching by users, you can designate one of your panel layouts to be of flexible width, filling the available space using one of these strategies:

  • Center the layout in the available space
  • Stretch or squeeze the layout to fit the available space
  • Combine these two, depending on the individual components within the layout

More complex layouts can also be achieved by joining three or more portrait layouts, too. For example, three 40 GU layouts can be joined side by side, which happen to fit perfectly into a 4:3 landscape tablet screen.

Columns, too

To help developers even further with one of the most common layouts—columnar or grid types—we’re adding a capability that maintains column-to-content size relationships across devices and the desktop the same way that type sizes are specified. This makes it very simple to achieve the proper content readability and density regardless of the device. For example, by specifying a “medium” sized column filled with “small” type, these relative relationships can be preserved throughout the converged-device experience without having to manually dig into pixel measurements.

The column capability can also adapt responsively to extra wide, variable landscape layouts, such as 16:10 aspect ratio tablets or manually stretched desktop layouts. This means that as more space becomes available as a user stretches the corners of the app window on the desktop, additional columns can be added on cue, providing more room for content.

Putting it all together across all form factors

By making screen dimensions virtual, we can minimize the vagaries of individual hardware specs that can frustrate device-convergent thinking and help developers focus more on their user’s needs. A combination of snap-together layouts, automated column layouts, and adaptive UI toolkit components like the header, list component, and bottom edge component help ensure users will experience a consistent, elegant journey from mobile to desktop and back again.

 

 

 

Robin Winslow

Despite some reservations, it looks like HTTP/2 is very definitely the future of the Internet.

Speed improvements

HTTP/2 may not be the perfect standard, but it will bring with it many long-awaited speed improvements to internet communication:

  • Sending of many different resources in the first response
  • Multiplexing requests to prevent blocking
  • Header compression
  • Keep connections alive
  • Bi-directional communication

Changes in long-held performance practices

I read a very informative post today (via Web Operations Weekly) which laid out all the ways this will change some deeply embedded performance principles for front-end developers: practices like concatenating CSS, JavaScript and image files, and sharding assets across domains.

Each of these practices is a hack which makes website setups more complex and more opaque, with the goal of speeding up front-end performance by working around limitations in HTTP. Fortunately, these somewhat ugly practices are no longer necessary with HTTP/2.

Importantly, Matt Wilcox points out that in an HTTP/2 world, these practices might actually slow down your website, for the following reasons:

  • If you serve concatenated CSS, Javascript or image files, it’s likely you’re sending more content than you strictly need to for each page
  • Serving assets from different domains prevents HTTP/2 from reusing existing connections, forcing it to open extra ones

But not yet…

This is all very exciting, but note that we can’t and shouldn’t start changing our practices yet. Even server-side support for HTTP/2 is still patchy, with nginx only promising full support by the end of 2015 (with Microsoft’s IIS, surprisingly, putting other servers to shame).

But of course the main limiting factor will, as usual, be browsers:

  • Firefox leads the way, with support since version 36
  • Chrome has support for spdy4 (the precursor to HTTP/2), but it isn’t enabled by default yet
  • Internet Explorer 11 supports HTTP/2 only in Windows 10 beta

As usual, the main limiting factor will be waiting for the market share of older versions of Internet Explorer to drop off. Braver organisations may want to be progressive by deliberately accepting a slower experience for people on older browsers in exchange for speeding it up for those on up-to-date ones, and hence push adoption of good technology.

If you want to get really clever, you could serve a different website structure based on the user agent string, but this would really be a pain to implement and I doubt many people would want to do this.

Even with the most progressive strategy, I doubt anyone will be brave enough to drop decent HTTP/1 performance until at least 2016, as this is when nginx support should land; Windows 10 and therefore IE 11 will have had some time to gain traction and of course Internet Explorer market share in general will have continued to drop in favour of Chrome and Firefox.

TL;DR: We front-end developers should be ready to change our ways, but we don’t need to worry about it just yet.

Originally posted on robinwinslow.co.uk.

Jouni Helminen

Ubuntu community devs Andrew Hayzen and Victor Thompson chat with lead designer Jouni Helminen. Andrew and Victor have been working in open source projects for a couple of years and have done a great job on the Music application that is now rolling out on phone, tablet and desktop. In this chat they are sharing their thoughts on open source, QML, app development, and tips on how to get started contributing and developing apps.

If you want to start writing apps for Ubuntu, it’s easy. Check out http://developer.ubuntu.com, get involved on Google+ Ubuntu App Dev – https://plus.google.com/communities/1… – or contact alan.pope@canonical.com – you are in good hands!

Check out the video interview here :)

Robin Winslow

In the design team we keep some projects in Launchpad (as canonical-webmonkeys), and some projects in GitHub (as UbuntuDesign), meaning we work in both Bazaar and Git.

The need to synchronise Github to Launchpad

Some of our Github projects need to be also stored in Launchpad, as some of our systems only have access to Launchpad repositories.

Initially we were converting these projects manually at regular intervals, but this quickly became too cumbersome.

The Bazaar synchroniser

To manage this we created a simple web-service project to synchronise Git projects to Bazaar. This script basically automates the techniques described in our previous article to pull down the Github repository, convert it to Bazaar and push it up to Launchpad at a specified location.

It’s a simple Python WSGI app which can be run directly or through a server that understands WSGI like gunicorn.

Setting up the server

Here’s a guide to setting up our bzr-sync project on a server somewhere to sync Github to Launchpad.

System dependencies

Install necessary system dependencies:
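The original dependency list isn’t included in this copy; it presumably amounts to something like the following (package names are assumptions):

sudo apt-get install git bzr bzr-fastimport python-pip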

User permissions

First off, you’ll have to make sure you set up a user on whichever server is to run this service which has read access to your Github projects and write access to your Launchpad projects:
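The original commands are missing from this copy; in practice this means giving that user a Launchpad identity and an SSH key registered with both Launchpad and GitHub (the username below is a placeholder):

# as the user that will run the sync service
bzr launchpad-login your-launchpad-username
ssh-keygen -t rsa   # then register the public key with Launchpad and GitHub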

Cloning the project

Then you should clone the project and install dependencies. We placed it at /srv/bzr-sync but you can put it anywhere:
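The clone commands aren’t reproduced in this copy; roughly (the repository URL and requirements file are placeholders):

sudo git clone <bzr-sync-repository-url> /srv/bzr-sync
cd /srv/bzr-sync
sudo pip install -r requirements.txt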

Preparing gunicorn

We should serve this over HTTPS, so our auth_token will remain secret. This means you’ll need an SSL certificate keyfile and certfile. You should get one from a certificate authority, but for testing you could just generate a self-signed certificate.

Put your certificate files somewhere accessible (like /srv/bzr-sync/certs/), and then test out running your server with gunicorn:
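The exact command from the original post is missing here; a sketch, assuming the WSGI application is exposed as sync:application (the module name, port and certificate paths are assumptions):

cd /srv/bzr-sync
gunicorn sync:application \
  --bind 0.0.0.0:443 \
  --certfile /srv/bzr-sync/certs/cert.crt \
  --keyfile /srv/bzr-sync/certs/cert.key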

Try out the sync server

You should now be able to synchronise a Github repository with Launchpad by pointing your browser at:

https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}

You should be able to see the progress of the conversion as command-line output from the above gunicorn command.

Add upstart job

Rather than running the server directly, we can set up an upstart job to manage the process. This way the bzr-sync service will restart if the server restarts.

Here’s an example of an upstart job, which we placed at /etc/init/bzr-sync.conf:
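The original job definition isn’t included in this copy; a minimal sketch (the WSGI module name, certificate paths and log redirection are assumptions):

# /etc/init/bzr-sync.conf (sketch)
description "bzr-sync: synchronise GitHub repositories to Launchpad"

start on runlevel [2345]
stop on runlevel [016]

respawn

script
  cd /srv/bzr-sync
  exec gunicorn sync:application \
    --bind 0.0.0.0:443 \
    --certfile /srv/bzr-sync/certs/cert.crt \
    --keyfile /srv/bzr-sync/certs/cert.key \
    >> /etc/upstart/bzr-sync.log 2>&1
end script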

You can now start the bzr-sync server as a service:
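With the job file in place, the service is started in the usual upstart way:

sudo service bzr-sync start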

And output will be logged to /etc/upstart/bzr-sync.log.

Setting up Github projects

Now to use this sync server to automatically synchronise your Github projects to Launchpad, you simply need to add a post-commit webhook to ping a URL of the form:

https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}

Creating a webhook

In your repository settings, select “Webhooks and Services”, then “Add webhook”, and enter the following information:

  • Payload URL: https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}
  • Content type: “application/json”
  • Secret: -leave blank-
  • Select Just the push event
  • Tick Active
Saving a webhook

NB: Notice the Disable SSL verification button. By default, the hook will only work if your server has a valid certificate. If you are testing with a self-signed one then you’ll need to disable this SSL verification.

Now whenever you commit to your Github repository, Github should ping the URL, and the server should synchronise your repository into Launchpad.

Robin Winslow

Here in the design team we use both Bazaar and Git to keep track of our projects’ history.

We quite often end up converting our projects from Bazaar to Git or vice versa. Here are some tips on how to do that.

To convert revision history between Git and Bazaar, we will use their respective fastimport features.

Install bzr-fastimport

In either case, you need the fastimport plugin for Bazaar, which installs both bzr fast-import and bzr fast-export:
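On Ubuntu this is packaged as bzr-fastimport:

sudo apt-get install bzr-fastimport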

Bazaar to Git

To convert a Bazaar branch to Git, open a Bazaar branch of your project and do the following:
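The commands themselves are missing from this copy; the usual recipe looks like this:

# from inside the Bazaar branch
git init
bzr fast-export --plain . | git fast-import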

Now you should have all the revision history for that Bazaar branch in Git:
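fast-import only writes the repository history, so check out the imported branch (typically master) to populate the working tree and inspect the log:

git checkout -f master
git log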

(From Astrofloyd’s blog)

 

Git to Bazaar

Converting from Git to Bazaar is slightly different. Because Bazaar stores branches in sub-folders, while Git stores branches all in the same directory, when you convert a Git repository to Bazaar, it will create a directory tree for the branches:
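Again the exact command is missing from this copy; the standard approach is to pipe git fast-export into bzr fast-import, giving it a destination directory:

# from inside the Git repository
git fast-export --all | bzr fast-import - bzr-repo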

bzr-repo will now contain a folder for each branch that was in your Git repository. You’re probably most interested in trunk, which will be at bzr-repo/trunk, or perhaps bzr-repo/trunk.remote:
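From there you can pull out a standalone branch and, if you like, push it up to Launchpad (the branch locations are placeholders):

bzr branch bzr-repo/trunk my-project
cd my-project
bzr push lp:~your-team/your-project/trunk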

(From the Bazaar wiki)

 

Keeping a project in both Git and Bazaar

You may wish to keep a project in both Git and Bazaar.

 

Create ignore files for both systems

As your project may be used in either Git or Bazaar, you should create practically duplicate .gitignore and .bzrignore files, the only difference being that the .bzrignore should ignore the .git directory, and the .gitignore should ignore the .bzr directory. You should also make sure you ignore the bzr-repo directory – e.g.:
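For example (the two files differ only in which VCS directory they ignore):

# .gitignore
.bzr
bzr-repo

# .bzrignore
.git
bzr-repo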

And keep both ignore files in all versions of the project.

Only work in one repository

It is not practical to be doing your actual work in both systems, because converting from one to the other will overwrite any history in the destination repository. For this reason you need to choose to do all your work in either Git or Bazaar, and then regularly convert it to the other using the above conversion instructions.

Steph Wilson

Community members at the Sprint

Victor and Andrew are two inspiring Community developers that have devoted their spare time to contribute to the Ubuntu Touch Music App team. I sat down with them during the Washington Device Sprint in October where they told us how they drew inspiration from the Design Team, and what drives them to contribute to Ubuntu.

You can read more about Victor and Andrew through their blogs, where they post interesting articles on their work and personal projects.

From left to right: Riccardo, Andrew, Filippo and Victor

Hey guys, so when did you first get involved with Ubuntu?

Victor: “I started to contribute to the Ubuntu platform in March/April 2013 where I noticed there was no music app, so I started putting one together. It was pretty sketchy to start with, but it worked. I didn’t have a device to test it on so I mostly tested it using the platform on my desktop – so things were a bit hit and miss.

There was also another developer doing a music app, and at the time there was no core capability of playing music through an application for the proposed devices. Michael Hall (Open Source Software Developer) and Alan Pope (Engineering Manager) pulled Daniel Holm and I together, where we merged our core bases and started the music core app.

We didn’t have as much time as other applications, so we more or less sprinted like we are now to get things done. It was very spec-driven and specific, which was helpful, but sometimes it was hard to put together a full vision of what the designers wanted. So now we are redoing it from the feedback we have gathered, and it’s going pretty well. A little more agile than it was previously, so as to do things faster, but it’s been fun the whole time. It’s nice to work on an application that people need and that gets visibility; I never get sick of hacking at it.”

Andrew: “I’m from North London, where I’m currently studying Software Engineering at Oxford Brookes University. I was working on my own music app where I just taught myself how to do things using my own framework, then I saw that these guys at Ubuntu had a similar problem to me, and so I thought I’d provide a patch. This then built up from there, and now here I am!”

Steph: “It’s amazing that someone can be in their bedroom writing codes and then suddenly your app is on a phone!”

Victor: “The other great thing about it is the Community Managers make it easy and apparent that you can contribute to different projects.”

Andrew: “Yeah someone just got in contact with me and asked me if I wanted to join the team and told me how open source projects work.”

What inspired you to contribute?

Victor: “A lot of my original inspiration was from what the Design Team had previously done. The previous iteration design spec was very large for the music app and it wasn’t as future driven, more just visually pleasing.”

Do you find it hard to implement some designs?

Victor: “We try to make it as close to the designs as we can, but obviously there are compromises. There were some very flow-driven things, such as sized cover art, that were hard to implement, but we can implement them now. It’s nice because they use the same pattern as other applications.”

Andrew: “Usually we just tell the designer that this is just not possible.”

What is it about open source that you like?

Victor: “I have been a user since 2006, but I have never been a large open source developer myself. It is hard to get involved with when you don’t know what you want to contribute to.”

Andrew: “Most applications are so developed already that you would have to learn the existing code base and develop on it, whereas if you start anew you know everything from the get-go. Seeing your application on the device and knowing it can be on other devices too is pretty exciting!”

How does it fit into your lifestyles?

Victor: “I’m a software engineer as well, so I write a lot of code. I haven’t really done QML or QT until I started doing these applications with the Ubuntu platform, so it has been a learning experience. I am learning something new from experienced people.”

Have you made any other applications for Ubuntu?

Victor: “I’ve made a few games like Piano Tiles, and another that’s kind of like a clone of that but in QML – it’s a simple app but a good time waster haha.”

How much time does it take you to develop an app?

Victor: “It took me like a day. Andrew made a game last night! In 2 hours…”

Andrew: “Yeah we did! Loads of us at the sprint just got together in a room and made a few games.”

So you’re used to working remotely, does that put a barrier against things?

Andrew: “It sometimes delays things. However, you start to build this image of a person, so when you actually get to meet them you start to understand how they are and what makes them tick.”

Victor: “It depends on how personal it really needs to be. If you are collaborating together and it’s mostly writing code and coming up with ideas, it doesn’t necessarily need to be face-to-face. It is obviously nicer, but you also get the benefit if the other person is a night owl in a different country – sometimes our hours overlap, and we’re working in two different chunks of time.”

Andrew: “There’s usually someone on IRC to speak to, it’s like a 24 hour operation haha.”

What’s the vibe like in the Community at the moment?

Victor: “It’s a pretty small Community at the moment, with close ties. Everyone is receptive to feedback; if it was a larger Community I don’t think it would be as receptive.”

Steph: “Thanks for your time guys!”

Here’s a sneaky preview of the music app – more will be revealed soon:

Album detail

Landing page

Read more
Steph Wilson

We sat down with some of Ubuntu’s unsung Community heroes at the recent Devices Sprint in Washington D.C.

Riccardo and Filippo are two young and passionate developers who have adapted their own software to benefit the whole of the Ubuntu Community. We spoke about how and why they contribute to Ubuntu, and what motivates them to keep giving.

The Community hard at work

(Community gathering at the Sprint)

Riccardo Padovani 

Italian Community site:

http://ubuntu.it/

Personal blog:

http://blog.rpadovani.com/

So Riccardo, how did you get involved in Ubuntu?

I started 3 years ago with the Italian Ubuntu Community as they were looking for someone who could manage the website. I was young and wanted to learn about computer science, so I started for myself. While I was contributing I started to understand what was behind the Ubuntu project and their philosophy, and I thought this was a great project for software. So then I started to do stuff for Ubuntu Touch, where I made new friends and at the same time improved my English and computer science skills.

How does working in the Italian Ubuntu Community fit into your lifestyle?

I’m at University, so in the evenings instead of watching television I open my notebook and do some coding. For me it’s very fun. It’s not something I do because someone is telling me to, I do it for me. I prefer writing code than watching TV haha.

What kind of things have you contributed to Ubuntu so far?

Last year I was mainly working on the Reminder App, but more recently I’ve started to contribute towards the Web Browser. As I use Ubuntu as my main phone, I love seeing the improvements in the software I use every day, and I know I can do something to improve it. People will benefit when the phone is released, and even more so on the Italian Community site: for example, when something is wrong and someone reports it to me, loads of people can see my work and I can fix it. It’s awesome, and I am getting better experience at the same time.

How did you start to contribute to the Community? How does it work?

I started to use Ubuntu in 2008, but I did nothing until 2012, when I found a project I wanted to get involved with. I think for every project and Community you need to find something you love and want to improve. Opening a new bug when something is wrong is the first step to contributing to an Ubuntu project.

First you find out how the Community works and then you begin to know who you can speak to, which then graduates into a natural evolution.

Does your Community regularly meet-up?

It depends on the team, as some teams are split and do different things. Every 6 months we have a meeting where we can have a beer and socialise. The rest of the year we try to do public hangouts, and then private hangouts on what we’re doing in the next month or so.

Do you find these sprints helpful?

I think during this sprint it takes more energy to write code, because I’m busier talking to people and learning new things. But I can meet the people who have taught me something and say thank you in person, which is nice.

Filippo Scognamiglio

Personal blog:

http://swordfishslabs.wordpress.com/

Hey Filippo, so tell me how did you get involved with Ubuntu?

I started with some gaming applications where I first made MineSweeper. MineSweeper is not in the Ubuntu Store at the moment due to some technicalities and design issues, but it’s all working and should be implemented soon. I also made another game called Ubuntu Netwalk where you connect sources of energy to destinations and then rotate the pieces to solve the puzzle.

I started a new project that was completely unrelated to Ubuntu, which was a terminal emulator. A terminal emulator is a program that emulates a video terminal within some other display architecture.

I published a video of my work and no one cared at the start, but then a few months later I made another video and everyone lost their minds! I was really busy answering emails and questions about it. Then David Planella (Ubuntu Community Team Manager) approached me and asked me to bring the terminal to Ubuntu Touch, as the engine was the same, and that’s where my Ubuntu story really began.

So, what’s your background?

I am currently studying Computer Engineering at University back in Italy.

Being part of a Community, what does it mean?

I wasn’t part of the Community before doing something relevant; I became part of it after I was approached. Usually people first start by commenting on the forums or fixing bugs, where you begin to build a presence in the Community. For me it was just like falling from the sky, and now I want to be more involved in the Community. I never knew all these guys – before today I only knew Riccardo, Alan Pope (Engineering Manager) and David Planella, and that was through email exchange, that’s it!

How’s the Sprint going for you? 

The Sprint itself is a nice opportunity to see the USA as it is my first time here. For me it is a great opportunity to finally meet the people I have been working with remotely and say thanks. I find it hard when I work from home as you’re on your own, but now I’m here at the sprint I can go grab people and interact more.

When I compare myself to my schoolmates who aren’t involved in Ubuntu or other projects, I can see the benefits it will give me in my career after university.

What motivates you? 

I get motivated by the people I can learn from. In Ubuntu I’m involved with people who are much more experienced than me, so they can teach me new things and I can produce at the same time. Learning from others like that, whether on your own project or as part of Ubuntu, is not possible with closed source projects, because with closed source people can only have an opinion on what’s good or not. They can’t tell you ‘you should do this’; they simply have an external point of view.

Another good thing about open source is that you can do a lot more with less effort. My terminal was taken from another terminal; if it wasn’t open source I would have had to write the terminal from the engine to the user interface. I drew influence from other engines that had already been made and adapted them to my needs, and the people who made those engines probably took from someone else in turn – that’s the beauty of open source.

I am happy if my project goes on and influences something/someone else, and they can take my software and adapt it to their own needs.

(From left to right: Riccardo, Andrew, Filippo and Victor)

(Community meal out)

Read more
Steph Wilson

Last week was a week of firsts for me: my first trip to America, my first Sprint and my first chili-dog.

Introducing myself as the new (and only) Editorial and Web Publisher, I dove head first into the world of developers, designers and Community members. It was a very absorbing week, which afterwards felt more like a marathon than a sprint.

After being grilled by Customs, we finally arrived at Tyson’s Corner, where 200 or so developers, designers and Community members had gathered for the Devices Sprint. It was a great opportunity for me to see how people from each corner of the world contribute to Ubuntu and share their passion for open source. I found it especially interesting to see how designers and developers, with their different mindsets, work and collaborate together.

The highlight for me was talking to some of the Community guys, it was really interesting to talk to them about why and how they contribute from all corners of the world.

(From left to right: Riccardo, Andrew, Filippo and Victor)

(The main ballroom)

(Design Team dinner. From the left: TingTing, Andrew, John, Giorgio, Marcus, Olga, James, Florian, Bejan and Jouni)

I caught up with Olga and Giorgio to share their thoughts and experiences from the Sprint:

So how did the Sprint go for you guys?

Olga: “It was very busy and productive in terms of having face time with development, which was the main reason we went, as we don’t get to see them that often.

For myself personally, I have a better understanding of things in terms of what the issues are, what is needed, and what can or cannot be done in certain ways. I was very pleased with the whole sprint. There was a lot of running around between meetings, and I tried to use the time in-between to catch up with people. Development also approached the Design Team for guidance, opinions and a general catch-up/chat, which was great!”

Steph: “I agree, I found it especially productive in terms of getting the right people in the same room and working face-to-face, as it was a lot more productive than sharing a document or talking on IRC.”

Giorgio: “Working remotely with the engineers works well for certain tasks, but the Design Team sometimes needs to achieve a higher bandwidth through other means of communication, so these sprints every 3 months are incredibly useful.

What a Sprint allows us to do is to put a face to the name and start to understand each other’s needs, expectations and problems, as stuff gets lost in translation.

I agree with Olga – this Sprint was a massive opportunity to shift to a much higher level of collaboration with the engineers.”

What was your best moment?

Giorgio: “My best moment was when the engineers’ perception of the Design Team’s efforts changed. My goal is to improve this collaboration process with each Sprint.”

Did anything come up that you didn’t expect?

Giorgio: “Gaming was an underground topic that came up during the Sprint. There was a nice workshop on Wednesday on it, which was really interesting.”

Steph: “Andrew, a Community developer I interviewed, actually made two games one evening during the Sprint!”

Olga: “They love what they do, they’re very passionate and care deeply.”

Do you feel as a whole the Design Team gave off a good vibe?

Giorgio: “We got a good vibe but it’s still a work in progress, as we need to raise our game and become even better. This has been a long process, as the design of the Platform and Apps wasn’t simply done overnight. However, now we are in a mature stage of the process where we can afford to engage with the Community more. We are all in this journey together.

Canonical has a very strong engineering nature, as it was founded by engineers and driven by them, and it has evolved because of this. As a result, over the last few years the design culture has begun to complement that. Now they expect a steer from the Design Team on a number of things, for example responsive design and convergence.

The Sprint was good, as we finally got more of a perception of what other parties expect from us. It’s like a relationship: you suddenly have a moment of clarity and enlightenment, where you start to see that you actually need to do that, and that will make the relationship better.”

Olga: “The other parties and the Development Team started to understand that initiating communication is not just the responsibility of the Design Team, but an engagement we all need to be involved in.”

All in all it was a very productive week, as everyone worked hard to push for the first release of the BQ phone – together with some positive feedback and shout-outs for the Design Team :)

(Unicorn hard at work)

There was a bit of time for some sightseeing too…

It would have been rude not to see what the capital had to offer, so on the weekend before the sprint we checked out some of Washington’s iconic sights.

(The Washington Monument)

We saw most of the important government buildings and monuments, like the White House, the Washington Monument and the Lincoln Memorial. Seeing them in the flesh was spectacular; however, I half expected a UFO to appear over the Monument like in ‘Independence Day’, and Abraham Lincoln to suddenly get up off his chair like in the movie ‘Night at the Museum’ – unfortunately none of that happened.

(The White House)

D.C. isn’t as buzzing as London, but it definitely has a lot of character: it embodies an array of thriving ethnic pockets representing African, Asian and Latin American cultures, and also a lot of Italians. Washington is known for getting its sax on, so a few of the Design Team and I decided to check out the night scene and hit a local jazz club in Georgetown.

...And all the jazz.

(Twins Jazz Club)

On the Sunday, we decided to leave the hustle and bustle of the city and venture to the beautiful Great Falls Park, which was only 10–15 minutes from the hotel. The park is located in northern Fairfax County along the banks of the Potomac River and is an integral part of the George Washington Memorial Parkway. Its creeks and rapids made for some great selfie opportunities…

(Great Falls Park)

Read more
Luca Paulina

A few weeks ago we launched ‘Machine view’ for Juju, a feature designed to allow users to easily visualise and manage the machines running in their cloud environments. In this post I want to share with you some of the challenges we faced and the solutions we designed in the process of creating it.

A little bit about Juju…
For those of you who are unfamiliar with Juju, a brief introduction. Juju is a software tool that allows you to design, build and manage application services running in the cloud. You can use Juju through the command line or via a GUI, and our team is responsible for the user experience of Juju in the GUI.
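For a flavour of the command-line side, a small environment might be modelled like this (a sketch – the charm names are just examples):

    juju bootstrap                     # start an environment on your configured cloud
    juju deploy wordpress              # deploy the WordPress service
    juju deploy mysql                  # deploy the MySQL service
    juju add-relation wordpress mysql  # connect the two services
    juju expose wordpress              # make WordPress reachable from outside

The GUI lets you model the same things – services, units and relations – visually on a canvas.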

First came ‘Service View’
In the past we have primarily focused on Juju’s ‘Service view’ – a virtual canvas that enables users to design and connect the components of their given cloud environment.

Service view

This view is fantastic for modelling the concepts and relationships that define an application environment. However, for each component or service block, a user can have anything from one unit to hundreds or thousands of units, depending on the scale of the environment – and before machine view, units meant machines.

The goal of machine view was to surface these units and enable users to visualise, manage and optimise their use of machines in the cloud.

‘Machine view’: design challenges
There were a number of challenges we needed to address in terms of layout and functionality:

  • The scalability of the solution
  • The glanceability of the data
  • The ability to customise and sort the information
  • The ability to easily place and move units
  • The ability to track changes
  • The ability to deploy easily to the cloud

I’ll briefly go into each one of these topics below.

Scalability: Environments can be made up of a couple of machines or thousands. This means that giving the user a clear, light and accessible layout was incredibly important – we had to make sure the design looked and worked great at both ends of the spectrum.

Machine view

Simple machine view

Glanceability: Users need simple comparative information to help them choose the right machine at a glance. We designed and tested hundreds of different ways of displaying the same data, and eventually ended up with an extremely cut-back listing that was clean and balanced.

A tour of the many incarnations and iterations of Machine view…

The ability to sort and customise: As it was possible – and probable – that users would scale environments to thousands of machines, we needed to provide the ability to sort and customise the views. Users can use the menus at the top of each column to hide information from view and customise the data they want visible at a glance. As users become more familiar with their machines, they can turn off extra information for a denser view. Users are also given basic sorting options to help them find and explore their machines in different ways.

More menu

The ability to easily place and move units: Machine view is built around the concept of manual placement – the ability to co-locate items (put more than one) on a single machine, or to define specific types of machines for specific tasks – as opposed to automatic placement, where each unit is given a machine of a pre-determined specification. We wanted to enable users to create the most optimised machine configurations for their applications.
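For readers who know the Juju command line, manual placement in machine view is roughly the visual counterpart of placement directives like --to (a sketch – the service names are just examples):

    juju add-machine                   # provision an empty machine, e.g. machine 1
    juju deploy mysql --to 1           # place the mysql unit on machine 1
    juju deploy memcached --to lxc:1   # co-locate memcached in a container on machine 1

Machine view lets you express the same decisions by dragging units onto machines and containers.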

Drag and drop was a key interaction that we wanted to exploit for this interface, because it significantly simplifies the process of manually placing units. The three-column layout aided the use of drag and drop: users can pick up units that need placing on the left-hand side and drag them to a machine in the middle column or a container in the third column. The headers also change to reveal drop zones, allowing users to create new machines and containers in one fluid action, keeping all of the primary interactions in view and accessible at all times.

Drag and drop in action on machine view

The ability to track changes: We also wanted to expose the changes being made throughout users’ environments as they went along, and allow them to commit batches of changes together. Deciding which changes to expose, and designing the uncommitted notification, was difficult: we had to make sure the notifications were not viewed as repetitive, that they were identifiable, and that they could be used throughout the interface.

Uncommitted changes in service view

Uncommitted changes in machine view

The ability to deploy easily to the cloud: Before machine view it was impossible to design an entire environment before sending it to the cloud. The deployment bar is a new, ever-present canvas element that rationalises all of the changes made into a neat listing; it is also where users can deploy or commit those changes. Look for more information about the deployment bar in another post.

The change log exposed

The deployment summary

We hope that machine view will really help Juju users by increasing the level of control and flexibility they have over their cloud infrastructure.

This project wouldn’t have been possible without the diligent help from the Juju GUI development team. Please take a look and let us know what you think. Find out more about Juju, Machine View or take it for a spin.

Read more
Giorgio Venturi

Canonical and Ubuntu at dConstruct

Brighton is not just a lovely seaside town, mostly known for being overcrowded in summer by Londoners in search of a bit of escapism, but also the home of a thriving community of designers, makers and entrepreneurs. Some of these people run dConstruct, a gathering where creative minds of all sorts converge every year to discuss important themes around digital innovation and culture.

When I found out that we were sponsoring the conference this year, I promptly jumped in to help my colleagues in the Phone, Web and Juju design teams. Our stand was situated in the foyer of the Brighton Dome, flashing the orange banner of Ubuntu and a number of origami unicorns.

The Ubuntu Stand

Origami Unicorns

We had an incredibly positive response from the attendees, as our stand was literally teeming with Ubuntu enthusiasts who were really keen to check our progress with the phone. We had a few BQ phones on display where we showed the new features and designs.

Testing the phone

For us, it was a great occasion to gather fresh impressions of the user experience on the phone and across a variety of apps. After a few moments, people started to understand the edge interactions and began to swipe left and right, giving positive feedback on the responsiveness of the UI. Our pre-release BQ phones don’t have the final shell and still display softkeys; as a result, some people found this confusing. We took the opportunity to quickly design our own custom BQ phone using a bunch of Ubuntu stickers… and voilà, problem solved! ;)

Ubuntu phone - customised

Our ‘Make your Unicorn’ competition had a fantastic response. To celebrate the coming release of Utopic Unicorn and of the BQ phone, the maker of the best origami unicorn was awarded a new phone. The crowd did not hesitate to tackle the complex paper-bending challenge and came up with a bunch of creative outcomes. We were very impressed to see how many people managed to complete the instructions, as I didn’t manage to get beyond step 15…

Ubuntu fans

Twitter search: #dconstruct #ubuntu

Read more
Luca Paulina

Come and meet us at dConstruct

Ubuntu is sponsoring the dConstruct “Living with the network” event on the 5th of September at the Brighton Dome. Stop by for a chat with the team, grab some goodies and enter our competition for a chance to win an Ubuntu Phone.

Nexus 4 hero shots

Read more