Canonical Voices

Posts tagged with 'development'

Grazina Borosko

April marks the release of Xerus 16.4 and with it we bring a new design of our iconic wallpaper. This post will take you through our design process and how we have integrated our Suru visual language.

Evolution

Our recent designs are based on our Suru visual language, which encompasses our core brand values and brings consistency across the Ubuntu brand.

Our Suru language is influenced by the minimalist nature of Japanese culture. We have taken elements of their Zen culture that give us a precise yet simplistic rhythm and used it in our designs. Working with paper metaphors we have drawn inspiration from the art of origami that provides us with a solid and tangible foundation to work from. Paper is also transferable, meaning it can be used in all areas of our brand in two and three dimensional forms.

Design process

We started by looking at previously released wallpapers across Ubuntu to see how each evolved from the last. After seeing the previous designs we started to apply our new Suru patterns, which inspired us to move in a new direction.

Ubuntu 14.10 ‘Utopic Unicorn’


Ubuntu 15.04 ‘Vivid Vervet’


Ubuntu 15.10 ‘Wily Werewolf’


Step-by-step process

Step 1. Origami animal

Since every new Ubuntu release is named after an animal, the Design Team wanted to bring this idea closer to the wallpaper and the Suru language. The folds are part of the origami animal and become the base from which we start our design process.


To make your own origami Xerus squirrel, you can find the instructions here.

Step 2. Searching for the shape

We started to look at different patterns by using various techniques with origami paper. We zoomed into particular folds of the paper, experimented with different light sources, photography, and used various effects to enhance the design.

The idea was to bring actual origami to the wallpaper as much as possible. We had to think about composition that would work across all screen sizes, especially desktop. As the wallpaper is a prominent feature in a desktop environment, we wanted to make sure that it was user friendly, allowing users to find documents and folders located on the computer screen easily. The main priority was to not let the design get in the way of everyday usage, but enhance it aesthetically and provide a great user experience.

After all the experiments with fold patterns and light sources, we started to look at colour. We wanted to integrate both the Ubuntu orange and the Canonical aubergine to balance the brightness, and played with gradient levels.

We balanced the contrast of the wallpaper colour palette by using a long and subtle gradient that kept the bright look and feel. This made the wallpaper brighter and more colourful.

Step 3. Final product

The result was successful. The new concept and use of the Suru language helped to create a brighter wallpaper that fitted into our overall visual aesthetic. We created a three-dimensional look and feel that gives the appearance of actual origami. The wallpaper is still recognizable as Ubuntu, but at the same time looks fresh and different.

Ubuntu 16.04 Xenial Xerus


Ubuntu 16.04 Xenial Xerus (light version)


What is next?

The Design Team is now looking at ways to bring the Suru language into animation and fold usage. The idea is to bring an overall seamless and consistent experience to the user, whilst reflecting our tone of voice and visual identity.

Read more
Anthony Dillon

The Juju web resources are made up of two entities: a website jujucharms.com and an app called Juju GUI, which can be demoed at demo.jujucharms.com.

Applying Vanilla to jujucharms.com

Luckily the website was already using our old style guidelines, which we had refactored and improved to become Vanilla. So I removed the guidelines link from the head, and the site fell to pieces. Once I had npm-installed vanilla-framework and included it in the main Sass file, things started to look up.
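For anyone following along, those two steps look roughly like this (the Sass entry-point path is an assumption about our setup):

npm install vanilla-framework --save

Then, in the main Sass file:

@import "node_modules/vanilla-framework/scss/build";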

A few areas of the site needed to be updated, like moving the search markup outside of the nav element. This is due to header improvements in the transition from guidelines to Vanilla. Also we renamed and BEMed our inline-list component, so I updated its markup in the process. The mobile navigation was also replaced with the new non-JavaScript version from Vanilla.

To my relief, with these minor changes the site looked almost exactly as it did before. There were some padding differences, which resulted in some larger spacing between rows, but this was a purposeful update.

All in all the process of replacing guidelines with Vanilla on the website was quick and easy.

Now into the unknown…

Applying Vanilla to Juju GUI

I expected this step to be trickier as the GUI had not started life using guidelines and was using entirely bespoke CSS. So I thought: let’s install it, link the Vanilla framework and see how it looks.

To my surprise the app stayed together, apart from some element movement and overridden input styling. We didn't need the entire framework to be included, so I selectively included only the core modules, like typography and grid.
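Selective inclusion looks something like this – vf-grid is a real module name from the framework; the others here are illustrative:

// Import the framework (no CSS is output until mixins are included)
@import "node_modules/vanilla-framework/scss/build";

// Only output the core modules we need
@include vf-typography;
@include vf-grid;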

The only major difference is that Vanilla applies a bottom margin to lists, which did not exist in the app before, so I applied margin-bottom: 0 to each list component as a local override.
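That override is about as small as CSS changes get (the selector is illustrative):

.list {
  margin-bottom: 0;
}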

Once I completed these changes it looked exactly as before.

What’s the benefit?

You might be thinking, as I did at the beginning of the project, “that is a lot of work just to have both projects look exactly the same” – when in fact it brings a number of benefits.

Now we have consistent styling across the Juju web properties, tied together by one single base CSS framework. This means we have exactly the same grid, buttons, typography, padding and much more. The tech debt of keeping these in sync has been cut, and designers can now work from a single component list.

Future

We’re not finished there. As Vanilla is a bare-bones CSS framework, it also has a concept of theming. The next step will be to refactor the SCSS on both projects and identify the common components. The theme itself depends on Vanilla, so we have logical layering.

In the end

It is exciting to see how versatile Vanilla is. Whether it’s a web app or a website, Vanilla helps us keep our styles consistent. The layered inheritance gives us the flexibility to selectively include modules and extend them when required.

Read more
Rae Shambrook

We previously posted about the clock app’s new look, and today we are getting to know one of the developers behind the clock (as well as other community apps). Bartosz Kosiorek gives us a glimpse into developing for Ubuntu and how he got started.

1) First, can you give us a bit of background about yourself and tell us how you started developing for Ubuntu?

My name is Bartosz and I’m from Poland. Currently I’m a developer on the Ubuntu Clock and Ubuntu Calculator. I started contributing to Ubuntu in 2008, by submitting bug reports to Launchpad and fixing translations. Later I participated in the One Hundred Papercuts project, did SRU verifications and eventually started developing.

My adventure with Ubuntu started with Ubuntu 8.10 (Intrepid Ibex). Previously I had tried many different distributions (Debian, Fedora, SuSE etc.). I chose Ubuntu because it is easy to use and, after installation, I had a fully functional system. I like that after Ubuntu is installed there are no duplicate applications, and those already installed work perfectly with the system.

2) How long have you been working on the Clock and Calculator? How did you get involved in these projects?

I started to develop for Ubuntu about two years ago when I first heard about Ubuntu Touch and convergence. I started by contributing to Ubuntu Core Apps through testing, submitting bug reports and patches. Most of my commits were bug fixes for Ubuntu Calculator and Ubuntu Clock, approved by Riccardo Padovani, Mihir Soni and Nekhelesh Ramananthan. After some time, I became a member of Ubuntu Core Apps. It’s great fun to work with these guys and the Ubuntu community. I’ve learned a lot about Qt/QML and user experience design.

3) How do you approach implementing a design in your apps?

Generally I follow the design document during implementation and sometimes find parts that need to be improved. After speaking with the Ubuntu UX team, we discuss various issues and agree on a final design solution. Sometimes the UX team gives us a free hand, so we can design some parts ourselves (e.g. Stopwatch, Welcome Wizard, Landscape View). It’s really fun to work with such awesome guys.

4) What feature would you like to see in the future?

I think that from a user’s perspective, longer battery life is a very important topic. Power usage is higher with a white background (https://www.quora.com/Does-a-white-background-use-more-energy-on-a-LCD-than-if-it-was-set-to-black), especially on OLED screens. I wish Ubuntu Touch came with a darker theme, to save battery on OLED screens.

Read more
Robin Winslow

It’s becoming more and more important for websites to carefully consider how their resources are cached in users’ browsers. Get the caching wrong, and you either end up with a woefully slow experience for the user, or a very strange looking website as users are left with stale CSS files and images.

Or often both.

For our China site, we’ve decided that the HTML pages should be cached for 5 minutes, and the CSS and JavaScript can be cached for a year – as every time we update them we change the URL.

Caching headers in Django

Telling the browser how long to cache a resource is done with one of two headers:

  • Cache-Control: In HTTP/1.1, this can set the maximum age before a resource should be re-downloaded.
  • Expires: In the older HTTP/1.0 standard, this sets the date and time that a resource becomes outdated and should be refreshed.

Controlling these headers in Django is less simple than you might think. If you’re happy to use the cache framework then it will take care of these headers for you, but as we have a separate Squid cache in front of our application, this was a more heavyweight solution than we needed.

Modifying HTML responses using View classes

In our case, all of our HTML pages are served with an extended version of the TemplateView class. To add headers, we need to modify the HttpResponse, which we can intercept by extending the render_to_response method. Django also provides patch_response_headers, a handy helper function that generates our caching headers for us and attaches them to the response:
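A minimal sketch of how these pieces fit together (the class name is illustrative; patch_response_headers is the real Django helper):

from django.utils.cache import patch_response_headers
from django.views.generic.base import TemplateView

class CachedTemplateView(TemplateView):
    def render_to_response(self, context, **response_kwargs):
        response = super(CachedTemplateView, self).render_to_response(
            context, **response_kwargs
        )
        # Ask browsers and proxies to cache HTML pages for 5 minutes
        patch_response_headers(response, cache_timeout=300)
        return response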

And now we can see our extra caching headers in the HTTP response:
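They look something like this (the Expires date is illustrative):

Cache-Control: max-age=300
Expires: Mon, 25 Jan 2016 18:40:00 GMT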

Browsers and proxies will now cache the HTML pages for 5 minutes.

Controlling caching for static files

Django recommends serving static files separately from the rest of your application.

However, for simplicity and dev-prod parity we’ve been using DJ-Static to serve static files with the Django WSGI app, as introduced by Kenneth Reitz. This was also, at the time we implemented it, the method recommended by Heroku for managing static files in Django.

As it turns out, though, DJ-Static doesn’t offer any control over caching headers, and Heroku now recommend using WhiteNoise instead.

Serving static files with WhiteNoise is pretty simple (as it was with DJ-Static):
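At the time this meant wrapping the WSGI application in wsgi.py, roughly as follows (based on the WhiteNoise API of that era; newer versions use middleware instead):

from django.core.wsgi import get_wsgi_application
from whitenoise.django import DjangoWhiteNoise

application = get_wsgi_application()
application = DjangoWhiteNoise(application)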

WhiteNoise will add a Cache-Control header, although it doesn’t support setting the older Expires header. By default, the Cache-Control header is set to no caching.

We wanted our static files to be cached for a year, so we set the WHITENOISE_MAX_AGE setting in settings.py:
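The value is in seconds, so a year is:

# settings.py
WHITENOISE_MAX_AGE = 31536000  # 60 * 60 * 24 * 365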

This will set the max-age in the Cache-Control header to achieve the browser caching we’re looking for:
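The static responses then come back with something like:

Cache-Control: max-age=31536000, public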

Now we have control

Leveraging browser caching is an invaluable performance tool, so understanding how we can control the user’s cache with Django is very helpful.

Hopefully I’ve demonstrated some ways that this can be achieved, which we’ve just implemented on cn.ubuntu.com.

Also published on my blog.

Read more
Jouni Helminen

Visual design of convergent apps

It is an exciting time as we’re starting to see more and more of the new, convergence-enabled UI toolkit and features for Unity 8 come to life. Some classic X11 apps (GIMP, LibreOffice and a few others) are already running on Unity 8 using new hardware from our partners, including the award-winning M10 tablet from BQ – very cool.

At the same time, we want to help people write or port more applications to our platform, using our modern UI toolkit designed to smoothly flow the user experience through touch and pointer inputs, a range of screen and keyboard types and all of the permutations in between! It has been an interesting design challenge to imagine, design, and begin to build a world where all interfaces, regardless of input type or form factor, all emerge from the same core user experience and design language.

Where we are now

Our UX and SDK teams have been working on version 1.3 of the Qt-based UI toolkit, which allows developers to write applications that can be used comfortably with both touch and pointer interfaces. The work is still very much in progress, but some of it can be used today. You can check out the developer docs here.

Suru, our visual design language, has evolved into a new, much lighter, flatter and more modern approach. It not only looks great (in our humble opinion), but helps app developers design good-looking and well-functioning apps with less effort. Continuous visual and user experience refinements will be rolling out across the whole OS (scopes, shell and apps) this coming year.

The new design guidelines for UX and UI patterns, as well as Suru, will be out soon. In the meantime, hopefully these example apps will inspire you to have a look at the developer docs, get active on IRC and have a go yourself. We will also be releasing design source files and templates for the refreshed UI toolkit so that you can start applying them in your own app designs.

Dekko – Email


The first example app is Dekko – the default email client for mobile and tablet devices from BQ and Meizu. We have been very lucky to have the incredible talents of our community member Dan Chapman working on the development of Dekko, and the app is progressing at a fantastic rate. James Mulholland helped Dan with the UX and I have been working on the UI.

Like many apps, Dekko uses a list view to represent the primary level, and a detail view to show the secondary level. Where there’s room, these views can be displayed side by side, but on small screens or very shrunk windows, a PageStack showing only the list becomes the primary screen. On larger screens or expanded windows, the page stack automatically progresses into the familiar two-panel configuration. This adaptive layout is common on responsive websites, and our SDK team have built a component in the UI toolkit that does most of the hard work for you – AdaptivePageLayout.
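As a rough sketch of the pattern (the page structure is illustrative; AdaptivePageLayout and its addPageToNextColumn method are the real toolkit APIs):

import QtQuick 2.4
import Ubuntu.Components 1.3

MainView {
    AdaptivePageLayout {
        id: layout
        anchors.fill: parent
        primaryPage: listPage

        Page {
            id: listPage
            header: PageHeader { title: "Inbox" }
            // Selecting a message would call something like:
            //   layout.addPageToNextColumn(listPage, detailPage)
            // On a narrow window the detail page replaces the list;
            // on a wide window it opens in a second column alongside.
        }
    }
}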


The list item, which lives in the list component, is another example of a ready-made component that helps developers write convergent apps with less effort. The new ListItem in our toolkit has useful, well-designed default layouts baked in when using ListItemLayout. It is also optimised for both touch and pointer interaction – via ListItemActions. A common pattern of interacting with list items on touch devices is to drag them left or right, revealing key actions such as delete. When using a pointer, however, you would typically right-click and use the contextual menu to access the same actions. Our UI toolkit supports both types of input at all times, so you could drag the item left or right using a mobile or touch-enabled monitor, or right-click using a mouse. We believe users should be free to mix how they interact with our components using whatever means is at their disposal and to their liking.
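A sketch of a list item using those components (the labels and the single delete action are illustrative):

import QtQuick 2.4
import Ubuntu.Components 1.3

ListItem {
    // Revealed by swiping on touch, or via the context menu with a pointer
    trailingActions: ListItemActions {
        actions: [
            Action { iconName: "delete" }
        ]
    }

    ListItemLayout {
        title.text: "Message subject"
        subtitle.text: "Sender"
    }
}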

This behaviour is already baked into our ListItem component, so users will have a consistent experience when using apps, and developers will save time not having to roll their own solutions.

Music

The music app is another example of the super-talented Ubuntu community getting involved in building some of our core apps together with our internal teams. You might remember Andrew Hayzen and Victor Thompson from a previous interview on this blog. They have since been adding features and functionality to the app, and a convergent music app using multiple panels is currently working in a branch and will be landing in the master release soon. We are also looking at adding support for streaming music functionality – keep an eye out for this in the near future :)


The multi-panel music app reacts to window size changes intelligently – the album cards resize and shuffle themselves on window size changes. On smaller screen devices we have a persistent “Now playing” control bar at the bottom of the screen, but on larger screen sizes we have enough real estate to reimagine the play bar as an extra panel on the right with “Now playing” information, along with cover art, controls and a scrollable queue.

Calendar


The calendar app has been on the phone for a while, but until now it hadn’t really had any UI design love, or designs for larger screens. We wanted to apply our visual language in the context of an app that is by default very minimal, allowing the few design elements to stand on their own.

Suru, our visual language, is light and flat, minimizing distractions, with carefully selected tones of gray, consistent spacing and margins to help the content breathe. We’ve added considered splashes of highlight colours that enhance the visual hierarchy without overwhelming it.

On the calendar app we are again making use of multiple panels, surfacing several layers when we have the real estate available. The same feature set of the app is of course available at all screen sizes, and the navigation feels intuitive with whatever input method or screen size you are using.


This design hasn’t been implemented yet, and in fact we are looking for new developers to join our Community Team. If you are a developer who would like to get involved in writing some of the core apps people use on Ubuntu, get in touch with alan.pope@canonical.com – we would love to hear from you!

Hopefully these examples have given inspiration and pointers to anyone who would like to have a go at designing apps for convergent Ubuntu. If you have any questions, don’t hesitate to reach out – jouni.helminen@canonical.com

 

Read more
Barry McGee

Maybe, like me, you’ve seen more of the inside of your gym in January than you had for the six months previous. New year, new diet, new me… or something like that.

A big creeping problem in recent years is that websites have been on an all-out binge, and not just over the winter holidays — big videos, big images, fancy fonts, third-party libraries — they just can’t get enough of ’em.

Average page weights increased by 15% in 2014 and, although I haven’t seen any similar research for 2015 yet, I’m willing to bet that trend did not reverse.

Last week I was tasked with making some performance optimisations to the Ubuntu online tour.

This legacy codebase stretches all the way back to 2012, and as such was not benefitting from some of the modern tools we now have at our disposal as web developers.

We have been maintaining our largest codebases, such as ubuntu.com and canonical.com, to ensure they are as performant as they can be, but this Ubuntu tour repository had slipped through the cracks somewhat.

We have users all over the world and many of them don’t enjoy the luxury of fat internet pipes that we enjoy in our London office. Time to trim the fat…

At first look, I noted that loading the site required 235 HTTP requests, downloading 2.7MB of data. Chunky Charlie!

 

Network waterfall screenshot

 

Delving into the codebase, I immediately spotted some big areas ripe for improvement:

  • The CSS files were not being concatenated, nor were they minified.
  • The JavaScript was also being loaded in separate files, also un-minified.
  • The image assets were uncompressed.
  • The HTML was un-minified.

Beyond that, I ran the site URL through Google’s PageSpeed Insights and also discovered:

  • Browser caching was not being leveraged, as static assets did not have any Expires headers specified
  • There were quite a few CSS and JavaScript dependencies blocking rendering of the page.

As you can see, the site was scoring a lowly 46/100 – not great.

 

Google Page Speed Insights screenshot

 

For jobs such as this, my first weapon of choice is the task runner Gulp. It’s quick and easy to drop Gulp on top of any existing site and use some of its wide array of plugins to optimise source assets for performance.

For this job I used gulp-concat, gulp-htmlmin, gulp-imagemin, gulp-minify-css, gulp-rename, gulp-uglify, critical and gulp-rev.

Explaining how to use each of them is beyond the scope of this article but you can view my Gulpfile.js and accompanying package.json file to see what I did.
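To give a flavour, a typical task looks something like this (the paths and task name are illustrative; see the linked Gulpfile.js for the real thing):

var gulp = require('gulp');
var concat = require('gulp-concat');
var minifyCSS = require('gulp-minify-css');

// Concatenate and minify all stylesheets into one compressed file
gulp.task('css', function () {
  return gulp.src('src/css/*.css')
    .pipe(concat('styles.min.css'))
    .pipe(minifyCSS())
    .pipe(gulp.dest('css'));
});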

When retro-optimising a site, you might find you have to make certain compromises such as placing “src” folders inside folders you are optimising to store the original documents, then output the optimised versions into the original folder to ensure everything is backwards compatible and you haven’t broken any relative links. You should also be careful when globbing Javascript files as they may need to be loaded in a certain order to prevent race conditions. This is also true when concatenating and including Javascript libraries such as jQuery.

In an ideal world, you would not deploy any files from the repository you have compiled locally. They should be ignored by version control and compiled on the fly by running your task runner on the server using a continuous integration engine such as Jenkins or Travis CI. This is much cleaner and will prevent merge conflicts when multiple developers are working on the same codebase.

So — when we have all of the above configured and then run it over our legacy codebase, how much weight did it shave?

 

Network Waterfall - After

 

Good news! To load the site we now need only 166 HTTP requests (-29%), downloading 2.2MB (-18%) of data. Slim(mer) Jim for the win!

This should mean our users with slower connections will have a much improved experience.

When we run the leaner site, now deployed, through Google PageSpeed Insights, we get a much healthier score too.

 

Google Pagespeed - After

 

This was a valuable exercise for our team. It reminded us that we not only have a responsibility to keep all our new and upcoming work performant, but also to address any legacy sites still in use wherever possible.

A leaner web is a faster web and I’m sure that’s something we can all get behind.

 

Read more
Femma

We arrived in Helsinki on Sunday evening, ready to start our week-long SDK sprint on Monday. Our hotel was in a nice location, by the sea.

The work stuff

The SDK is a core part of Ubuntu and provides an array of components and flexibility needed to create applications across staged and windowed form factors, with good design and user experience in mind.

The purpose of the sprint was to have the designers and engineers come together to work on tools and components such as palette themes, bottom edge, header, scrollbars, focus handling, dialogs, buttons, menus, text selections and developer tasks such as IDE, packaging and application startup.

Monday morning started with walking into our venue that looked somewhat like a classroom.

 

Classroom

The first task of the day required some physical activity of moving all the tables around so that the environment was much more conducive to a collaborative sprint.

Jouni presenting

Each day we broke off into working groups for our respective sessions and ironed out any existing issues, as well as working through new and exciting features that would enhance different SDK components.

Theme palette session: Jamie, Pierre and Zsombor working hard on the colour palette.


Old school pointing devices: Jamie gives it a go, looking very much like a professor!

What we achieved

During the course of the week we achieved what we’d set out to do:

  • Amended the theme palette to include missing colours, and applied these to various components
  • Completed the implementation and released the bottom edge component into the staging environment
  • Completed the section scrolling prototype and had it reviewed by visual design and UX
  • Completed the portrait and landscape edit mode header prototype
  • Worked out the behaviour of complex SDK components for focus handling and added some best-practice examples to the specification
  • Communicated and gained consensus on the context menu design; the team is now gearing up for some pre-requisite work and then implementation of context menus
  • Prepared the visual rules for buttons and made the Ubuntu shape ready to use for buttons
  • Completed the design for sliders
  • Discussed a tree view component for navigation
  • Created a first draft of the tabs wireframes, with functionality agreed
  • Created and reviewed a first draft of the text selection visuals; UX and functionality were discussed, ready to include in the specification
  • Created the Libertine packaging project and containers
  • Tidied up the IDE
  • Created some Snappy packages and got them working
  • Ramped up some new investigative work that arose in our collaboration

The planets aligned… literally

In the early hours of Wednesday morning (before breakfast) a few of us managed to witness a planetary conjunction (Venus, Mars and Jupiter), which was truly amazing… a surprise benefit of sprinting in the Arctic Circle.
Even though there were only a few hours of daylight, we managed to embrace the cold and stand outside to enjoy the beautiful views during lunch and coffee breaks.

The bay

All in all, it was a very productive and fun sprint. We left with a sense of accomplishment and camaraderie.

Read more
Steph Wilson

Today we celebrate our amazing Ubuntu Community and show our appreciation for all the hard work put into making Ubuntu what it is today.

Ubuntu is not just an operating system; it is a whole community in which everybody collaborates with everybody else to bring to life a wonderful human experience. When you download the ISO, burn it, install it and start to enjoy it, you know that a lot of people made magnificent efforts to deliver the best Ubuntu OS possible.

To show our appreciation, the Community Managers and Designers have nominated several community application developers to receive a special thank you for their outstanding work:

  • Dan Chapman (dekko)
  • Boren Zhang (dekko)
  • Kunal Parmar (calendar)
  • Stefano Verzegnassi (docviewer)
  • Riccardo Padovani (calculator, notes)
  • Bartosz Kosiorek (calculator, clock)
  • Roman Shchekin (shorts, docviewer)
  • Joey Chan (shorts)
  • Victor Thompson (music, weather)
  • Andrew Hayzen (music, weather)
  • Nekhelesh Ramananthan (clock)
  • Niklas Wenzel (terminal, dekko/platform)

We’ll send everyone an official Ubuntu keychain and sticker pack.


 

We also got hold of some other special Ubuntu items and because it is impossible to pick favourites, names were drawn out of a hat:


 

The following folks will be receiving a special Ubuntu gift from us:


3rd prize: An official Ubuntu hat – Niklas Wenzel

 

2nd prize: An official Ubuntu pad from Castelli – Andrew Hayzen

 

1st prize: An official Ubuntu wireless mouse from Xoopar – Joey Chan

 

Well done guys!

Community Appreciation Day merchandise pack

Models not included.


Show your appreciation:

  • Ping an IRC Ubuntu channel and leave a thank you
  • Send an email to a mailing list – your LoCo mailing list, for example
  • On social media:
  • Or if you see a community member in the street, go up to them and give them a well-deserved pat on the back :)

To everyone who works out of passion and love for Ubuntu: we thank you, and we hope this will encourage more contributors to join and make Ubuntu even better!

Read more
Anthony Dillon

Using Vanilla with Jekyll

We’re using NPM as Vanilla’s package manager, which gives us a number of advantages, such as an easy way to install and update the CSS framework. This all worked fine until we hit an issue with GitHub Pages: it does not support install scripts, so it is not possible to run npm install. This is highlighted in issue #4 on the Jekyll Vanilla theme project.

There are a number of ways to use Vanilla with Jekyll. Here are the methods we discussed, with their pros and cons.

Commit node_modules

This is not recommended, as it duplicates a lot of code. The repo will also grow in size, as it will include all the framework code.

Clone and commit Vanilla without NPM

Again, this would include the entire framework in the repo’s code base. Another downside would be the loss of the NPM update process.

Use Git submodules

This is the method we went with in the end. Creating a submodule in the git repo does not add all the code to the project; it just includes a reference and a path from which to pull in the framework.

Running the following command pulls the framework down into the correct location:
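A sketch of the commands involved (the repository URL and destination path are assumptions):

# Add the framework as a submodule (run once):
git submodule add https://github.com/ubuntudesign/vanilla-framework.git _sass/vanilla-framework

# Pull it down into a fresh clone:
git submodule update --init --recursive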

We lose NPM’s functionality, but submodules are understood and run when GitHub Pages are built.

Conclusion

These methods came out of a short exploration, but they solved our issue. Any better methods would be very much welcomed in the comments. You can see a demo of the Vanilla theme running on the project’s GitHub Pages site below:

Read more
Robin Winslow

Last weekend I went to my first Pycon, my second conference in a fortnight.

The conference runs from Friday to Monday, with 3 days of talks followed by one day of “sprints”, which is basically a hack day.

PyCon has a code of conduct to discourage any form of othering:

Happily, PyCon UK is a diverse community who maintain a reputation as a friendly, welcoming and dynamic group.

We trust that attendees will treat each other in a way that reflects the widely held view that diversity and friendliness are strengths of our community to be celebrated and fostered.

And for me, the conference lived up to this, with a very friendly feel and a lot of diversity among its attendees. The friendly and informal atmosphere was impressive for such a large event of more than 450 people.

Unfortunately, the Monday sprint day was cut short by the discovery of an unexploded bomb.

Many keynotes, without much Python

There were a lot of “keynote” talks: two on Friday, and one each on Saturday and Sunday. Interestingly, none of them were really about Python – instead they covered future technology, space travel, and the psychology of power and impostor syndrome.

But of course there were plenty of Python talks throughout the rest of the day – you can read about them on my other post. And I think it was a good decision to have more abstract keynotes. It shows that the Python community really is more of a general community than just a special interest group.

Van Lindberg on data economics, Marx and the Internet of Things

In the opening keynote on Friday morning, the PSF chairman showed that total computing power is almost doubling every year, and that by 2020, the total processing power in portable devices will exceed that in PCs and servers.

He then used the fact that data can’t travel faster than 11.8 inches per nanosecond to argue that we will see a fundamental shift in the economics of data processing.

The big-data models of today’s tech giants will be challenged as it starts to be quicker and make more economic sense to process data at source, rather than transfer it to distant servers to be processed. Centralised servers will be relegated to mere aggregators of pre-processed data.

He likened this to Marx’s seizing of the means of production – a movement which will empower users, as our portable Things start to hold the real information and choose who to share it with.

I really hope he’s right, and that the centralised data companies are doomed to fail to be replaced by the Internet of Autonomous Things, because the world of centralised data is not an equal world.

Does Python have a future on small processors? Isn’t it too inefficient?

In a world where all the interesting software is running on light-weight portable devices, processing efficiency becomes important once again. Van used this to argue that efforts to run Python effectively on low-powered devices, like MicroPython, will be essential for Python as a language to survive.

Daniele Procida: All I really want is power

The second keynote came just after lunch on Friday. Daniele Procida, organiser of DjangoCon Europe, openly admitted that what he really wanted out of life was power. He put forward the somewhat controversial idea that power and usefulness are the same thing, and that ideas without power are useless.

He made the very good point that power only comes to those who ask for it, or fight for it. And that if we want power not to be abused, we really need to talk about it a whole lot more, even though it makes people uncomfortable (try asking someone their salary). We should acknowledge who has the power, and what power we have, and watch where the power goes.

He suggested that, while in politics or industry power is very much a rivalled good, in open source it is entirely an unrivalled good. The way you grab power in the open source community is by doing good for the community, by helping out. And so by wielding power you are actually increasing power for those around you.

I don’t agree with him on this final point. I think power can be and is hoarded and abused in the open source community as well. A lot of people use their power in the community to edge out others, or make others feel small, or to soak up influence through talks and presentations and then exert their will over the will of others. I am certainly somewhat guilty of this. Which is why we should definitely watch the power, especially our own power, to see what effect it’s having.

The takeaway maxim from this for me is that we should always make every effort to share power, as opposed to jealously guarding it. It’s not that sharing power in the open source community is inevitable or necessarily comes naturally, but at least in the open source community sharing power genuinely can help you gain respect, where I fear the same isn’t so true of politics or industry.

Dr Simon Sheridan: Landing on a comet: From planning to reality

Simon Sheridan was an incredibly humble and unassuming man, given his towering achievements. He is a world-class space scientist who was part of the European Space Agency team who helped to land Rosetta on comet 67P.

Most of what he mentioned was basically covered in the news, but it was wonderful to hear it from his perspective.

Naomi Ceder: Confessions of a True Impostor

When, a short way into her Sunday morning keynote, Naomi Ceder asked the room:

How many of you would say that you have in some way or another suffered from imposter syndrome along with me?

Almost everybody put their hands up. This is why I think this was such an important talk.

She didn’t talk about this per se, but contributing to the open source community is hard. No-one talks about it much, but I certainly feel there’s a lot of pressure. Because of its very nature, your contributions will be open, to be seen by anyone, to be criticised by anyone. And let’s face it, your contributions are never going to be perfect. And the rules of the game aren’t written down anywhere, so the chance of being ridiculed seem pretty high. Open source may be a benevolent idea, but it’s damned scary to take part in.

I believe this is why less than 2% of open source contributors are female, compared with more like 25-30% women in software development in general. And, as with impostor syndrome, the same trend is true of other marginalised groups. It’s not surprising to me that people who are used to being criticised and discriminated against wouldn’t subject themselves to that willingly.

And, as Naomi’s question showed, it is not just marginalised people who feel this pressure, it’s all of us. And it’s a problem. As we know, confidence is no indicator of actual ability, meaning that many many talented people may be too scared to contribute to open source.

As Naomi pointed out, impostor syndrome is a socially created condition – when people are expected to do badly, they do badly. In fact I completely agree with her suggestion that the existing Wikipedia definition of impostor syndrome (at the time of writing) could be more sensitively phrased to define it as a “social condition” rather than a “psychological phenomenon”, as well as avoiding singling out women.

While Naomi chose to focus in her talk on how we personally can try to mitigate feelings of being an impostor, I think the really important message here is one for the community. It’s not our fault that open source is scary, that’s just the nature of openness. But we have to make it more welcoming. The success of the open source movement really does depend on it being diverse and accepting.

What I think is really interesting is that stereotype threat can be mitigated by reminding people of their values, of what’s important to them. And this is what I hope will save open source. The more we express our principles and passion for open source, the more we express our values, the easier it is to counter negative feelings, to be welcoming, to stop feeling like impostors.

A great conference

Overall, the conference was exhausting, but I’m very grateful that I got to attend. It was inspiring and informative, and a great example of how to maintain a great community.

If you want you can now go and read about the other talks.

(Also published on robinwinslow.co.uk)

Read more
Robin Winslow

The weekend before last, I went to PyCon UK 2015.

I already wrote about the keynotes, which were more abstract. Here I’m going to talk about the other talks I saw, which were generally more technical or at least had more to do with Python.

Summary

The talks I saw covered a whole range of topics – from testing through documentation and ways to achieve simplicity to leadership. Here are some key take-aways:

The talks

Following are slightly more in-depth summaries of the talks I thought were interesting.

Friday

Leadership of Technical Teams – Owen Campbell

There were two key points I took away from this talk. The first was Owen’s suggestion that leaders should take every opportunity to practice leading. Find opportunities in your personal life to lead teams of all sorts.

The second point was more complex. He suggested that all leaders exist on two spectra:

  • Amount of control: hands-off to dictatorial
  • Knowledge of the field: novice to expert

The less you know about a field the more hands-off you should be. And conversely, if you’re the only one who knows what you’re talking about, you should probably be more of a dictator.

Although he cautioned that people tend to mis-estimate their ability, and particularly when it comes to process (e.g. agile), people think they know more than they do. No-one is really an expert on process.

He suggested that leading technical teams is particularly challenging because you slide up and down the knowledge scale on a minute-to-minute basis sometimes, so you have to learn to be authoritative one moment and then permissive the next, as appropriate.

Document all the things – Kristian Glass

Kristian spoke about the importance, and difficulty, of good documentation.
Here are some particular points he made:

  • Document why a step is necessary, as well as what it is
  • Remember that error messages are documentation
  • Try pair documentation – novice sitting with expert
  • Checklists are great
  • Stop answering questions face-to-face. Always write it down instead.
  • Github pages are better than wikis (PRs, better tracking)

One of Kristian’s main points was that writing documentation goes against the grain: the person with the knowledge can’t see why it’s important, and the novice can’t write the documentation.

He suggested pair documentation as a solution, which sounds like a good idea, but I was also wondering if a StackOverflow model might work, where users submit questions, and the team treat them like bugs – need to stay on top of answering them. This answer base would then become the documentation.

Saturday

Asking About Gender – the Whats, Whys and Hows – Claire Gowler

Claire spoke about how so many online forms expect people to be either simply “male” or “female”, when the truth can be much more complicated.

My main takeaway from this was the basic point that forms very often ask for much more information than they need, and make too many assumptions about their users. When it comes to asking someone’s name, try radically reducing the complexity by just having one text field called “name”. Or better yet, don’t even ask their name if you don’t need it.

I think this feeds into the whole field of simplicity very nicely. Very many apps try to do much more than they need to, and ask for much more information than they need. Thinking about how little you really need to know about your users can help you realise what you don’t need to ask them.

Finding more bugs with less work – David R. MacIver

David MacIver is the author of the Hypothesis testing library.

Hypothesis is a Python library for creating unit tests which are simpler to write and more powerful when run, finding edge cases in your code you wouldn’t have thought to look for. It is stable, powerful and easy to add to any existing test suite.

When we write tests normally, we choose the input cases ourselves, and we often end up being really kind to our tests. E.g.:
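An illustrative hand-picked case (mean here is a hypothetical function under test):

def test_mean():
    assert mean([1, 2, 3]) == 2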

What Hypothesis does is help us test with a much wider and more challenging range of values. E.g.:
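The same function tested as a property over generated inputs – a sketch using Hypothesis’s real @given decorator and strategies, with mean still hypothetical:

from hypothesis import given
import hypothesis.strategies as st

@given(st.lists(st.floats(allow_nan=False, allow_infinity=False), min_size=1))
def test_mean_is_bounded(xs):
    # Hypothesis generates and shrinks many input lists, hunting for edge cases
    assert min(xs) <= mean(xs) <= max(xs)

(This kind of property famously trips up naive mean implementations via floating-point overflow – exactly the sort of edge case you wouldn’t have thought to look for.)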

There are many cases where Hypothesis won’t be much use, but it’s certainly good to have in your toolkit.

Sunday

Simplicity Is A Feature – Cory Benfield

Cory presented simplicity as the opposite of complexity – that is, the fewer options something gives you, the more simple and straightforward it is.

“Simplicity is about defaults”

To present as simple an interface as possible, the important thing is to have as many sensible defaults as possible, so the user has to make hardly any choices.

Cory was heavily involved in the Python Requests library, and presented it as an example of how to achieve apparent simplicity in a complex tool.

“Simple things should be simple, complex things should be possible”

He suggested thinking of an “onion model”, where your application has layers, so everything is customisable at one of the layers, but the outermost layer is as simple as possible. He suggested that 3 layers is a good number:

  • Layer 1: Low-level – everything is customisable, even things that are just for weird edge-cases.
  • Layer 2: Features – a nicer, but still customisable interface for all the core features.
  • Layer 3: Simplicity – hardly any mandatory options, sensible defaults
    • People should always find this first
    • Support 80% of users 80% of the time
    • In the face of ambiguity do the right thing

He also mentioned that he likes README-driven development, which seems like an interesting approach.

How (not) to argue – a recipe for more productive tech conversations – Harry Percival

I think this one could be particularly useful for me.

Harry spoke about how many people (including him) have a very strong need to be right. Especially men. Especially those who went to boarding school. And software development tends to be full of these people.

Collaboration is particularly important in open source, and strongly disagreeing with people rarely leads to consensus, in fact it’s more likely to achieve the opposite. So it’s important that we learn how to get along.

He suggests various strategies to try out, for getting along with people better:

  • Try simply giving in, do it someone else’s way once in a while (hard to do graciously)
  • Socratic dialogue: Ask someone to explain their solution to you in simple terms
  • Dogfooding – try out your idea before arguing for its strength
  • Bide your time: Wait for the moment to see how it goes
  • Expose yourself to other social situations, where arguments are less acceptable

All of this comes down to stepping back, waiting and exercising humility. All of which are easier said than done, but all of which are very valuable if I could only manage it.

FIDO – The dog ate my password – Alex Willmer

After covering fairly common ground of how and why passwords suck, Alex introduced the FIDO alliance.

The FIDO alliance’s goal is to standardise authentication methods and hopefully replace passwords. They have created two standards for device-based authentication to try to replace passwords:

  • UAF: First-factor passwordless biometric authentication
  • U2F: Second-factor device authentication

Browsers are just starting to support U2F, whereas support for UAF is farther off. Keep an eye out.

Data Visualisation with Python and Javascript – crafting a data-visualisation for the web – Kyran Dale

Kyran spoke about visualising data, and demoed using Scrapy and Pandas to retrieve Nobel laureate data from Wikipedia, using Flask to serve it as a RESTful API, and then using D3 to create an interactive browser-based visualisation.

(Also published on robinwinslow.co.uk)

Read more
Robin Winslow

Prepare for when Ubuntu freezes

I routinely have at least 20 tabs open in Chrome, 10 files open in Atom (my editor of choice) and I’m often running virtual machines as well. This means my poor little X1 Carbon often runs out of memory, at which point Ubuntu completely freezes up, preventing me from doing anything at all.

Just a few days ago I had written a long post which I lost completely when my system froze, because Atom doesn’t yet recover documents after crashes.

If this sounds at all familiar to you, I now have a solution! (Although it didn’t save me in this case because it needs to be enabled first – see below.)

oom_kill

The magic SysRq key can run a bunch of kernel-level commands. One of these commands is called oom_kill. OOM stands for “Out of memory”, so oom_kill will kill the process taking up the most memory, to free some up. In most cases this should unfreeze Ubuntu.

You can run oom_kill from the keyboard with the following shortcut:
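The combination is:

alt + SysRq + f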

Except that this is disabled by default on Ubuntu:
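The shipped default in /etc/sysctl.d/10-magic-sysrq.conf only enables a safe subset of SysRq functions (the exact value may vary by release):

kernel.sysrq = 176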

Enabling SysRq functions

For security reasons, SysRq keyboard functions are disabled by default. To enable them, change the value in the file /etc/sysctl.d/10-magic-sysrq.conf to 1:
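So the setting becomes:

kernel.sysrq = 1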

And to enable the new config run:
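One way to do that is to reload all sysctl configuration (the exact command the post used is an assumption):

sudo sysctl --system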

SysRq shortcut for the Thinkpad X1

Most laptops don’t have a physical SysRq key. Instead they offer a keyboard combination to emulate the key. On my Thinkpad, this is fn + s. However, there’s a quirk: the SysRq key is only registered as “pressed” when you release it.

So to run oom_kill on a Thinkpad, after enabling it, do the following:

  • Press and hold alt
  • To emulate SysRq, press fn and s keys together, then release them (keep holding alt)
  • Press f

This will kill the most expensive process (usually the browser tab running inbox.google.com, in my case), and free up some memory.

Now, if your computer ever freezes up, you can just do this, and hopefully fix it.

(Also posted on robinwinslow.uk)

Read more
Tristram Oaten

Publishing Vanilla

We’ve got a new CSS framework at Canonical, named Vanilla. My colleague Ant has a great write-up introducing Vanilla. Essentially it’s a CSS microframework powered by Sass. The build process consists of two steps, an open source build, and a private build.

Open Source Build

While there are inevitably components that need to be kept private (keys, tokens, etc.), being Canonical we want to keep as much of the build in the open as possible, in addition to the code. We wanted the build to be as automated and as close to CI/CD principles as possible. Here’s what happens:

Committing to our GitHub repository kicks off a Travis build that runs gulp tests, which include sass-lint. We also use david-dm.org to make sure our npm dependencies are up to date. All of these have nice badges we can link to right from our GitHub page, so the first thing people see is the health of our project. I really like this: it keeps us honest, and informs the community.

Not everything can be done with Travis, however, as publishing Vanilla to npm and updating our project page and demo site require some private credentials. For the confidential build, we use Jenkins (formerly Hudson), a Java-based build management system.

Private Build with Jenkins

Our Jenkins build does a few things:

  1. Increment the package.json version number
  2. npm publish (package)
  3. Build Sass with npm install
  4. Upload css to our assets server
  5. Update Sassdoc
  6. Update demo site with new CSS

Robin put this functionality together in a neat bash script: publish.sh.

We use this script in a Jenkins build that we kick off with one of three parameters – point, minor or major – to indicate which part of the version to update in package.json. This gives our devs push-button releases on the fly, with the same build, from bugfixes all the way up to stable releases (1.0.0).

After less than 30 seconds, our demo site, which showcases framework elements and their usage, is updated. This demo is styled with the latest version of Vanilla, and also serves as documentation and a test of the CSS. We take advantage of GitHub’s HTML publishing feature, GitHub Pages. Anyone can grab – or even hotlink – the files on our release page.

The Future

It’d be nice for the regression test (which we currently just eyeball) to be automated, perhaps with a visual diff tool such as PhantomCSS or a bespoke solution with Selenium.

Wrap-up

Vanilla is ready to hack on, go get it here and tell us what you think! (And yes, you can get it in colours other than Ubuntu Orange)

Read more
Robin Winslow


I recently tried to setup OpenID for one of our sites to support authentication with login.ubuntu.com, and it took me much longer than I’d anticipated because our site is behind a reverse-proxy.

My problem

I was trying to set up OpenID with the django-openid-auth plugin. Normally our sites don’t include absolute links (https://example.com/hello-world) back to themselves, because relative URLs (/hello-world) work perfectly well, so Django doesn’t normally need to know the domain name it’s hosted at.

However, when authenticating with OpenID, our website needs to send the user off to login.ubuntu.com with a callback URL so that, once they’re successfully authenticated, they can be directed back to our site. This means that django-openid-auth needs to ask Django for an absolute URL to send off to the authenticator (e.g. https://example.com/openid/complete).

The problem with proxies

In our setup, the Django app is served with a light Gunicorn server behind an Apache front-end which handles HTTPS negotiation:

User <-> Apache <-> Gunicorn (Django)

(There’s actually an additional HAProxy load-balancer in between, which I thought was complicating matters, but it turns out HAProxy was just passing through requests absolutely untouched and so was irrelevant to the problem.)

Apache was setup as a reverse-proxy to Django, meaning that the user only ever talks to Apache, and Apache goes off to get the response from Django itself, with Django’s local network IP address – e.g. 10.0.0.3.

It turns out this is the problem. Because Apache, and not the user directly, is making the request to Django, Django sees the request come in at http://10.0.0.3/openid/login rather than https://example.com/openid/login. This meant that django-openid-auth was generating and sending the wrong callback URL of http://10.0.0.3/openid/complete to login.ubuntu.com.

How Django generates absolute URLs

django-openid-auth uses HttpRequest.build_absolute_uri which in turn uses HttpRequest.get_host to retrieve the domain. get_host then normally uses the HTTP_HOST header to generate the URL, or if it doesn’t exist, it uses the request URL (e.g.: http://10.0.0.3/openid/login).

However, after inspecting the code for get_host I discovered that if and only if settings.USE_X_FORWARDED_HOST is True then Django will look for the X-Forwarded-Host header first to generate this URL. This is the key to the solution.

Solving the problem – Apache

In our Apache config, we were initially using mod_rewrite to forward requests to Django.

RewriteEngine On
RewriteRule ^/?(.*)$ http://10.0.0.3/$1 [P,L]

However, when proxying with this method, Apache2 doesn’t send the X-Forwarded-Host header that we need. So we changed it to use mod_proxy:

ProxyPass / http://10.0.0.3/
ProxyPassReverse / http://10.0.0.3/

This then means that Apache will send three headers to Django: X-Forwarded-For, X-Forwarded-Host and X-Forwarded-Server, which will contain the information for the original request.

In our case the Apache frontend used the HTTPS protocol, whereas Django was only speaking plain HTTP, so we had to pass that through as well by manually setting Apache to send an X-Forwarded-Proto header to Django. Our eventual config changes looked like this:

<VirtualHost *:443>
    ...
    RequestHeader set X-Forwarded-Proto 'https' env=HTTPS

    ProxyPass / http://10.0.0.3/
    ProxyPassReverse / http://10.0.0.3/
    ...
</VirtualHost>

This meant that Apache now passes through all the information Django needs to properly build absolute URLs; we just need to make Django parse it properly.

Solving the problem – Django

By default, Django ignores all X-Forwarded headers. As mentioned earlier, you can set get_host to read the X-Forwarded-Host header by setting USE_X_FORWARDED_HOST = True, but we also needed one more setting to get HTTPS to work. These are the settings we added to our Django settings.py:

# Setup support for proxy headers
USE_X_FORWARDED_HOST = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')

After changing all these settings, we now have Apache passing all the relevant information (X-Forwarded-Host, X-Forwarded-Proto) so that Django is now able to successfully generate absolute URLs, and django-openid-auth now works a charm.

Read more
Robin Winslow

We recently introduced Vanilla framework, a light-weight styling framework which is intended to replace the old Guidelines framework as the basis for our Ubuntu and Canonical branded sites and others.

One of the reasons we created Vanilla was because we ran into significant problems trying to use Guidelines across multiple different sites because of the way it was made. In this article I’m going to explain how we structured Vanilla to hopefully overcome these problems.

You may wish to skip the rationale and go straight to “Overall structure” or “How to use the framework”.

Who’s it for?

We in Canonical’s design team will definitely be using Vanilla, and we also hope that other teams within Canonical can start to use it (as they did with Guidelines before it).

But most importantly, it would be fantastic if Vanilla offers a solid enough styling basis that members of the wider community feel comfortable using it as well. Guidelines was never really safe for the community at large to use with confidence.

This is why we’ve made an effort to structure Vanilla in such a way that any or all of it can be used with confidence by anyone.

Limitations of Guidelines

Guidelines was initially intended to solve exactly one problem – to be a single resource containing all the styling for ubuntu.com. This would mean that we could update Guidelines whenever we needed to update ubuntu.com’s styling, and those changes would propagate across all our other Ubuntu-branded sites (e.g.: cn.ubuntu.com or developer.ubuntu.com).

So we simply structured the markup of these sites in the same way, and then created a single hosted CSS file, and linked to it from all the sites that needed Ubuntu styling.

As time went on, two large problems with this solution emerged:

  • As over 10 sites were linking to the same CSS file, updating that file became very cumbersome, as we’d have to test the changes on every site first.
  • As the different sites became more individual over time, we found we were having to override the base stylesheet more and more, leading to overly complex and confusing local styling.

This second problem was only exacerbated when we started using Guidelines as the basis for Canonical-branded sites (e.g.: canonical.com) as well, which had a significantly different look.

Architecture goals for Vanilla

Learning from our experiences with Guidelines, we planned to solve a few specific problems with Vanilla:

  • Website projects could include only the CSS code they actually needed, so they wouldn't have to override lots of unnecessary CSS.
  • We could release new changes to the framework without worrying about breaking existing sites, allowing us to iterate quickly.
  • Other projects could still easily copy the styles we use on our sites with minimal work.

To solve these problems, we decided on the following goals:

  • Create a basic framework (Vanilla) which only contains the common elements shared across all our sites.

    • This framework should be written in a modular way, so it’s easy to include only the parts you need
  • Extend the basic framework in “theme” projects (e.g. ubuntu-vanilla-theme) which will apply specific styling (colours etc.) for that specific brand.

    • These themes should also only contain code which needs to be shared. Site-specific styling should be kept local to the project
  • Still provide hosted compiled CSS for sites to hotlink to if they like, but force them to link to a specific version (e.g. vanilla-framework-version-0.0.15.css) rather than “latest” so that we can release a new version without worry.

Sass modularisation

This modular structure would be impossible in pure CSS. CSS itself offers no mechanism for encapsulation. Fortunately, our team has been using Sass to write our CSS for a while now, and Sass offers some important mechanisms that help us modularise our code. So what we decided to create is actually a Sass mixin library (like Bourbon for example) using the following mechanisms:

Default variables

Setting global variables is essential for the framework, so we can keep consistent settings (e.g. font colours, padding etc.). Variables can also be declared with the !default flag. This allows the framework’s settings to be overridden when extending the framework:
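For example, here is a minimal sketch of the pattern (the variable name is illustrative, not one of Vanilla's actual settings):

// In the framework: this value applies only if the importing
// project hasn't already set the variable
$font-size: 16px !default;

// In a theme, before importing the framework:
$font-size: 18px;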

We’ve used this pattern in each of the Vanilla themes we’ve created.

Separating concerns into separate files

Sass’s @import feature allows us to encapsulate our code into files. This not only keeps our code tidier, but it means that anyone hoping to include some parts of our framework can choose which files they want:
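In practice that might look like the following sketch (the module file names are illustrative):

// Import only the framework files your project needs
@import "settings";
@import "grid";
@import "buttons";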

Keeping everything in a mixin

When a Sass file is imported, any loose CSS is compiled directly to the output. But anything declared inside a @mixin will not be output unless you call the mixin.

Therefore, we set a goal of ensuring that all parts of our library can be imported without any CSS being output, so that you can import the whole module but just choose what you want output into your compiled CSS:
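As a rough sketch of the pattern (the mixin and selector names here are illustrative):

// _forms.scss: nothing is output until the mixin is called
@mixin vf-forms {
  input,
  textarea {
    border: 1px solid #ccc;
  }
}

// In your local Sass: import the module, then output only what you call
@include vf-forms;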

Namespacing

To avoid conflicts with any local sass setup, we decided to namespace all our mixins with the vf- prefix – e.g. vf-grid or vf-header.

Overall structure

Using the aforementioned techniques, we created one base framework, Vanilla Framework, which contains (at the time of writing) 19 separate “modules” (vf-buttons, vf-grid etc.). You can see the latest release of the framework on the project’s homepage, and see the framework in action on the demo page.

The framework can be customised by overriding any of the global settings inside your local Sass, as described above.

We then extended this basic framework with three branded themes – Ubuntu, Canonical and Cloud – which we will use across our sites.

You can of course create your own themes by extending the framework in the same way.

NPM modules

To make it easy to include Vanilla Framework in our projects, we needed to pick a package manager for installing it and tracking versions. We experimented with Bower, but in the end we decided to use the Node package manager (NPM), so now anyone can install and use the framework and its theme packages.
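Installation is then a single NPM command, shown here for the base framework (theme packages such as ubuntu-vanilla-theme install the same way):

npm install vanilla-framework --save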

Hotlinks for compiled CSS

Although for in-depth usage of our framework we recommend that you install and extend it locally, we also provide hosted compiled CSS files, both minified and unminified, for the Vanilla framework itself and all Vanilla themes, which you can hotlink to if you like.

To find the links to the latest compiled CSS files, please visit the project homepage.

How to use the framework

The simplest way to use the framework is to hotlink to it. To do this, simply link to a specific released version (minified or unminified) directly in your HTML:
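For example (the host and path here are placeholders; the versioned filename follows the pattern mentioned earlier):

<link rel="stylesheet" href="https://example.com/css/vanilla-framework-version-0.0.15.css" />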

However, if you want to take full advantage of the framework’s modular nature, you’ll probably want to install it directly in your project.

To do this, add the latest version of vanilla-framework to your project’s package.json as follows:
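For example (the version number shown is illustrative; pin whichever release you have tested against):

{
  "dependencies": {
    "vanilla-framework": "0.0.15"
  }
}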

Then, after you’ve npm installed, include the framework from the node_modules folder:
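A minimal sketch, assuming the framework's build file lives at its usual path under node_modules (adjust the relative path for your project):

// Import the framework's main build file, then output its styles
@import "../node_modules/vanilla-framework/build/scss/build";
@include vanilla;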

The future

We will continue to develop Vanilla Framework, with version 0.1.0 just around the corner. You can track our progress over on the project homepage and on GitHub.

In the near future we’ll switch over ubuntu.com and canonical.com to using it, and when we do we’ll definitely blog about it.

Read more
Richard McCartney

Converting old guidelines to Vanilla

How the previous guidelines worked

Guidelines is essentially a framework built by the Canonical web design team. The framework has an array of tools that make it easy to create Ubuntu-themed sites. The guidelines were a collaboration between developers and designers and enforced a consistent look, which meant in-house teams and community websites could share a consistent brand feel.

It worked in one way: a large framework of modules, helpers and components which built the Ubuntu style for all our sites. This structure required a lot of overrides and workarounds for different projects, adding to the bloat the guidelines had accumulated. Canonical and cloud sites required a large set of overrides to imprint their own visual requirements, which created a lot of duplication and overhead for each site.

There was no build system, and no way to get the latest version other than using the hosted pre-compiled guidelines or pulling from our Bazaar repository. Not having any form of build step meant relying on a local Sass compiler or setting up a watcher for each project. We also had no viable way to check for linting errors or to enforce a concrete coding standard.

The framework itself was a CSS framework ported into Sass, not utilising placeholders or mixins correctly, and with a bloated number of variables. Changing one colour, for example, or the size of an element, wasn't as easy as passing set values to a mixin or changing a single variable.

Unlike Vanilla, where all preprocessor styles are created via mixins, responsive changes were made in a large media query at the end of each document, and this was repeated again for our Canonical and Cloud styles.

Removing Ubuntu and Canonical from theme

Our first task in building Vanilla was to identify all elements which were ‘Ubuntu’ centric: anything with a unique class, colour or style. Once these were identified, the team systematically took each section of the guidelines, removed the Ubuntu-specific classes or variables, and created new versions. With that stage complete, the team could then look at refactoring and updating the code.

Clean-up and making it generic

When starting this project, we decided to update how we write any new module or element. Linting was a big factor, and using a build system like Gulp finally gave us the ability to adhere to a coding standard. This meant a lot of modules and elements had to be rewritten and improved: trimming down the Sass nesting, applying new techniques such as flexbox, and cleaning up duplicated styles.

But the main goal was to make it generic, extendable and easy, which was not the simplest of tasks. It meant removing any custom modules or specific styles and classes, and also building the framework so that it could be changed via a variable update or a value change within a mixin. We wanted the Vanilla theme to inherit another developer's style and have that cascade throughout the whole framework with ease. Setting the brand colour, for example, affects the whole framework and changes a multitude of modules and elements, but you are not restricted in the way we were by the bottleneck of the old guidelines.

Using Sass mixins

Mixins are a powerful part of Sass which we weren't utilising. In Guidelines they were used to create preprocessor polyfills, which was annoying; Gulp now removes that need. In Vanilla we used mixins to modularise the entire framework, giving projects flexibility over which parts of the framework they require.

The ability to easily turn a section of Vanilla on or off felt very powerful, but was also required: we wanted developers to choose exactly what their project needed, the opposite of Guidelines, where you received the entire framework. In Vanilla, each of our elements or modules is encapsulated within a mixin, and some take values which affect them. For example, the buttons mixin:

@mixin vf-button($button-color, $button-bg, $border-color) {
  @extend %button-pattern;
  color: $button-color;
  background: $button-bg;

  // Only draw a border when a border colour is supplied
  @if $border-color != null {
    border: 1px solid $border-color;
  }

  &:hover {
    background: darken($button-bg, 6.2%);

    // Transparent buttons also get an underline on hover
    @if $button-bg == $transparent {
      text-decoration: underline;
    }
  }
}

The above code shows how this mixin isn't attached to fixed styles or colours. When building a new Vanilla theme, a few variable changes will style any button to the project's requirements. We have replicated this approach throughout the project, and it makes for a far more modular framework.
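Calling the mixin might look like this sketch (the class name is hypothetical; $brand-colour is the global brand variable discussed below):

// A brand-coloured button with no border
.button--primary {
  @include vf-button(#fff, $brand-colour, null);
}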

Creating new themes

As mentioned earlier, a few changes can set up a whole new theme in Vanilla, using it as a base and then adding or extending new styles. Changing the branding or a font family just requires overwriting the default value: for example, $brand-colour: $orange !default; is set in the global variables document. Setting $brand-colour: #990000; in another document will change any element affected by the brand colour, creating the beginning of a new theme.
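Put together, the start of a new theme can be as small as this sketch (the file name and import path are illustrative):

// my-theme.scss: override defaults first, then pull in the framework
$brand-colour: #990000;
@import "node_modules/vanilla-framework/build/scss/build";
@include vanilla;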

We can also do this per module mixin, including the module into a new class or element and then extending or adding to it. This means themes are not constrained to what is already there, which gives more freedom. This method is particularly useful for the web team as we build themes for Ubuntu, Canonical and Cloud products.

An example of a live theme we have created is the Ubuntu Vanilla theme. This is an extension of the Vanilla framework, set up to override any required variables to give it the Ubuntu brand. Diving into theme.scss shows all the elements used from Vanilla, but also Ubuntu-specific modules. These are used exclusively for the Ubuntu brand, but are structured in the same manner as the Vanilla framework. This reduces the complexity of maintaining the themes, and developers can easily pick up what has been built or use it as a reference for building their own theme versions.

Read more
Peter Mahnke

Ubuntu is a big Open Source project and there are a lot of websites in our community. The web team at Canonical literally doesn’t even know how many sites there are. We have heard there are over 200 ubuntu.com subdomains alone, but we know that there are many more that are owned by local groups and teams outside that single ubuntu.com domain.

Traditionally most of our work has been on www.ubuntu.com and www.canonical.com, but over the years, we have designed, often built and occasionally are responsible for the content of a series of key sites like: insights.ubuntu.com, design.ubuntu.com, developer.ubuntu.com, design.canonical.com. And we have often attempted to provide on-brand versions of wiki and WordPress templates.

As the number of sites grew, we got tired of re-creating grids, templates, CSS all the time.

Enter guidelines

To resolve these issues, we created the Ubuntu web guidelines. Instead of sites built from cobbled-together CSS and a borrowed grid, guidelines gave us something far more formalised and systematic: a grid, typography, core styles and patterns, all aligned with our beautiful Ubuntu brand guidelines. We were not only able to maintain a whole set of sites from a single hosted set of CSS files, but others could borrow and use it easily. We even transitioned the guidelines to be responsive without breaking our sites. You can read more in our series of posts Making ubuntu.com responsive.

Exit guidelines

Around two years ago, the web team started supporting the design and development of some of Canonical’s cloud apps, including Juju, MAAS, and the Canonical OpenStack Autopilot installer. These apps have a different look and feel than ubuntu.com. And they often have special requirements: for example, MAAS is likely to be run in data centres without internet access, so assets like fonts, images and CSS must be available locally, something the guidelines did not natively support.

We looked at how to best adapt the guidelines to work with these web apps. We looked at how we were already making www.canonical.com work, essentially overriding the Ubuntu-branded guidelines, and decided to change the entire approach.

Enter Vanilla

For Vanilla, we wanted to start over, but not have to rewrite everything. So our quick list of project goals was:

  • Minimise the changes to our existing html
  • Create a core theme that distilled the guidelines to its basic Ubuntu-ness
  • Make everything more modular, easy to add or remove components
  • Make it easy for anyone to create themes for each new project that could borrow from other themes
  • Create themes for Ubuntu and Canonical websites
  • Remove our reliance on JavaScript
  • Make it work stand-alone
  • Make it easy to build, develop and update
  • Invite other people both inside and outside Canonical to start using the framework

The future

So now we are close to releasing the first version of Vanilla. Canonical.com and ubuntu.com will be moved over the coming months. Then we will look at moving other projects, like MAAS, jujucharms.com and Landscape, to the framework.

Please keep reading these posts; you can see Ant’s first post, Introducing Vanilla. And take a look at the project on GitHub and let us know what you think.

Read more
Anthony Dillon

Why we needed a new framework

Some time ago the web team at Canonical developed a CSS framework that we called ‘Guidelines’. Guidelines helped us maintain our online visual language across all our sites, and comprised a number of base and component Sass files which were combined and served as a monolithic CSS file from our asset server.

We began to use Guidelines as the baseline styles for a number of our sites: www.ubuntu.com, www.canonical.com, etc.

This worked well until we needed to update a component or base style. With each edit we had to check it wasn’t going to break any of the sites we knew used it, and hope it didn’t break the sites we were not aware of.

Another deciding factor for us was the feedback we started receiving as internal teams adopted Guidelines. We received a resounding request to break the components into modular parts, so that teams could customise which ones to include. Another frequent request was the ability to pull the Sass files locally for offline development while keeping the styling up to date.

Therefore, we set out to develop a new and improved build and delivery system, which led us to develop a whole new architecture and completely refactor the Sass infrastructure.

This gave birth to Vanilla; our new and improved CSS framework.

Building Vanilla

The first decision we made was to remove the “latest” version target, so sites could no longer directly link to the bleeding-edge version of the styles. Instead, sites should target a specific version of Vanilla and manually upgrade as new versions are released. This helps twofold: shifting the testing and QA to the maintainers of each particular site allows for staggered updates, without a sweeping update to all sites at once; and it allows us to modify current modules without affecting sites until they apply the update.

We knew that we needed to make the update process as easy as possible to help other teams keep their styles up to date. We decided against using Bower as our package manager and chose NPM to reduce the number of dependencies required to use Vanilla.

We knew we needed a build system and, as it was a greenfield project, the world was our oyster. Really it came down to Gulp vs Grunt. We had a quick discussion and decided to run with Gulp as we had more experience with it. Gulp had all the plugins we required, and we all preferred the Gulp syntax to the Grunt spaghetti.

We had a number of JavaScript functions in Guidelines to add simple dynamic functionality to our sites, such as equal heights or tabbed content. The team decided we wanted to try to remove the JS dependency for Vanilla and make it a pure CSS framework. So we stepped through each function and worked out, most importantly, whether we required it at all. If so, we tried to develop a CSS replacement with an acceptable degradation for less modern browsers. We managed to cover all required functions with CSS and removed some older functionality we no longer wanted.

Using Vanilla

Importing Vanilla

To start using Vanilla, simply run $ npm install vanilla-framework --save in the root of your site. Then in your main stylesheet simply add:


@import "../path/to/node_modules/vanilla-framework/build/scss/build.scss";
@include vanilla;

The first line in the code above imports the main build file of the vanilla-framework. The second line includes the vanilla mixin; the framework is entirely controlled with mixins, which will be explained in a future post.

Now that you have Vanilla imported correctly, you should see some default styling applied to your site. To take full advantage of the framework, a small amount of markup changes are required.

Markup amendments

There are a number of classes used by Vanilla to set up the site wrappers. Please refer to the source for our demo site.


Conclusion

This is still a work-in-progress project, but we are close to releasing www.ubuntu.com and www.canonical.com based on Vanilla. Please do use Vanilla; any feedback would be very much appreciated.

For more information please visit the Vanilla project page.

Read more
Pierre Bertet


Following the article “To converge onto mobile, tablet, and desktop, think Grid Units”, here is a technical description of the way the Grid System behaves. We will go through the following concepts: the Grid Unit, the Layout, the Panel, and the Multi-Column Layout.

Grid Unit

A Grid Unit (GU) is a virtual subdivision of screen space. The actual size, in pixels, of one Grid Unit is assigned by the OS depending on the device’s screen size and density, freeing the developer from worrying about these device-specific details. For more on the system and its benefits, please see this design blog post.

Note: There are only three target short-side screen sizes in the grid system: 40, 50, and 90 GU. A Grid Unit cannot contain a fractional number of pixels, so if the screen width does not divide evenly by the desired number of Grid Units (40, 50, or 90), the remainder becomes the side margins.

Grid Unit Calculation

The width of a single Grid Unit is calculated as follows:

  • The width of the short edge of the screen is divided by the desired number of grid units (integer division).
  • The remainder, if any, gives us the size of the margins.
  • The quotient gives us the size of one Grid Unit.

In pseudocode:

margins = total_width mod layout_grid_units
grid_width = total_width - margins
grid_unit_width = grid_width / layout_grid_units
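For illustration, here is the same calculation written as a small Python function (a direct transcription of the pseudocode above, not actual platform code):

def grid_unit_metrics(total_width, layout_grid_units):
    # The remainder of the integer division becomes the side margins
    margins = total_width % layout_grid_units
    grid_width = total_width - margins
    grid_unit_width = grid_width // layout_grid_units
    return margins, grid_width, grid_unit_width

# 540x960 screen with a 50 GU Layout: (40, 500, 10)
print(grid_unit_metrics(540, 50))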

Example with a 540×960 screen and a 50 GU Layout

[Diagram: 540px total portrait width = 20px margin + 500px (50 GU) + 20px margin]

margins = 540 mod 50 = 40
grid_width = 540 - margins = 500
grid_unit = grid_width / 50 = 10

Example with a 1600×2560 screen and a 90 GU Layout

[Diagram: 1600px total portrait width = 35px margin + 1530px (90 GU) + 35px margin]

margins = 1600 mod 90 = 70
grid_width = 1600 - margins = 1530
grid_unit = grid_width / 90 = 17

Layout

A Layout represents the desired number of Grid Units for the short edge of the screen. That number will be used to calculate the width of a single Grid Unit in pixels, using the method described in the Grid Units section. For touch devices, the available layouts are 40 GU, 50 GU (phones or phablets), and 90 GU (tablets).

Landscape Grid Units Count Calculation

The number of Grid Units in Landscape Orientation is calculated as follows:

  • The width of the long edge of the screen is divided by the width of a single grid unit (integer division).
  • The remainder, if any, gives us the size of the margins.
  • The quotient gives us the number of Grid Units in the Landscape Orientation.

In pseudocode:

margins = total_width mod grid_unit_width
grid_width = total_width - margins
grid_unit_count = grid_width / grid_unit_width
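Again as a Python transcription of the pseudocode (illustrative only):

def landscape_grid_unit_count(total_width, grid_unit_width):
    # Any remainder becomes margins; the quotient is the landscape GU count
    margins = total_width % grid_unit_width
    return (total_width - margins) // grid_unit_width

# 960px landscape width with 1 GU = 10px: 96 GU
print(landscape_grid_unit_count(960, 10))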

Example with a 540×960 screen, 50 GU Layout and 1 GU = 10px

[Diagram: 960px total landscape width = 96 GU, no margins]

margins = 960 mod 10 = 0
grid_width = 960 - margins = 960
grid_unit_count = grid_width / 10 = 96

Panel

A Panel is a group of Grid Units. Its width can be any of the Layout sizes (provided it fits within the total number of Grid Units), or variable, taking up the remaining space.

Examples

90 GU Layout

[Diagram: a 90 GU Layout (portrait orientation) split into a 40 GU Panel and a 50 GU Panel]

147 GU Layout

[Diagram: a 147 GU Layout (landscape orientation) split into a 40 GU Panel, a 50 GU Panel and a variable 57 GU Panel]

Try more combinations using the Grid System Tool.

Multi-Column Layout

A Multi-Column Layout is a set of columns that can be defined inside of a panel. It contains the following properties:

  • Side margins (before the first column and after the last column)
  • Gutters (between two columns)
  • Columns

It can use from one to six columns. In 40, 50 and 90 GU Panels, the Multi-Column Layouts have been manually selected. For other widths, an algorithm tries to find the best candidate.

The margins and gutters tend to have a 2 GU width, but it can vary depending on the available possibilities.

Examples

3 Columns in a 50 GU Panel

[Diagram: 50 GU = 2 GU margin + 14 GU column + 2 GU gutter + 14 GU column + 2 GU gutter + 14 GU column + 2 GU margin]

3 Columns in a 60 GU Panel (variable)

[Diagram: 60 GU = 2 GU margin + 18 GU column + 1 GU gutter + 18 GU column + 1 GU gutter + 18 GU column + 2 GU margin]

Try more combinations using the Grid System Tool.

Read more
Benjamin Keyser

In the converged world of Unity 8, applications will work on small mobile screens, tablets and desktop monitors (with a mouse and keyboard attached) as if by magic. To achieve this transformation for your own app with little to no extra UI work, simply design using grid units for a few predetermined virtual screen targets. Combine this with Ubuntu's off-the-shelf UI components built with convergence in mind, and most of the hard work is done, freeing developers and designers to focus on what's most important to their users.

What’s a grid unit? And why 40, 50, or 90 of them?

A grid unit (GU) is a virtual measure of screen space that’s independent of device hardware details like pixels or aspect ratio: those complexities are mapped under the covers by Ubuntu. Instead, by targeting just three ‘fixed’ virtual GU portrait widths—40, 50, and 90 GU— you’re guaranteed to be addressing the largest number of devices, including the desktop, to a high degree of design quality and consistency where relative spacing and content sizing just works.

The 40, 50, and 90 GU dimensions correspond to smaller smartphones, larger smartphones/phablets, and tablets respectively in portrait mode. These particular panel-widths weren’t chosen arbitrarily: they were selected by analyzing the most popular device specs on the market and picking the portrait dimensions that would embrace the largest number of possibilities most successfully, including for the desktop (more on that later).

For example, compact phones such as the BQ Aquarius E4.5 are best suited to the 40 GU-wide virtual portrait screen, offering the right balance of content to screen real estate for palm-sized viewing. For larger phones with more screen space such as the Meizu MX4, the 50 GU layout is most fitting, allowing more room for content. Finally, for edge-to-edge tablet portrait layouts for the N7 or N10, the 90 GU layout works best.

Try this exercise

Having trouble envisioning the system in action? Close your eyes and imagine a two-dimensional graph paper divided into squares that can adapt according to just three simple rules:

  • It can only be 40, 50, or 90 whole units along the short edge, but the long edge can be variable
  • The long edge (in landscape mode or on the desktop) will be the whole number of GUs that carves out the maximum-area rectangle fitting within the device's physical screen in landscape mode, based on the physical size of the GU (in pixels) determined in portrait mode
  • The last rule is simple but key: the squares of the graph paper must always be square. The graph paper (to push the image a bit too far) is made of something more like graphene than polypropylene: no squeezed or stretched GUs allowed.

Try it for yourself here: https://dl.dropboxusercontent.com/u/360991/canonical/grid-units/grid-units.html

There is one additional factor that can impact the final available screen area, but it’s a bit of a technical convolution. The under-the-covers pixels to grid unit mapping can’t include fractional pixels (this may seem like an obvious point, admittedly). But at the end of the day, the user sees the largest possible version of the 40, 50, or 90 GU wide virtual screen that’s possible on any given device. That means that all you have to do as a designer or developer is plan for the virtual dimensions we’ve been talking about, and you’re assured your user is getting the best possible rendering.

Though the system may seem abstract at first, its benefits are all too easy to understand from a developer or designer standpoint: it's far more predictable and simpler to design for layouts that follow rules than to try to account for a universe of idiosyncratic device possibilities. In addition, by using these layouts as the foundation, the convergence goal is much more easily achieved.

What about landscape & desktop? Use building blocks

By assembling these key portrait views together, it’s far easier to achieve landscape and desktop layouts than ever before. For example, if your app lends itself to a two panel layout, simply join together 40 and 50 GU phone layouts (that you’ve already designed) to achieve a landscape layout (or even a portrait tablet layout!)

Similarly, switching from portrait to landscape mode on tablet—also a desktop-friendly layout—could be as simple as joining a 40 GU layout and a 90 GU layout for a total of 130 GU, which fits nicely within both 16:9 and 16:10 tablet landscape screens as well as on any desktop monitor.

Since landscape and desktop layouts are the least predictable, due to device variations and manual stretching by users, you can designate one of your panel layouts to be of flexible width, filling the available space using one of these strategies:

  • Center the layout in the available space
  • Stretch or squeeze the layout to fit the available space
  • Combine these two, depending on the individual components within the layout

More complex layouts can also be achieved by joining three or more portrait layouts. For example, three 40 GU layouts can be joined side by side, and the resulting 120 GU layout happens to fit perfectly into a 4:3 landscape tablet screen.

Columns, too

To help developers even further with one of the most common layouts—columnar or grid types—we’re adding a capability that maintains column-to-content size relationships across devices and the desktop the same way that type sizes are specified. This makes it very simple to achieve the proper content readability and density regardless of the device. For example, by specifying a “medium” sized column filled with “small” type, these relative relationships can be preserved throughout the converged-device experience without having to manually dig into pixel measurements.

The column capability can also adapt responsively to extra wide, variable landscape layouts, such as 16:10 aspect ratio tablets or manually stretched desktop layouts. This means that as more space becomes available as a user stretches the corners of the app window on the desktop, additional columns can be added on cue, providing more room for content.

Putting it all together across all form factors

By making screen dimensions virtual, we can minimize the vagaries of individual hardware specs that can frustrate device-convergent thinking and help developers focus more on their user’s needs. A combination of snap-together layouts, automated column layouts, and adaptive UI toolkit components like the header, list component, and bottom edge component help ensure users will experience a consistent, elegant journey from mobile to desktop and back again.


Read more