Canonical Voices

Posts tagged with 'notes'

Magdalena Mirowicz

How we introduced Scrum to the Web Team

At the beginning of the year our team – who look after the core websites in the Ubuntu ecosystem – decided to adjust our project delivery process and go full-on Scrum. This is hardly shocking news these days. Actually, you can find way more people writing about how they decided to ditch Scrum and what a terrible idea it was to begin with. But for us, it seemed like a good thing to try and so far the results are promising.

Happy Web Team sprinting on the 16.04 release day

What is Scrum?

Scrum was born as a framework for managing the development of complex products where traditional project methodologies failed. It has a very simple set of rules that are best explained by its founding fathers Ken Schwaber and Jeff Sutherland in the Scrum Guide. Unfortunately, over the years the simple framework has been – on the one hand – contaminated with various additional concepts and, on the other, diluted by the assumption that Scrum can work in any context or on a pick-and-choose basis.

When you think Scrum do you think user stories, burndown charts and planning poker?

Well, that is what I thought before I realised that all these things are NOT part of Scrum but optional additions, which can sometimes help and sometimes be detrimental.

Life before Scrum

I joined the Canonical Web Team at the end of last year in a project manager’s role. At that point, the team was experimenting with introducing some Scrum elements, like sprint planning and retrospectives, into the already familiar Kanban method.

What worked

Standups were daily bread for the team and kept everyone in the loop. Retrospectives made us reflect on the way we work and how we can improve. Team members were skilled at planning their tasks and progressing through their work. The fact that we are a relatively small team based in the same space also helped with communication and collaboration.

What didn’t work

It felt like the partial adoption of Scrum was not really allowing us to reap all its positive effects. We operated on a one-week sprint basis. We had planning sessions at the start of the sprint, but they were mostly focused on tasks for individual team members rather than the team working out what we could complete each sprint and how to make it happen. We also didn’t involve the stakeholders in our meetings, so team members often didn’t have a good understanding of the business context behind the projects.

Our log of current and incoming work was massive, recorded in a few different places, and often not very well defined. We work with multiple stakeholders from different functions and product lines. We also have a handful of internal projects aimed at improving our ways of working (for example the great Vanilla framework initiative you can read about here). It was a challenge for us to understand our short and long term priorities. It also felt that we could benefit from making the stakeholders part of the prioritisation and planning exercise.

Life after Scrum

Our team already had one leg in Scrum, and it intuitively felt that pushing further could help us resolve at least some of the above-mentioned problems.

Sprint changes

We extended the duration of our sprints to two weeks in order to have enough time to complete usable product increments. Bonus – it reduced the overhead of weekly planning and review sessions.

We shifted our thinking about what we take on each sprint from task-level items to backlog-level items which, ideally, should be demonstrable at the end of the sprint. The goal is also to get better at understanding our velocity, improve estimation and therefore be more accurate in planning what we can accomplish in a two-week timeframe.

This change created a more sustainable work pace while at the same time adding a sense of urgency around the most important work. The sprint clock is ticking, we know what we planned to complete, and from day one we do our best to succeed. As one of our developers put it, “Team now has bi-weekly rhythm and greater visibility of overall team goals in any given sprint. There is a constant march of progress”.

Introducing a backlog

The product backlog is one of the Scrum artifacts, and our team didn’t really have one before. We decided to replace all the docs and tools where we captured upcoming projects with one simple spreadsheet hosting all briefs and other relevant project info. This allowed us to surface all the work the team had in the pipeline, including the internal improvements that didn’t have any owners outside of the Web Team. This new format supports us in defining and building consensus about our priorities.

One additional shift related to our estimation process. We decided to discard task estimating altogether for simplicity and focus only on the backlog items. The principle is that the higher a project sits in the backlog, the more we break it down into small increments, so our estimates are more accurate. We started with a big backlog refining session to get a better understanding of what we have in the pipeline. We now use a burnup chart, which I update at the end of each sprint; it helps us predict when, roughly, we may get to the projects that are further down the pipeline.
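As a concrete illustration of the burnup arithmetic, here is a minimal sketch. The point totals are invented for illustration; the real chart lives in our spreadsheet:

```python
import math

# Hypothetical numbers, purely for illustration.
total_backlog = 120                        # story points across all backlog items
completed_per_sprint = [10, 14, 12, 11]    # recorded at the end of each sprint

done = sum(completed_per_sprint)             # points burned up so far
velocity = done / len(completed_per_sprint)  # average points per sprint

remaining = total_backlog - done
sprints_left = math.ceil(remaining / velocity)

print(f"velocity: {velocity:.1f} points/sprint, ~{sprints_left} sprints to go")
```

Extending the velocity line on the chart until it crosses the total-scope line gives the same projection visually.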

Meetings, meetings and a few more meetings

We decided to introduce new meetings and slightly alter the existing ones. Every two weeks our Tuesday schedule is pretty hectic but it feels like it actually saves us time in the long run.

We start with a sprint review meeting for all team members and stakeholders. That’s where we look at what was planned for the sprint, demo what we completed and – if needed – explain what we didn’t manage to finish and why. We then move to the backlog planning session with the same group, agreeing what should make its way to the next sprint, clarifying outstanding questions and talking through new briefs that came in the meantime.

The second set of meetings involves only the Web Team. We first run our usual retrospective and after that, we plan the next sprint in detail. We discuss all backlog items planned for the sprint, identify the tasks needed to complete them, and team members pick what they will be working on. After this exercise, even without estimating, we get a pretty good idea of whether we’re overloaded or need to add more work.

So far the review and planning meetings with the business have turned out well and helped to improve communication, understanding of everyone’s work, and consensus about the priorities. As one of the team members pointed out, “stakeholders now have to agree to work beforehand and any ad-hoc work inserted into the iteration visibly impacts on our initial commitment”.

Scrum isn’t always a walk in the park

The beauty of Scrum is that it’s lightweight and, in principle, simple to understand. That doesn’t mean, however, that running projects with Scrum is easy. Below I outline what I currently see as our key challenges.

Launch deadlines vs. sprint schedule

Some of our web projects have strict launch dates as they are tied to specific events, such as the launch of a new device. More often than not, these fall somewhere within the sprint, so we can’t simply finish them at the end of it. One way of addressing this would be to schedule them in the preceding sprint; however, we are rarely able to gather all the necessary information beforehand. For now, we just accept that some items in our sprint have a higher priority and need to be released before the sprint ends.

Sprint planning and estimates

More often than not, by the end of the sprint we realise that we won’t complete all scheduled backlog items. We try not to make a big drama out of it – apparently it happens roughly half of the time even for mature Scrum teams – but at times, it deflates the sense of accomplishment. We can definitely improve how we break down our backlog items, and our estimates are still sometimes based on wishful thinking. There is also a strong temptation to plan more into a sprint than our velocity predicts – we try to resist it, with mixed results.

Getting stakeholders on board

Introducing stakeholders to our planning and review process was seen by the whole team as a huge improvement, yet working with clients – internal or external – is not always a piece of cake. We sometimes struggle to get good briefs and all the necessary information before the sprint starts, and every now and then we get a cheeky request to fit something new in. It’s a long process to get everyone to follow the rules, and we are only at the beginning!

Bugs and business as usual

This seems to be quite a common problem for many Scrum teams – how should we deal with bugs and small requests (like copy changes) that come in during the sprint? We considered postponing them until the next planning meeting, where we would determine which ones to add to the sprint. Eventually, we decided we would spend more time discussing and planning these things than it actually takes to resolve them. Therefore, we assume that a small percentage of the team’s velocity (about 10%) will be spent dealing with them. That’s not quite in line with Scrum principles, and we don’t yet have a good way to measure the actual impact of such work.

Happy Scrum, everyone!

Scrum doesn’t work everywhere and for everyone, but our team is happy with the direction we’ve taken, which often comes up in our retrospectives. Empowerment, openness, collaboration, and better structure have been mentioned by several team members. It’s really remarkable how easily everyone adjusted to the new process, and the team’s commitment to making the whole thing work has been invaluable.

We are only in our seventh sprint now and still on a steep learning curve, making adjustments as needed. Like anything in this world, Scrum is not perfect, but maybe the greatest thing about it is that it keeps continuous adaptation at its core, so within each cycle we can look back and change what we don’t like.

Want to learn how to be a Scrummaster or Product Owner yourself?

Before we started our transition to Scrum I completed the Scrum Master and Product Owner training classes with Martine Davos at Skills Matter in London, UK, which I highly recommend for any Scrum newbies.

Read more
Rae Shambrook

We previously posted about the clock app’s new look, and today we are getting to know one of the developers behind the clock (as well as other community apps). Bartosz Kosiorek gives us a glimpse into developing for Ubuntu and how he got started.

1) First, can you give us a bit of background about yourself and tell us how you started developing for Ubuntu?

My name is Bartosz and I’m from Poland. Currently I’m a developer for the Ubuntu Clock and Ubuntu Calculator. I started contributing to Ubuntu in 2008, by submitting bug reports to Launchpad and fixing translations. Later I participated in the One Hundred Papercuts project. I did SRU verifications and eventually started developing.

My adventure with Ubuntu started with Ubuntu 8.10 (Intrepid Ibex). Previously I tried many different distributions (Debian, Fedora, SuSE etc.). I chose Ubuntu because it is easy to use and, after installation, I had a fully functional system. I like that after Ubuntu is installed there are no duplicate applications, and those already installed work perfectly with the system.

2) How long have you been working on the Clock and Calculator? How did you get involved in these projects?

I started to develop for Ubuntu about two years ago when I first heard about Ubuntu Touch and convergence. I started by contributing to Ubuntu Core Apps through testing, submitting bug reports and patches. Most of my commits were bug fixes for Ubuntu Calculator and Ubuntu Clock, which were approved by Riccardo Padovani, Mihir Soni and Nekhelesh Ramananthan. After some time, I became a member of Ubuntu Core Apps. It’s great fun to work with these guys and the Ubuntu community. I’ve learned a lot about Qt/QML and user experience design.

3) How do you approach implementing a design in your apps?

Generally I follow the design document during implementation, and sometimes find parts that need to be improved. After speaking with the Ubuntu UX team, we discuss the various issues and agree on a final design solution. Sometimes the UX team gives us a free hand, so we can design some parts ourselves (e.g. Stopwatch, Welcome Wizard, Landscape View). It’s really fun to work with such awesome guys.

4) What feature would you like to see in the future?

I think that from a user’s perspective, longer battery life is a very important topic. Power usage is higher with a white background (https://www.quora.com/Does-a-white-background-use-more-energy-on-a-LCD-than-if-it-was-set-to-black), especially on OLED screens. I wish Ubuntu Touch came with a darker theme to save battery on OLED screens.

Read more
Joseph Williams

Embeddable cards for Juju

Juju is a cloud orchestration tool with a lot of unique terminology. This is not so much of a problem when discussing or explaining terms or features within the site or the GUI, but, when it comes to external sources, the context is sometimes lost and everything can start to get a little confusing.

So a project was started to create embeddable widgets of information to not only give context to blog posts mentioning features of Juju, but also to help user adoption by providing direct access to the information on jujucharms.com.

This project was started by Anthony Dillon, one of the developers, to create embeddable information cards for three topics in particular: charms, bundles and user profiles. These cards would function similarly to embedding a YouTube video or a song from Soundcloud on your own site, as seen below:

 

 

Multiple breakpoints were established for the cards (small: 300px and below; medium: 301px to 625px; large: 626px and up) so that they would work responsively – and therefore in a breadth of different situations – and complement the user’s content referring to a charm, bundle or user profile without any additional effort on their part.

We started the process by determining what information we would want to include within the card and then refining that information as we went through the different breakpoints. Here are some of the initial ideas that we put together:

charm  bundle  profile

We wrote down all the information that could relate to each type of card, then discussed how that might carry down to the smaller card sizes, removing unnecessary information as we went. For the profile cards, we felt there was not enough information to display a card above the 625px break point, so we limited them to the medium size.

Just enter the bundle or charm name and the card will be generated for you, along with a code snippet to copy and embed into your own content.

embed card thing

You can create your own here: http://www.jujugui.org/community/cards

Below are some examples of the responsive cards at different widths:

 

Read more
Alejandra Obregon

Designing cloud tools in Cape Town

Our cloud design team has just returned from a trip to Cape Town for our mid-cycle sprint. We have the luxury of being co-located in London and working closely throughout the whole year, but these sprints give us an invaluable opportunity to collaborate with other stakeholders and engineers working around the world. We get together to define priorities and review progress on our goals for this cycle.

We can showcase upcoming features and designs and get feedback from a variety of sources.

This time we also heard from cloud architects who are building data centres with the products we design. It gave us an insight into how our products are being received in the field – what is working well in reality, what we need to do more of.

The week mainly involves lots of talking, drawing, presenting and discussing topics and we work to a tight schedule.

But we also get a chance to socialise and get to know people we work with in different time zones, which I think is just as valuable as the working bit.

At the end of the week we plan in detail for the next few weeks based on the outcomes of our conversations.

It’s an exciting time to be at Canonical. A new version of our provisioning product MAAS has just been released and it’s a huge evolution and something we’re very proud of.

The new Juju GUI just went live today! It has a completely revamped interface to make it easier to build and manage your cloud applications and services.

Our Autopilot tool for building OpenStack clouds will have a really nice set of new options in our next release.

Below are some photos of our week in Cape Town. I’m looking forward to our next Ubuntu release in April and the evolution of the cloud tools that come with it.

If you’d like to join the fun, get in touch, we’re hiring!


Read more
James Mulholland

We sat down with Dekko star Dan Chapman to get an insight into how he got involved with Ubuntu and his excitement for the release of the Pocket Desktop.


Dan has been an active member of the Community since 2013, working closely with our Design Team to create one of our first convergent showcase apps: Dekko. He also helps the Ubuntu QA community with package testing and automated tests for the ubuntu-autopilot-tests project.

The Dekko app is an email client that we are currently developing to work across all devices: mobile, tablet and desktop. If you are interested in contributing to Dekko – whether that’s writing code, testing, documentation, translations, or some super cool ideas you would just like to discuss – then please do get in touch; all contributions are welcome!

Dan can be reached by email at dpniel@ubuntu.com, in #dekko on irc.freenode.net, or via his wiki page: https://wiki.ubuntu.com/dpniel

Early Dekko exploration

Dekko Phone Retro 1

What inspired you to contribute?

I first got involved with the Community in 2013, when Nicholas Skaggs introduced me to the Quality Team to write test cases for automated testing of the Platform. I can’t remember exactly why I started, but I saw it as an opportunity to improve it. Ever since then it’s been well worth it.

What is it about open source that you like?

I like the fact that in the Community everyone has a common goal to build something great.

How does it fit into your lifestyle?

I study from home at the moment so I have to divide my time between my family, Ubuntu and my studies.

What I do for Ubuntu and my course are quite closely tied. The stuff I do for Ubuntu is giving me real life practical skills that I can relate to my course, which is mainly theory based.

Have you made your work with the Ubuntu Community an integral part of your studies as well?

I’m actually doing a project at the moment that relates to my work on Dekko: interacting with an Exchange server and implementing a client-side library. Hopefully when that’s done I can bring it into Dekko at a later date. I try to keep my interests parallel.

How much time does it take you to develop an app?

Quite a large proportion of my time goes towards Ubuntu.

How is it working remotely?

I find it more than effective. I mean it would be great to meet people face-to-face too.

Dekko development

Dekko Phone Retro 2

What are you most excited about?

Being able to have a full-blown computer in my pocket. As soon as it’s available I’m having the pocket desktop.

Do you use your Ubuntu phone as your main device?

I do, yes. The rest of the family do too. I even got my eldest boy, who’s 9, to use it, as well as my partner and mother-in-law.

How is it working with the Ubuntu Design Team?

It’s been great actually, because I’m useless at design. There’s always something to improve on, so even if the designs aren’t ready there’s still enough to work on. There haven’t been big waits in between, or waiting on you guys when you’re busy. The support is there if you need it.

Have you faced any challenges when working on an app for many form factors (phone, tablet, desktop etc)?

The only challenge is getting the design before the toolkit components are ready. It was a case of creating custom stuff and trying not to cause myself too much pain when I have to switch. The rest has been plain sailing, as the toolkit is a breeze to use and the Design team keep me informed of any changes.

What’s the vibe like in the Community at the moment?

I speak to a fair few of them now through Telegram; that seems to be the place to talk now there’s an app for it. It’s nice that you can ping your question to anyone and get a response relatively quickly. Alan Pope always gives you answers.

What are your thoughts on the Pocket Desktop?

It’s exciting as it’s something different. I don’t think there’s competition, as we all have different target audiences we are reaching. I’m really excited about where the Platform is heading.

The future of convergent Dekko

Dekko Future

Read more
Steph Wilson

We believe that first impressions matter, especially when it comes to introducing a new product to a user for the first time. Our aim is to delight users from the moment they open the box, through to the setup wizard that will help them get started with their new phone.

Devices have become an essential part of our everyday lives. We carefully choose the ones we want to adopt, taking into account all manner of factors that influence our lifestyle and how we conduct our everyday tasks. So when someone buys a totally new product – with unfamiliar software, or from a new brand – you want to make the first impression count, to seduce and reassure the user that this product is for them.

The out of the box experience (OOBE) is one of the most important aspects of software usability. It essentially determines how easy your software feels to use, and it introduces the user to your brand through visual design and tone of voice, which can convey familiarity and trust in your product.

How did we do it?

We started by looking at research on users’ past experiences of setting up a new device and their feelings about the whole process. We also took a look at what our competitors were doing, taking into account current patterns and trends in the market.

From this research we started to simplify the OOBE workflow as much as possible. Taking into consideration the good and the bad, we started to define our design goals:

  • Design for seduction
  • Simplicity
  • Introduce the brand through design
  • Transform the setup wizard

What did we change?

First of all we started from the smallest screen, taking the existing screens we have for mobile and assessing the design faults and bugs.

In order to create a consistent experience across all devices, we drew together common first experiences found on the mobile, tablet and desktop:

  • Choosing a language
  • Wifi setup
  • Choosing a Time Zone
  • Choosing a lock screen option

One of the major changes we wanted to achieve was to give the user the same experience across all devices, moving us closer to achieving a seamless convergent platform.

What did we achieve?

  • We achieved our main aim in creating the same visual experience across all devices.

Convergence

 

  • We defined two types of screens: Primary screen (left), Secondary screen (right)

Image 1

The secondary screens created more space for forms, which helped us to define a consistent and intuitive animation between screens.

 

  • All the dialogs were transformed where possible into full screens. We kept dialogs only to communicate confirmation or error messages to the user.

Image 2

 

  • The desktop installer was simplified and modernized.

desktop 2

The implementation of the OOBE has already begun and we cannot wait for you to open the box and experience it on your new Ubuntu device.

UX Designer: Andreea Pirvu

Visual Designer: Grazina Borosko

Read more
Robin Winslow

Prepare for when Ubuntu freezes

I routinely have at least 20 tabs open in Chrome, 10 files open in Atom (my editor of choice) and I’m often running virtual machines as well. This means my poor little X1 Carbon often runs out of memory, at which point Ubuntu completely freezes up, preventing me from doing anything at all.

Just a few days ago I had written a long post which I lost completely when my system froze, because Atom doesn’t yet recover documents after crashes.

If this sounds at all familiar to you, I now have a solution! (Although it didn’t save me in this case because it needs to be enabled first – see below.)

oom_kill

The magic SysRq key can run a bunch of kernel-level commands. One of these commands is called oom_kill. OOM stands for “Out of memory”, so oom_kill will kill the process taking up the most memory, to free some up. In most cases this should unfreeze Ubuntu.
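As an aside, a hedged sketch: if you want to see what oom_kill does while you still have a working terminal, root can trigger SysRq commands directly through procfs, independent of the keyboard shortcut. Be warned that it really will kill your largest process:

```shell
# Writing a command letter to /proc/sysrq-trigger invokes that SysRq
# function directly. 'f' runs oom_kill.
echo f | sudo tee /proc/sysrq-trigger

# The kernel log shows which process was chosen and killed:
dmesg | tail
```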

You can run oom_kill from the keyboard with the shortcut alt + SysRq + f – except that SysRq functions are disabled by default on Ubuntu.

Enabling SysRq functions

For security reasons, SysRq keyboard functions are disabled by default. To enable them, change the value in the file /etc/sysctl.d/10-magic-sysrq.conf to 1, and then apply the new config.
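A sketch of what that looks like, assuming the stock config file layout (the exact reload command can vary between releases; `sysctl --system` re-reads all config files):

```shell
# Edit the config file so that it contains:
#
#   kernel.sysrq = 1
#
# then reload all sysctl configuration without rebooting:
sudo sysctl --system

# or apply it immediately for the current boot only:
sudo sysctl -w kernel.sysrq=1

# verify the current value:
sysctl kernel.sysrq
```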

SysRq shortcut for the Thinkpad X1

Most laptops don’t have a physical SysRq key. Instead they offer a keyboard combination to emulate it. On my Thinkpad, this is fn + s. However, there’s a quirk: the SysRq key is only “pressed” when you release it.

So to run oom_kill on a Thinkpad, after enabling it, do the following:

  • Press and hold alt
  • To emulate SysRq, press fn and s keys together, then release them (keep holding alt)
  • Press f

This will kill the most expensive process (usually the browser tab running inbox.google.com, in my case) and free up some memory.

Now, if your computer ever freezes up, you can just do this, and hopefully fix it.

(Also posted on robinwinslow.uk)

Read more
Jamie Young

dConstruct 2015


We had a great time at dConstruct in Brighton last Friday. Ubuntu was the premier sponsor of the event, so 15 of us headed down from London in the small hours of the morning to set up our stand and enjoy a day out.

Origami competition


Our stand became a landscape of wolves and unicorns as attendees of the conference took up our challenge to fold one of these animals (with instructions) to win themselves a brand new BQ Aquaris E5 Ubuntu Phone.

The talks


The theme of the talks was ‘Designing the Future’ so we had robots, The Jetsons, various superheroes and The Terminator all making an appearance during the day.

Josh Clark demonstrated real magic on stage. Matt Novak revealed the secret behind robot vacuum cleaners of the 1950s (they were radio-controlled). Nick Foster brought us all back down to earth with a great talk on designing for the mundane and finally Dan Hill encouraged us to look more closely at the planning regulation notices pinned to lampposts.

If you want to hear all the talks from the day you can find them here.

A well deserved lunch!


After a morning of mental sustenance we needed some real food in our stomachs, so we all set off for a tasty lunch at the Chilli Pickle, just around the corner from the venue. Feeling energised by the spicy food, it was back for round two of talks…

Positive feedback


From the moment the doors opened we had people coming up to the stand to speak to us about Ubuntu. We had a really positive response from attendees to the stand, where we were showing off demos of both Ubuntu Phones. People were really happy to see us, which was nice! From our end, it was invaluable meeting you all and hearing all the interesting questions you had for us. We hope we managed to answer them all!

A sunny day out

A beautiful sunny day, without a drop of rain, made for a nice day out of the office: that alone is usually enough to fill anyone with energy and inspiration.

The future is sponsored by Ubuntu

We are inspired to support entrepreneurs and inventors focused on life-changing projects. We’re building open source tools for the next generation of devices, things, desktops and clouds.

Take a look at www.ubuntu.com

Join us

Passionate about good design and creating delightful experiences? We’re looking for people who love to learn and share their knowledge and ideas.

See all the design jobs on canonical.com

Read more
Inayaili de León Persson

We’re going to be at dConstruct!

Ubuntu is once again sponsoring the excellent dConstruct conference, taking place this Friday, 11th September in Brighton.

This year’s theme is “Designing the future” and we can’t wait to hear what the stellar lineup of speakers has to share.

As ever, if you’re going to be there, come and say hi, and grab a few Ubuntu goodies while you’re at it.

In the meantime, why not listen to the dConstruct podcast, where Jeremy Keith talks to the speakers before the event?

See you on Friday!

Ubuntu at dConstruct 2014

Read more
Inayaili de León Persson

August’s reading list

The design team members are constantly sharing interesting, fun and weird links with each other, so we thought it might be a nice idea to share a selection of those links with everyone.

Here are the links that were passed around during the last month:

Thanks to Robin, Luca, Elvira, Anthony, Jamie, Joe and me, for the links this month!

Read more
Robin Winslow


I recently tried to set up OpenID for one of our sites to support authentication with login.ubuntu.com, and it took me much longer than I’d anticipated because our site is behind a reverse proxy.

My problem

I was trying to set up OpenID with the django-openid-auth plugin. Normally our sites don’t include absolute links (https://example.com/hello-world) back to themselves, because relative URLs (/hello-world) work perfectly well, so normally Django doesn’t need to know the domain name it’s hosted at.

However, when authenticating with OpenID, our website needs to send the user off to login.ubuntu.com with a callback URL so that once they’re successfully authenticated they can be directed back to our site. This means that django-openid-auth needs to ask Django for an absolute URL to send off to the authenticator (e.g. https://example.com/openid/complete).

The problem with proxies

In our setup, the Django app is served with a light Gunicorn server behind an Apache front-end which handles HTTPS negotiation:

User <-> Apache <-> Gunicorn (Django)

(There’s actually an additional HAProxy load-balancer in between, which I thought was complicating matters, but it turns out HAProxy was just passing through requests absolutely untouched and so was irrelevant to the problem.)

Apache was set up as a reverse proxy to Django, meaning that the user only ever talks to Apache, and Apache goes off to get the response from Django itself via Django’s local network IP address – e.g. 10.0.0.3.

It turns out this is the problem. Because Apache, and not the user directly, is making the request to Django, Django sees the request come in at http://10.0.0.3/openid/login rather than https://example.com/openid/login. This meant that django-openid-auth was generating and sending the wrong callback URL of http://10.0.0.3/openid/complete to login.ubuntu.com.

How Django generates absolute URLs

django-openid-auth uses HttpRequest.build_absolute_uri which in turn uses HttpRequest.get_host to retrieve the domain. get_host then normally uses the HTTP_HOST header to generate the URL, or if it doesn’t exist, it uses the request URL (e.g.: http://10.0.0.3/openid/login).

However, after inspecting the code for get_host I discovered that if and only if settings.USE_X_FORWARDED_HOST is True then Django will look for the X-Forwarded-Host header first to generate this URL. This is the key to the solution.
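To make the behaviour concrete, here is a simplified sketch of the logic get_host applies (this is an illustration of the decision order described above, not Django’s actual implementation):

```python
# Sketch of how Django's HttpRequest.get_host() picks the domain.
# Simplified for illustration; the real code lives in django/http/request.py.
def get_host(meta, use_x_forwarded_host=False):
    """Return the host for building absolute URLs.

    `meta` mimics request.META: headers arrive as HTTP_* keys.
    """
    if use_x_forwarded_host and "HTTP_X_FORWARDED_HOST" in meta:
        # Only consulted when settings.USE_X_FORWARDED_HOST is True
        return meta["HTTP_X_FORWARDED_HOST"]
    if "HTTP_HOST" in meta:
        # The Host header of the request Django actually received --
        # behind a proxy, that's the backend address
        return meta["HTTP_HOST"]
    # Fall back to the host from the request URL itself
    return meta["SERVER_NAME"]

# Behind our proxy, the request Django sees carries the backend IP:
meta = {"HTTP_HOST": "10.0.0.3", "HTTP_X_FORWARDED_HOST": "example.com"}

print(get_host(meta))         # 10.0.0.3   -> wrong callback URL
print(get_host(meta, True))   # example.com -> correct callback URL
```

This is why, without the setting, django-openid-auth built its callback URL from the backend address.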

Solving the problem – Apache

In our Apache config, we were initially using mod_rewrite to forward requests to Django.

RewriteEngine On
RewriteRule ^/?(.*)$ http://10.0.0.3/$1 [P,L]

However, when proxying with this method Apache doesn’t send the X-Forwarded-Host header that we need. So we changed it to use mod_proxy:

ProxyPass / http://10.0.0.3/
ProxyPassReverse / http://10.0.0.3/

This then means that Apache will send three headers to Django: X-Forwarded-For, X-Forwarded-Host and X-Forwarded-Server, which will contain the information for the original request.

In our case the Apache front-end used HTTPS, whereas Django was serving plain HTTP, so we had to pass that information through as well by manually setting Apache to send an X-Forwarded-Proto header to Django. Our eventual config changes looked like this:

<VirtualHost *:443>
    ...
    RequestHeader set X-Forwarded-Proto 'https' env=HTTPS

    ProxyPass / http://10.0.0.3/
    ProxyPassReverse / http://10.0.0.3/
    ...
</VirtualHost>

This meant that Apache now passes through all the information Django needs to properly build absolute URLs; we just needed to make Django parse it properly.

Solving the problem – Django

By default, Django ignores all X-Forwarded headers. As mentioned earlier, you can set get_host to read the X-Forwarded-Host header by setting USE_X_FORWARDED_HOST = True, but we also needed one more setting to get HTTPS to work. These are the settings we added to our Django settings.py:

# Setup support for proxy headers
USE_X_FORWARDED_HOST = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')

After changing all these settings, we now have Apache passing all the relevant information (X-Forwarded-Host, X-Forwarded-Proto), so Django is able to successfully generate absolute URLs and django-openid-auth works a charm.

Read more
Robin Winslow

We recently introduced Vanilla framework, a lightweight styling framework which is intended to replace the old Guidelines framework as the basis for our Ubuntu and Canonical branded sites and others.

One of the reasons we created Vanilla was because we ran into significant problems trying to use Guidelines across multiple different sites because of the way it was made. In this article I’m going to explain how we structured Vanilla to hopefully overcome these problems.

You may wish to skip the rationale and go straight to “Overall structure” or “How to use the framework”.

Who’s it for?

We in Canonical’s design team will definitely be using Vanilla, and we also hope that other teams within Canonical can start to use it (as they did with Guidelines before it).

But most importantly, it would be fantastic if Vanilla offers a solid enough styling basis that members of the wider community feel comfortable using it as well. Guidelines was never really safe for the community at large to use with confidence.

This is why we’ve made an effort to structure Vanilla in such a way that any or all of it can be used with confidence by anyone.

Limitations of Guidelines

Guidelines was initially intended to solve exactly one problem – to be a single resource containing all the styling for ubuntu.com. This would mean that we could update Guidelines whenever we needed to update ubuntu.com’s styling, and those changes would propagate across all our other Ubuntu-branded sites (e.g.: cn.ubuntu.com or developer.ubuntu.com).

So we simply structured the markup of these sites in the same way, and then created a single hosted CSS file, and linked to it from all the sites that needed Ubuntu styling.

As time went on, two large problems with this solution emerged:

  • As over 10 sites were linking to the same CSS file, updating that file became very cumbersome, as we’d have to test the changes on every site first.
  • As the different sites became more individual over time, we found we were having to override the base stylesheet more and more, leading to overly complex and confusing local styling.

This second problem was only exacerbated when we started using Guidelines as the basis for Canonical-branded sites (e.g.: canonical.com) as well, which had a significantly different look.

Architecture goals for Vanilla

Learning from our experiences with Guidelines, we planned to solve a few specific problems with Vanilla:

  • Website projects could include only the CSS code they actually needed, so they wouldn’t have to override lots of unnecessary CSS.
  • We could release new changes to the framework without worrying about breaking existing sites, allowing us to iterate quickly.
  • Other projects could still easily copy the styles we use on our sites with minimal work.

To solve these problems, we decided on the following goals:

  • Create a basic framework (Vanilla) which only contains the common elements shared across all our sites.

    • This framework should be written in a modular way, so it’s easy to include only the parts you need.
  • Extend the basic framework in “theme” projects (e.g. ubuntu-vanilla-theme) which will apply specific styling (colours etc.) for that specific brand.

    • These themes should also only contain code which needs to be shared. Site-specific styling should be kept local to the project.
  • Still provide hosted compiled CSS for sites to hotlink to if they like, but force them to link to a specific version (e.g. vanilla-framework-version-0.0.15.css) rather than “latest” so that we can release a new version without worry.

Sass modularisation

This modular structure would be impossible in pure CSS. CSS itself offers no mechanism for encapsulation. Fortunately, our team has been using Sass to write our CSS for a while now, and Sass offers some important mechanisms that help us modularise our code. So what we decided to create is actually a Sass mixin library (like Bourbon for example) using the following mechanisms:

Default variables

Setting global variables is essential for the framework, so we can keep consistent settings (e.g. font colours, padding etc.). Variables can also be declared with the !default flag. This allows the framework’s settings to be overridden when extending the framework:
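As an illustration, the pattern looks like this (the variable name and import path are hypothetical, not Vanilla’s real settings):

```scss
// Inside a theme: set the variable *before* importing the framework.
$font-color: #111;

// Inside the framework: declared with !default, so this assignment
// only takes effect if $font-color wasn't already set above.
$font-color: #333 !default;

.intro {
  color: $font-color;  // compiles to #111 when themed, #333 otherwise
}
```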

We’ve used this pattern in each of the Vanilla themes we’ve created.

Separating concerns into separate files

Sass’s @import feature allows us to encapsulate our code into files. This not only keeps our code tidier, but it means that anyone hoping to include some parts of our framework can choose which files they want:
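For example (the module file names here are illustrative):

```scss
// A consumer imports only the partials they need...
@import "buttons";  // resolves to _buttons.scss
@import "grid";     // resolves to _grid.scss
// ...and simply skips the modules they don't use.
```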

Keeping everything in a mixin

When a Sass file is imported, any loose CSS in it is compiled directly into the output. But anything declared inside a @mixin will not be output unless you call the mixin.

Therefore, we set a goal of ensuring that all parts of our library can be imported without any CSS being output, so that you can import the whole module but just choose what you want output into your compiled CSS:
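A minimal sketch of the pattern (the selector and padding values are illustrative):

```scss
// Module file: importing it outputs no CSS at all,
// because everything lives inside a mixin.
@mixin vf-buttons {
  .vf-button {
    padding: 8px 14px;
  }
}

// A site that wants buttons then opts in explicitly:
@include vf-buttons;
```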

Namespacing

To avoid conflicts with any local sass setup, we decided to namespace all our mixins with the vf- prefix – e.g. vf-grid or vf-header.

Overall structure

Using the aforementioned techniques, we created one base framework, Vanilla Framework, which contains (at the time of writing) 19 separate “modules” (vf-buttons, vf-grid etc.). You can see the latest release of the framework on the project’s homepage, and see the framework in action on the demo page.

The framework can be customised by overriding any of the global settings inside your local Sass, as described above.

We then extended this basic framework with three branded themes which we will use across our sites.

You can of course create your own themes by extending the framework in the same way.

NPM modules

To make it easy to include Vanilla Framework in our projects, we needed to pick a package manager to use for installing it and tracking versions. We experimented with Bower, but in the end we decided to use the Node package manager. So now anyone can install and use the framework and its themes as npm packages.

Hotlinks for compiled CSS

Although for in-depth usage of our framework we recommend that you install and extend it locally, we also provide hosted compiled CSS files, both minified and unminified, for the Vanilla framework itself and all Vanilla themes, which you can hotlink to if you like.

To find the links to the latest compiled CSS files, please visit the project homepage.

How to use the framework

The simplest way to use the framework is to hotlink to it. To do this, simply link to the latest version (minified or unminified) directly in your HTML:
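For example (the URL here is a placeholder; find the real hosted CSS links on the project homepage):

```html
<!-- Link to a specific released version so later framework
     releases can't break your site. -->
<link rel="stylesheet"
      href="https://example.com/vanilla-framework-version-0.0.15.css" />
```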

However, if you want to take full advantage of the framework’s modular nature, you’ll probably want to install it directly in your project.

To do this, add the latest version of vanilla-framework to your project’s package.json as follows:
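Something along these lines (the version number shown is only an example; pin whichever release is current):

```json
{
  "dependencies": {
    "vanilla-framework": "0.0.15"
  }
}
```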

Then, after you’ve run npm install, include the framework from the node_modules folder:
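For instance, from your local Sass (the path inside the package is illustrative; check the installed package for its real entry point):

```scss
// Override any !default settings first, then pull in the framework.
@import "../node_modules/vanilla-framework/scss/build";
```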

The future

We will continue to develop Vanilla Framework, with version 0.1.0 just around the corner. You can track our progress over on the project homepage and on GitHub.

In the near future we’ll switch over ubuntu.com and canonical.com to using it, and when we do we’ll definitely blog about it.

Read more