Canonical Voices

Posts tagged with 'design'

Benjamin Keyser

In the converged world of Unity 8, applications will work on small mobile screens, tablets and desktop monitors (with a mouse and keyboard attached) as if by magic. To achieve this transformation for your own app with little to no extra UI work, simply design using grid units for a few predetermined virtual screen targets. Combined with Ubuntu's off-the-shelf UI components, built with convergence in mind, most of the hard work is done for you, freeing developers and designers to focus on what's most important to their users.

What’s a grid unit? And why 40, 50, or 90 of them?

A grid unit (GU) is a virtual measure of screen space that's independent of device hardware details like pixels or aspect ratio; those complexities are mapped under the covers by Ubuntu. Instead of thinking in pixels, you target just three 'fixed' virtual GU portrait widths – 40, 50, and 90 GU – and you're guaranteed to address the largest number of devices, including the desktop, with a high degree of design quality and consistency, where relative spacing and content sizing just work.

The 40, 50, and 90 GU dimensions correspond to smaller smartphones, larger smartphones/phablets, and tablets respectively in portrait mode. These particular panel-widths weren’t chosen arbitrarily: they were selected by analyzing the most popular device specs on the market and picking the portrait dimensions that would embrace the largest number of possibilities most successfully, including for the desktop (more on that later).

For example, compact phones such as the BQ Aquarius E4.5 are best suited to the 40 GU-wide virtual portrait screen, offering the right balance of content to screen real estate for palm-sized viewing. For larger phones with more screen space such as the Meizu MX4, the 50 GU layout is most fitting, allowing more room for content. Finally, for edge-to-edge tablet portrait layouts for the N7 or N10, the 90 GU layout works best.

Try this exercise

Having trouble envisioning the system in action? Close your eyes and imagine a sheet of graph paper divided into squares that can adapt according to just three simple rules:

  • It can only be 40, 50, or 90 whole units along the short edge, but the long edge can vary
  • The long edge (in landscape mode or on the desktop) will be the whole number of GUs that carves out the maximum-area rectangle that fits within the device's physical screen, based on the physical size of a GU (in pixels) as determined in portrait mode
  • The last rule is simple but key: the squares of the graph paper must always be square. To push the image a bit too far, the graph paper is made of something more like graphene than polypropylene: no squeezed or stretched GUs allowed.

Try it for yourself here: https://dl.dropboxusercontent.com/u/360991/canonical/grid-units/grid-units.html

There is one additional factor that can impact the final available screen area, though it's a bit of a technical subtlety: the under-the-covers mapping of pixels to grid units can't include fractional pixels (an obvious point, admittedly). At the end of the day, the user sees the largest version of the 40, 50, or 90 GU wide virtual screen that's possible on the given device. That means all you have to do as a designer or developer is plan for the virtual dimensions we've been talking about, and you're assured your user is getting the best possible rendering.
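To make that mapping concrete, here's a minimal sketch in Go of the arithmetic involved. The function name is invented for illustration; the real mapping happens inside the Ubuntu toolkit:

package main

import "fmt"

// pixelsPerGU returns the largest whole number of pixels per grid unit
// such that guWidth grid units still fit across the device's portrait
// width in pixels. Hypothetical helper, for illustration only.
func pixelsPerGU(portraitWidthPx, guWidth int) int {
        return portraitWidthPx / guWidth // integer division: no fractional pixels
}

func main() {
        // Example: a 720px-wide phone targeting the 50 GU virtual screen.
        px := pixelsPerGU(720, 50)
        fmt.Printf("%d px per GU; %d of 720 px used\n", px, px*50) // 14 px per GU; 700 of 720 px used
}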

Though the system may seem abstract at first, its benefits are easy to appreciate from a developer or designer standpoint: it's far more predictable and simpler to design for layouts that follow rules than to account for a universe of idiosyncratic device possibilities. In addition, with these layouts as the foundation, the convergence goal is much more easily achieved.

What about landscape & desktop? Use building blocks

By assembling these key portrait views together, it's far easier than ever to achieve landscape and desktop layouts. For example, if your app lends itself to a two-panel layout, simply join together the 40 and 50 GU phone layouts you've already designed to achieve a landscape layout (or even a portrait tablet layout!).

Similarly, switching from portrait to landscape mode on a tablet – also a desktop-friendly layout – could be as simple as joining a 40 GU layout and a 90 GU layout for a total of 130 GU, which fits nicely within both 16:9 and 16:10 tablet landscape screens as well as on any desktop monitor.

Since landscape and desktop layouts are the least predictable, due to device variations and manual window resizing by users, you can designate one of your panel layouts as flexible in width, filling the available space using one of these strategies:

  • Center the layout in the available space
  • Stretch or squeeze the layout to fit the available space
  • Combine these two, depending on the individual components within the layout

More complex layouts can be achieved by joining three or more portrait layouts. For example, three 40 GU layouts joined side by side happen to fit perfectly into a 4:3 landscape tablet screen.

Columns, too

To help developers even further with one of the most common layouts—columnar or grid types—we’re adding a capability that maintains column-to-content size relationships across devices and the desktop the same way that type sizes are specified. This makes it very simple to achieve the proper content readability and density regardless of the device. For example, by specifying a “medium” sized column filled with “small” type, these relative relationships can be preserved throughout the converged-device experience without having to manually dig into pixel measurements.

The column capability can also adapt responsively to extra wide, variable landscape layouts, such as 16:10 aspect ratio tablets or manually stretched desktop layouts. This means that as more space becomes available as a user stretches the corners of the app window on the desktop, additional columns can be added on cue, providing more room for content.
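The column API itself isn't spelled out in this post, but the responsive behaviour described above reduces to simple arithmetic. A hedged sketch in Go, with invented names:

// columnsFor returns how many columns of the given width fit in the
// available width (both in grid units), with a minimum of one column.
// Invented helper, for illustration only; the real toolkit works in
// terms of named column and type sizes.
func columnsFor(availableGU, columnGU int) int {
        if n := availableGU / columnGU; n > 1 {
                return n
        }
        return 1
}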

Putting it all together across all form factors

By making screen dimensions virtual, we can minimize the vagaries of individual hardware specs that can frustrate device-convergent thinking, and help developers focus more on their users' needs. A combination of snap-together layouts, automated column layouts, and adaptive UI toolkit components like the header, list component, and bottom edge component helps ensure users will experience a consistent, elegant journey from mobile to desktop and back again.


Read more
Daniel Holbach

Daniel McGuire is unstoppable. The work I mentioned yesterday was great, here’s some more, showing what would happen when the user selects “Playing Music”.

help app - playing music


More feedback we've received so far:

  • Kevin Feyder suggested using a different icon for the app.
  • Michał Prędotka asked if we were planning to add more icons/pictures and the answer is “yes, we’d love to if it doesn’t clutter up the interface too much”. We are going to start a call for help with the content soon.
  • Robin of ubuntufun.de asked the same thing as Michał and wondered where the translations were. We are going to look into that. He generally likes the Ubuntu-like style.

Do you have any more feedback? Anything you'd like to see look or work differently? Anything you'd like to help with?

Read more
Daniel Holbach

Some of you might have noticed the Help app in the store, which has been around for a couple of weeks now. We are trying to make it friendlier and easier to use. Maybe you can comment and share your ideas/thoughts.

Apart from fixing actual bugs and adding more and more useful content, we also want the app to look friendlier and be more intuitive and useful.

The latest trunk (lp:help-app) can be seen as version 0.3 in the store, or, if you run

bzr branch lp:help-app
less help-app/HACKING

you can check it out and run it locally.

Here’s the design Daniel McGuire suggested going forward.

help-mockup

What are your thoughts? If you look at the content we currently have, how else would you expect the app to look or work?

Thanks a lot Daniel for your work on this! :-)

Read more
Jouni Helminen

Ubuntu community devs Andrew Hayzen and Victor Thompson chat with lead designer Jouni Helminen. Andrew and Victor have been working on open source projects for a couple of years and have done a great job on the Music application that is now rolling out on phone, tablet and desktop. In this chat they share their thoughts on open source, QML, app development, and tips on how to get started contributing and developing apps.

If you want to start writing apps for Ubuntu, it's easy. Check out http://developer.ubuntu.com, get involved in the Ubuntu App Dev community on Google+ – https://plus.google.com/communities/1… – or contact alan.pope@canonical.com – you are in good hands!

Check out the video interview here :)

Read more
niemeyer

This post provides the background for a deliberate and important decision in the design of gopkg.in that people often wonder about: while the service does support full versions in tag and branch names (as in “v1.2” or “v1.2.3”), the URL must contain only the major version (as in “gopkg.in/mgo.v2”), which gets mapped to the best matching version in the repository.

As will be detailed, there are multiple reasons for that behavior. The critical one is ensuring that all packages in a build tree that depend on the same API of a given dependency (different majors mean different APIs) may use the exact same version of that dependency. Without that, an application might easily embed multiple copies unnecessarily, and perhaps incorrectly.

Consider this example:

  • Application A depends on packages B and C
  • Package B depends on D 3.0.1
  • Package C depends on D 3.0.2

Under that scenario, when someone executes go get on application A, two independent copies of D would be embedded in the binary. This happens because both B and C demand exact control of the version in use; when everybody can pick their own preferred version, it's easy to end up with multiple copies.

The current gopkg.in implementation solves that problem by requiring that both B and C necessarily depend on the major version which defines the API version they were coded against. So the scenario becomes:

  • Application A depends on packages B and C
  • Package B depends on D 3.*
  • Package C depends on D 3.*

With that approach, when someone runs go get to fetch application A, it gets the newest version of D that is still compatible with both B and C (might be 3.0.3, 3.1, etc.), and both packages use that one version. While by default this picks the most recent compatible version, the dependency may also be moved back to 3.0.2 or 3.0.1 without touching the code. So the approach in fact empowers the person assembling the binary to experiment with specific versions, and gives package authors a framework where the default behavior tends to remain sane.
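In code, that contract is carried entirely by the import path. Both B and C would contain an import along these lines (the package name d.v3 is made up for the example), and the service resolves it to the best matching version tag or branch in D's repository:

// In package B and in package C alike: depend on the v3 API of D.
// gopkg.in maps this path to the newest available 3.* version.
import "gopkg.in/d.v3"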

This is the most important reason why gopkg.in works like this, but there are others. For example, to encode the micro version of a dependency on a package, the import paths of dependent code must be patched on every single minor release of the package (internal and external to the package itself), and the code must be repositioned in the local system to please the go tool. This is rather inconvenient in practice.

It’s worth noting that the issues above describe the problem in terms of minor and patch versions, but the problem exists and is intensified when using individual source code revision numbers to refer to import paths, as it would be equivalent in this context to having a minor version on every single commit.

Finally, when you do want exact control over what builds, godep may be used as a complement to gopkg.in. That partnership offers exact reproducibility via godep, while gopkg.in gives people stable APIs they can rely upon over longer periods. A good match.

Read more
Steph Wilson

Community members at the Sprint

Victor and Andrew are two inspiring Community developers that have devoted their spare time to contribute to the Ubuntu Touch Music App team. I sat down with them during the Washington Device Sprint in October where they told us how they drew inspiration from the Design Team, and what drives them to contribute to Ubuntu.

You can read more about Victor and Andrew through their blogs, where they post interesting articles on their work and personal projects.

(From left to right: Riccardo, Andrew, Filippo and Victor)

Hey guys, so when did you first get involved with Ubuntu?

Victor: “I started to contribute to the Ubuntu platform in March/April 2013 where I noticed there was no music app, so I started putting one together. It was pretty sketchy to start with, but it worked. I didn’t have a device to test it on so I mostly tested it using the platform on my desktop – so things were a bit hit and miss.

There was also another developer doing a music app, and at the time there was no core capability of playing music through an application for the proposed devices. Michael Hall (Open Source Software Developer) and Alan Pope (Engineering Manager) pulled Daniel Holm and I together, where we merged our core bases and started the music core app.

We didn't have as much time as other applications, so we more or less sprinted like we are now to get things done. It was very spec-driven and specific, which was helpful, but sometimes it was hard to put together a full vision of what the designers wanted. So now we are redoing it from the feedback we have gathered, and it's going pretty well. A little more agile than it was previously, so as to do things faster, but it's been fun the whole time. It's nice to work on an application that people need and that gets visibility – I never get sick of hacking at it.”

Andrew: “I’m from North London, where I’m currently studying Software Engineering at Oxford Brookes University. I was working on my own music app where I just taught myself how to do things using my own framework, then I saw that these guys at Ubuntu had a similar problem to me, and so I thought I’d provide a patch. This then built up from there, and now here I am!”

Steph: “It's amazing that someone can be in their bedroom writing code and then suddenly your app is on a phone!”

Victor: “The other great thing about it is the Community Managers make it easy and apparent that you can contribute to different projects.”

Andrew: “Yeah someone just got in contact with me and asked me if I wanted to join the team and told me how open source projects work.”

What inspired you to contribute?

Victor: “A lot of my original inspiration came from what the Design Team had previously done. The previous iteration of the design spec for the music app was very large, and it wasn't as future driven – more just visually pleasing.”

Do you find it hard to implement some designs?

Victor: “We try to make it as close to the designs as we can, but obviously there are compromises. There were some very flow-driven things, such as sized cover arts, that were hard to implement, but we can implement them now. It's nice because they use the same patterns as other applications.”

Andrew: “Usually we just tell the designer that this is just not possible.”

What is it about open source that you like?

Victor: “I have been a user since 2006, but I have never been a large open source developer myself. It is hard to get involved when you don't know what you want to contribute to.”

Andrew: “Most applications are so developed already that you would have to learn the existing code base and build on it, whereas if you start anew you know everything from the get-go. Seeing your application on the device, and knowing it can be on other devices too, is pretty exciting!”

How does it fit into your lifestyles?

Victor: “I’m a software engineer as well, so I write a lot of code. I haven’t really done QML or QT until I started doing these applications with the Ubuntu platform, so it has been a learning experience. I am learning something new from experienced people.”

Have you made any other applications for Ubuntu?

Victor: “I've made a few games like Piano Tiles, and another that's kind of like a clone of that but in QML – it's a simple app but a good time waster haha.”

How much time does it take you to develop an app?

Victor: “It took me like a day. Andrew made a game last night! In 2 hours…”

Andrew: “Yeah we did! Loads of us at the sprint just got together in a room and made a few games.”

So you’re used to working remotely, does that put a barrier against things?

Andrew: “It sometimes delays things. However, you start to build this image of a person, so when you actually get to meet them you start to understand how they are and what makes them tick.”

Victor: “Depends on how personal it really needs to be. If you are collaborating together and it's mostly writing code and coming up with ideas, it doesn't necessarily need to be face-to-face. That is obviously nicer, but you also get a benefit if the other person is a night owl in a different country: sometimes your hours overlap, two different chunks of time you're working in.”

Andrew: “There’s usually someone on IRC to speak to, it’s like a 24 hour operation haha.”

What’s the vibe like in the Community at the moment?

Victor: “It's a pretty small Community at the moment, with close ties. Everyone is receptive to feedback; if it were a larger Community I don't think it would be as receptive.”

Steph: “Thanks for your time guys!”

Here’s a sneaky preview of the music app, more will be revealed soon:

Album detail

Landing page

Read more
Steph Wilson

We sat down with some of Ubuntu’s unsung Community heroes at the recent Devices Sprint in Washington D.C.

Riccardo and Filippo are two young and passionate developers who have adapted their own software to benefit the whole of the Ubuntu Community. We spoke about how and why they contribute to Ubuntu, and what motivates them to keep giving.

(Community gathering at the Sprint)

Riccardo Padovani 

Italian Community site: http://ubuntu.it/

Personal blog: http://blog.rpadovani.com/

So Riccardo, how did you get involved in Ubuntu?

I started 3 years ago with the Italian Ubuntu Community, as they were looking for someone who could manage the website. I was young and wanted to learn about computer science, so I started for myself. While I was contributing I started to understand what was behind the Ubuntu project and its philosophy, and I thought it was a great project for software. So then I started to do stuff for Ubuntu Touch, where I made new friends and at the same time improved my English and computer science skills.

How does working in the Italian Ubuntu Community fit into your lifestyle?

I’m at University, so in the evenings instead of watching television I open my notebook and do some coding. For me it’s very fun. It’s not something I do because someone is telling me to, I do it for me. I prefer writing code than watching TV haha.

What kind of things have you contributed to Ubuntu so far?

Last year I was mainly working on the Reminder app, but more recently I've started to contribute to the Web Browser. As I use Ubuntu as my main phone, I love seeing improvements in the software I use every day, and I know I can do something to improve it. People will benefit when the phone is released – even more so on the Italian Community site, for example: when there's something wrong and someone reports it to me, loads of people can see my work and I can fix it. It's awesome, as I am gaining more experience at the same time.

How did you start to contribute to the Community? How does it work?

I started to use Ubuntu in 2008, but I did nothing before 2012, when I found a project I wanted to get involved with. I think for every project and Community you need to find something you love and want to improve. Opening a new bug when something is wrong is the first step to contributing to an Ubuntu project.

First you find out how the Community works and then you begin to know who you can speak to, which then graduates into a natural evolution.

Does your Community regularly meet-up?

It depends on the team, as some teams are split and do different things. Every 6 months we have a meeting where we can have a beer and socialise. The rest of the year we try to do public hangouts, and then private hangouts on what we’re doing in the next month or so.

Do you find these sprints helpful?

I think during this sprint it takes more energy to write code, because I'm busier talking to people and learning new things. But I can meet the people who have taught me something and say thank you in person, which is nice.

Filippo Scognamiglio

Personal blog: http://swordfishslabs.wordpress.com/

Hey Filippo, so tell me how did you get involved with Ubuntu?

I started with some gaming applications where I first made MineSweeper. MineSweeper is not in the Ubuntu Store at the moment due to some technicalities and design issues, but it’s all working and should be implemented soon. I also made another game called Ubuntu Netwalk where you connect sources of energy to destinations and then rotate the pieces to solve the puzzle.

I started a new project that was completely unrelated to Ubuntu, which was a terminal emulator. A terminal emulator is a program that emulates a video terminal within some other display architecture.

I published a video of my work and no one cared at the start, but then a few months later I made another video and everyone lost their minds! I was really busy answering emails and questions about it. Then David Planella (Ubuntu Community Team Manager) approached me and asked me to bring the terminal to Ubuntu Touch, as the engine was the same, and that's where my Ubuntu story really began.

So, what’s your background?

I am currently studying Computer Engineering at University back in Italy.

Being part of a Community, what does it mean?

I wasn't part of the Community before doing something relevant; I became part of it after I was approached. Usually people first start by commenting on the forums or fixing bugs, where you begin to build a presence in the Community. For me it was just like falling from the sky, and now I want to be more involved in the Community. I never knew all these guys – until today I only knew Riccardo, Alan Pope (Engineering Manager) and David Planella through email exchanges, that's it!

How’s the Sprint going for you? 

The Sprint itself is a nice opportunity to see the USA as it is my first time here. For me it is a great opportunity to finally meet the people I have been working with remotely and say thanks. I find it hard when I work from home as you’re on your own, but now I’m here at the sprint I can go grab people and interact more.

When I compare myself to my schoolmates who aren’t involved in Ubuntu or other projects, I can see the benefits it will give me in my career after university.

What motivates you? 

I get motivated by the people I can learn from. In Ubuntu I'm involved with people who are much more experienced than me, so they can teach me new things and I can produce at the same time. Learning from others like that, on your own project or a part of Ubuntu, is not possible with closed source projects, because with closed source outsiders can only have an opinion on what's good or not. They can't tell you “you should do this”; they simply have an external point of view.

Another good thing about open source is that you can do a lot more with less effort. My terminal was taken from another terminal; if it wasn't open source, I would have had to write everything from the engine to the user interface. I drew influence from engines other people had made and adapted them to my needs, and the people who made those engines probably took from someone else – that's the beauty of open source.

I am happy if my project goes on and influences something/someone else, and they can take my software and adapt it to their own needs.

(From left to right: Riccardo, Andrew, Filippo and Victor)

(Community meal out)

Read more
Steph Wilson

Last week was a week of firsts for me: my first trip to America, my first Sprint and my first chili-dog.

Introducing myself as the new (and only) Editorial and Web Publisher, I dove head first into the world of developers, designers and Community members. It was a very absorbing week, which afterwards felt more like a marathon than a sprint.

After being grilled by Customs, we finally arrived at Tyson's Corner, where 200 or so developers, designers and Community members had gathered for the Devices Sprint. It was a great opportunity for me to see how people from every corner of the world contribute to Ubuntu and share their passion for open source. I found it especially interesting to see how designers and developers, given their different mindsets, collaborate.

The highlight for me was talking to some of the Community guys; it was really interesting to hear why and how they contribute from all corners of the world.

(From left to right: Riccardo, Andrew, Filippo and Victor)

(The main ballroom)

(Design Team dinner. From the left: TingTing, Andrew, John, Giorgio, Marcus, Olga, James, Florian, Bejan and Jouni)

I caught up with Olga and Giorgio to share their thoughts and experiences from the Sprint:

So how did the Sprint go for you guys?

Olga: “It was very busy and productive in terms of having face time with development, which was the main reason we went, as we don’t get to see them that often.

For myself personally, I have a better understanding of things in terms of what the issues are and what is needed, and also what can or cannot be done in certain ways. I was very pleased with the whole sprint. There was a lot of running around between meetings, and I tried to use the time in between to catch up with people. On the other hand, Development also approached the Design Team for guidance, opinions and a general catch-up/chat, which was great!”

Steph: “I agree, I found it especially productive in terms of getting the right people in the same room and working face-to-face, as it was a lot more productive than sharing a document or talking on IRC.”

Giorgio: “Working remotely with the engineers works well for certain tasks, but the Design Team sometimes needs to achieve a higher bandwidth through other means of communication, so these sprints every three months are incredibly useful.

What a Sprint allows us to do is to put a face to the name and start to understand each other’s needs, expectations and problems, as stuff gets lost in translation.

I agree with Olga, this Sprint was a massive opportunity to shift to a much higher level of collaboration with the engineers.”

What was your best moment?

Giorgio: “My best moment was when the engineers' perception of the Design Team's efforts changed. My goal is to improve this collaboration process with each Sprint.”

Did anything come up that you didn’t expect?

Giorgio: “Gaming was an underground topic that came up during the Sprint. There was a nice workshop on Wednesday on it, which was really interesting.”

Steph: “Andrew, a Community developer I interviewed, actually made two games one evening during the Sprint!”

Olga: “They love what they do, they’re very passionate and care deeply.”

Do you feel as a whole the Design Team gave off a good vibe?

Giorgio: “We got a good vibe, but it's still a work in progress; we need to raise our game and become even better. This has been a long process, as the design of the Platform and Apps wasn't simply done overnight. However, now we are at a mature stage of the process where we can afford to engage with the Community more. We are all in this journey together.

Canonical has a very strong engineering nature, as it was founded by engineers and driven by them, and it has evolved because of this. As a result, over the last few years the design culture has begun to complement that. Now they expect steer from the Design Team on a number of things, for example responsive design and convergence.

The Sprint was good, as we finally got more of a perception of what the other parties expect from us. It's like a relationship: you suddenly have a moment of clarity and enlightenment, where you start to see that you actually need to do that, and that will make the relationship better.”

Olga: “The other parties and the Development Team started to understand that initiating communication is not just the responsibility of the Design Team, but an engagement we all need to be involved in.”

All in all it was a very productive week, as everyone worked hard to push for the first release of the BQ phone, together with some positive feedback and shout-outs for the Design Team :)

(Unicorn hard at work)

There was a bit of time for some sightseeing too…

It would have been rude not to see what the capital had to offer, so on the weekend before the sprint we checked out some of Washington’s iconic sceneries.

(The Washington Monument)

We saw most of the important government buildings and monuments: the White House, the Washington Monument and the Lincoln Memorial. Seeing them in the flesh was spectacular; however, I half expected a UFO to appear over the Monument like in ‘Independence Day’, and Abraham Lincoln to suddenly get up off his chair like in the movie ‘Night at the Museum’ – unfortunately, neither happened.

(The White House)

D.C. isn't as buzzing as London, but it definitely has a lot of character, embodying an array of thriving ethnic pockets representing African, Asian and Latin American cultures, along with a lot of Italians. Washington is known for getting its sax on, so a few of the Design Team and I decided to check out the night scene and hit a local jazz club in Georgetown.

(Twins Jazz Club)

On the Sunday we decided to leave the hustle and bustle of the city and venture out to the beautiful Great Falls Park, only 10–15 minutes from the hotel. The park is located in northern Fairfax County, along the banks of the Potomac River, and is an integral part of the George Washington Memorial Parkway. Its creeks and rapids made for some great selfie opportunities…

(Great Falls Park)

Read more
Luca Paulina

A few weeks ago we launched ‘Machine view’ for Juju, a feature designed to allow users to easily visualise and manage the machines running in their cloud environments. In this post I want to share with you some of the challenges we faced and the solutions we designed in the process of creating it.

A little bit about Juju…
For those of you that are unfamiliar with Juju, a brief introduction. Juju is a software tool that allows you to design, build and manage application services running in the cloud. You can use Juju through the command-line or via a GUI and our team is responsible for the user experience of Juju in the GUI.

First came ‘Service View’
In the past we have primarily focused on Juju’s ‘Service view’ – a virtual canvas that enables users to design and connect the components of their given cloud environment.

Service_view

This view is fantastic for modelling the concepts and relationships that define an application environment. However, for each component or service block a user can have anything from one unit to hundreds or thousands of units, depending on the scale of the environment – and before machine view, units meant machines.

The goal of machine view was to surface these units and enable users to visualise, manage and optimise their use of machines in the cloud.

‘Machine view’: design challenges
There were a number of challenges we needed to address in terms of layout and functionality:

  • The scalability of the solution
  • The glanceability of the data
  • The ability to customise and sort the information
  • The ability to easily place and move units
  • The ability to track changes
  • The ability to deploy easily to the cloud

I’ll briefly go into each one of these topics below.

Scalability: Environments can be made up of a couple of machines or thousands. This means that giving the user a clear, light and accessible layout was incredibly important – we had to make sure the design looked and worked great at both ends of the spectrum.

Machine view

Simple_machine_view

Glanceability: Users need simple comparative information to help them choose the right machine at a glance. We designed and tested hundreds of different ways of displaying the same data and eventually ended up with an extremely cut-back listing which was clean and balanced.

A tour of the many incarnations and iterations of Machine view…

The ability to sort and customise: As it was possible and probable that users would scale environments to thousands of machines, we needed to provide the ability to sort and customise the views. Users can use the menus at the top of each column to hide information from view and customise the data they want visible at a glance. As users become more familiar with their machines they can turn off extra information for a denser view of their machines. Users are also given basic sorting options to help them find and explore their machines in different ways.

More_menu

The ability to easily place and move units: Machine view is built around the concept of manual placement – the ability to co-locate items (put more than one) on a single machine, or to define specific types of machines for specific tasks (as opposed to automatic placement, where each unit is given a machine of a pre-determined specification). We wanted to enable users to create the most optimised machine configurations for their applications.

Drag and drop was a key interaction that we wanted to exploit for this interface, because it significantly simplifies the process of manually placing units. The three-column layout aided the use of drag and drop: users are able to pick up units that need placing on the left-hand side and drag them to a machine in the middle column or a container in the third column. The headers also change to reveal drop zones, allowing users to create new machines and containers in one fluid action, keeping all of the primary interactions in view and accessible at all times.

Drag and drop in action on machine view

The ability to track changes: We also wanted to expose the changes being made throughout a user's environment as they went along, and allow them to commit batches of changes all together. Deciding which changes to expose, and the design of the uncommitted notification, was difficult: we had to make sure the notifications were not viewed as repetitive, that they were identifiable, and that the pattern could be used throughout the interface.

Uncommitted_SV

Uncommitted_MV

The ability to deploy easily to the cloud: Before machine view, it was impossible to design an entire environment before sending it to the cloud. The deployment bar is a new, ever-present canvas element that rationalises all of the changes made into a neat listing; it is also where users deploy or commit those changes. Look out for more information about the deployment bar in another post.

The change log exposed

The deployment summary

We hope that machine view will really help Juju users by increasing the level of control and flexibility they have over their cloud infrastructure.

This project wouldn’t have been possible without the diligent help from the Juju GUI development team. Please take a look and let us know what you think. Find out more about Juju, Machine View or take it for a spin.

Read more
niemeyer

The qml package is right now one of the best choices for creating graphical applications in the Go language. Part of the reason comes from the convenience of QML, a high-level domain-specific language that allows describing visual components, events, animations, and content in general in a succinct and pleasing way. Integrating such a language with Go means having both a good mechanism for describing visual content and a good platform for general development, which can range from simple data manipulation to involved OpenGL content rendering.

On the practical side, one implication of this language partnership is that every Go qml application will have some sort of resource content to deal with, carrying the QML logic. Such content may be loaded either from files on disk or from strings in memory. Loading from files means the content may be organized in multiple files that directly reference each other without changing the Go application, and may be updated and tested without rebuilding. Loading from a string in memory means the content must be self-contained, but results in a standalone binary (linking aside – it still depends on the Qt libraries).

There’s a well known trick to have both benefits at once, though, and the basic solution has already been made available in multiple packages: have the content on disk, and use an external tool to pack it up into a Go file that is built into the binary when the content is updated. Unfortunately, this trick alone is not enough with the qml package, because the QML engine needs to know what resources are available and where so that the right thing happens when it sees a directory being imported or an image path being referenced.

To solve that problem, the qml package has been enhanced with functionality that leverages the existing Qt resource system to pack content into the binary itself. Rather than using the upstream C++ and XML-based resource compiler, though, a new resource packer was implemented inside the qml package and made available both under a friendly Go API, and as a tool that follows common Go idioms.

The help text for the genqrc tool describes it in detail:

Usage: genqrc [options] <subdir1> [<subdir2> ...]

The genqrc tool packs all resource files under the provided subdirectories into
a single qrc.go file that may be built into the generated binary. Bundled files
may then be loaded by Go or QML code under the URL "qrc:///some/path", where
"some/path" matches the original path for the resource file locally.

Starting with Go 1.4, this tool may be conveniently run by the "go generate"
subcommand by adding a line similar to the following one to any existent .go
file in the project (assuming the subdirectories ./code/ and ./images/ exist):

    //go:generate genqrc code images

Then, just run "go generate" to update the qrc.go file.

During development, the generated qrc.go can repack the filesystem content at
runtime to avoid the process of regenerating the qrc.go file and rebuilding the
application to test every minor change made. Runtime repacking is enabled by
setting the QRC_REPACK environment variable to 1:

    export QRC_REPACK=1

This does not update the static content in the qrc.go file, though, so after
the changes are performed, genqrc must be run again to update the content that
will ship with built binaries.

The tool may be installed via go get as usual:

go get gopkg.in/qml.v1/cmd/genqrc

and once the qrc.go file has been generated, the main qml file may be loaded with logic equivalent to:

component, err := engine.LoadFile("qrc:///path/to/file.qml")

The loaded file can in turn reference any other content that was bundled into the Go binary.
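For instance, a minimal application booting entirely from bundled content might look like the sketch below, assuming qrc.go was generated from a ./qml/ subdirectory that contains main.qml:

package main

import (
        "fmt"
        "os"

        "gopkg.in/qml.v1"
)

func main() {
        if err := qml.Run(run); err != nil {
                fmt.Fprintf(os.Stderr, "error: %v\n", err)
                os.Exit(1)
        }
}

func run() error {
        engine := qml.NewEngine()
        // Load the main component from the content packed by genqrc.
        component, err := engine.LoadFile("qrc:///qml/main.qml")
        if err != nil {
                return err
        }
        win := component.CreateWindow(nil)
        win.Show()
        win.Wait()
        return nil
}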

For a better picture, this example demonstrates the use of the tool.

Read more
niemeyer

There were a few long-standing issues in the yaml.v1 package which were being postponed so that they could be addressed at once in a single incompatible change, and the time has come: yaml.v2 is now available.

Besides these incompatible changes, other compatible fixes and improvements were performed in that push, and those were also applied to the existing yaml.v1 package so that dependent applications benefit immediately and without modifications.

The subtopics below outline exactly what changed, and how to adapt existing code when necessary.

Type errors

With yaml.v1, decoding a YAML value that is not compatible with the Go value being unmarshaled into would silently drop the offending value without notice. In many cases continuing with degraded behavior by ignoring the problem is intended, but it was the one and only option.

In yaml.v2, these problems will cause a *yaml.TypeError to be returned, containing helpful information about what happened. For example:

yaml: unmarshal errors:
  line 3: cannot unmarshal !!str `foo` into int

On such errors the decoding process still continues until the end of the YAML document, so ignoring the TypeError will produce logic equivalent to the old yaml.v1 behavior.
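In practice, that allows each call site to choose between failing hard and degrading gracefully. A minimal sketch (the Config type is made up for the example):

import (
        "log"

        "gopkg.in/yaml.v2"
)

type Config struct {
        Name string `yaml:"name"`
        Port int    `yaml:"port"`
}

func loadConfig(data []byte) (*Config, error) {
        var config Config
        if err := yaml.Unmarshal(data, &config); err != nil {
                typeErr, ok := err.(*yaml.TypeError)
                if !ok {
                        return nil, err // not a type error: fail hard
                }
                // The rest of the document still decoded; ignoring the
                // error reproduces the old yaml.v1 degraded behavior.
                log.Printf("ignoring yaml type errors: %v", typeErr.Errors)
        }
        return &config, nil
}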

New Marshaler and Unmarshaler interfaces

The way that yaml.v1 allowed custom types to implement marshaling and unmarshaling of YAML content was slightly confusing and troublesome. For example, consider a CustomType with a keys field:

type CustomType struct {
        keys map[string]int
}

and supposing the goal is to unmarshal this YAML map into it:

custom:
    a: 1
    b: 2
    c: 3

With yaml.v1, one would need to implement logic similar to the following for that:

func (v *CustomType) SetYAML(tag string, value interface{}) bool {
        if tag == "!!map" {
                m := value.(map[interface{}]interface{})
                // ... iterate/validate/convert key/value pairs 
        }
        return goodValue
}

This is too much trouble when the package can easily do those conversions internally. To fix that, in yaml.v2 the Getter and Setter interfaces are both gone, replaced by the Marshaler and Unmarshaler interfaces.

Using the new mechanism, the example above would be implemented as follows:

func (v *CustomType) UnmarshalYAML(unmarshal func(interface{}) error) error {
        return unmarshal(&v.keys)
}
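The marshaling direction is symmetric. For completeness, a sketch of the Marshaler counterpart for the same type:

func (v *CustomType) MarshalYAML() (interface{}, error) {
        return v.keys, nil
}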

Custom-ordered maps

By default both yaml.v1 and yaml.v2 will marshal keys in a stable order which is increasing within the same type and arbitrarily defined across types. So marshaling is already performed in a sensible order, but it cannot be customized in yaml.v1, and there’s also no way to tell which order the map was originally in, as some applications require.

To fix that, there is a new pair of types that support preserving the order of map keys both when marshaling and unmarshaling: MapSlice and MapItem.

Such an ordered map literal would look like:

m := yaml.MapSlice{{"c", 3}, {"b", 2}, {"a", 1}}

The MapSlice type may be used for variables going in and out of the yaml package, or in struct fields, map values, or anywhere else sensible.
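As a quick sketch of it in use, marshaling such a value preserves the literal order of its items:

m := yaml.MapSlice{{"c", 3}, {"b", 2}, {"a", 1}}
data, err := yaml.Marshal(m)
if err != nil {
        panic(err)
}
fmt.Print(string(data))
// Output:
// c: 3
// b: 2
// a: 1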

Binary values

Strings in YAML must be valid UTF-8 or UTF-16 (with a byte order mark for the latter), and for binary data the specification defines a standard !!binary tag which represents the raw data encoded as base64. This is now supported both in yaml.v1 and yaml.v2, transparently. That is, any string value that is not valid UTF-8 will be base64-encoded and appropriately tagged so that it roundtrips as the same string. Short strings are inlined, while long ones are automatically broken into several lines and represented in a proper style. For example:

one: !!binary gICA
two: !!binary |
  gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgI
  CAgICAgICAgICAgICAgICAgICAgICAgICAgICA

Multi-line strings

Any string that contains new-line characters (‘\n’) will now be encoded using the literal style. For example, a value that would before be encoded as:

key: "line 1\nline 2\nline 3\n"

is now encoded by both yaml.v1 and yaml.v2 as:

key: |
  line 1
  line 2
  line 3

Other improvements

Besides these major changes, some assorted minor improvements were also performed:

  • Better handling of top-level “null”s (issue #35)
  • Marshal base 60 floats quoted for YAML 1.1 compatibility (issue #34)
  • Better error on invalid map keys (issue #25)
  • Allow non-ASCII characters in plain strings (issue #11)
  • Do not catch unrelated panics by mistake (commit a6dc653f)

To obtain the yaml.v1 improvements:

go get -u gopkg.in/yaml.v1

For updating to yaml.v2, adapt the code as necessary given the points above, replace the import path, and run:

go get -u gopkg.in/yaml.v2

Read more
Luca Paulina

Come and meet us at dConstruct

Ubuntu is sponsoring the dConstruct “Living with the network” event on the 5th of September at the Brighton Dome. Stop by for a chat with the team, grab some goodies and enter our competition for a chance to win an Ubuntu Phone.

nexus4_hero_shots

Read more
niemeyer

As detailed in the preliminary release of qml.v1 for Go a couple of weeks ago, my next task was to finish the improvements in its OpenGL API. Good progress has happened since then: the new API is mostly done and available for experimentation. At the same time, there's still work to do on polishing edges and documenting the extensive API. This blog post presents the improvements made and their internal design, and also invites help in finishing the pending details.

Before diving into the new API, let's first have a quick look at how a Go application using OpenGL looks with qml.v0. This is an excerpt from the old painting example:

import (
        "gopkg.in/qml.v0"
        "gopkg.in/qml.v0/gl"
)

func (r *GoRect) Paint(p *qml.Painter) {
        width := gl.Float(r.Int("width"))
        height := gl.Float(r.Int("height"))
        gl.Enable(gl.BLEND)
        // ...
}

The example imports both the qml and the gl packages, and then defines a Paint method that makes use of the GL types, functions, and constants from the gl package. It looks quite reasonable, but there are a few relevant shortcomings.

One major issue in the current API is that it offers no means to tell even at a basic level what version of the OpenGL API is being coded against, because the available functions and constants are the complete set extracted from the gl.h header. For example, OpenGL 2.0 has GL_ALPHA and GL_ALPHA4/8/12/16 constants, but OpenGL ES 2.0 has only GL_ALPHA. This simplistic choice was a good start, but comes with a number of undesired side effects:

  • Many trivial errors that should be compile errors fail at runtime instead
  • When the code does work, the developer is not sure about which API version it is targeting
  • Symbols for unsupported API versions may not be available for linking, even if unused

That last point also provides a hint of another general issue: portability. Every system has particularities for how to load the proper OpenGL API entry points. For example, which libraries should be linked with, where they are in the local system, which entry points they support, etc.

So this is the stage for the improvements that are happening. Before detailing the solution, let’s have a look at the new painting example in qml.v1, that makes use of the improved API:

import (
        "gopkg.in/qml.v1"
        "gopkg.in/qml.v1/gl/2.0"
)

func (r *GoRect) Paint(p *qml.Painter) {
        gl := GL.API(p)
        width := float32(r.Int("width"))
        height := float32(r.Int("height"))
        gl.Enable(GL.BLEND)
        // ...
}

With the new API, rather than importing a generic gl package, a version-specific gl/2.0 package is imported under the name GL. That choice of package name allows preserving familiar OpenGL terms for both the functions and the constants (gl.Enable and GL.BLEND, for example). Inside the new Paint method, the gl value obtained from GL.API holds only the functions that are defined for the specific OpenGL API version imported, and the constants in the GL package are also constrained to those available in the given version. Any improper references become build time errors.

To support all the various OpenGL versions and profiles, there are 23 independent packages right now. These packages are of course not hand-built. Instead, they are generated all at once by a tool that gathers information from various sources. The process can be tersely described as:

  1. A ragel-based parser processes Qt’s qopenglfunctions_*.h header files to collect version-specific functions
  2. The Khronos OpenGL Registry XML is parsed to collect version-specific constants
  3. A number of tweaks defined in the tool's code are applied to the state
  4. Packages are generated by feeding the state to text templates

Version-specific functions might also be extracted from the Khronos Registry, but there’s a good reason to use information from the Qt headers instead: Qt already solved the portability issue. It works in several platforms, and if somebody is using QML successfully, it means Qt is already using that system’s OpenGL capabilities. So rather than designing a new mechanism to solve the same problem, the qml package now leverages Qt for resolving all the GL function entry points and the linking against available libraries.

Going back to the example, it also demonstrates another improvement that comes with the new API: plain types that do not carry further meaning such as gl.Float and gl.Int were replaced by their native counterparts, float32 and int32. Richer types such as Enum were preserved, and as suggested by David Crawshaw some new types were also introduced to represent entities such as programs, shaders, and buffers. The custom types are all available under the common gl/glbase package that all version-specific packages make use of.

So this is all working and available for experimentation right now. What is left to do is almost exclusively improving the list of function tweaks, with two goals in mind; these are highlighted below, as they are areas where help would be appreciated, mainly due to the footprint of the API.

Documentation importing

There are a few hundred functions to document, but a large number of these are variations of the same function. The previous approach was to simply link to the upstream documentation, but it would be much better to have polished documentation attached to the functions themselves. This is the new documentation for MultMatrixd, for example. For now the documentation is being imported manually, but the final process will likely consist of some automation and some manual polishing.

Function polishing

The standard C OpenGL API can often be translated automatically (see BindBuffer or BlendColor), but in other cases the function prototype has to be tweaked to become friendly to Go. The translation tool already has good support for defining most of these tweaks independently from the rest of the machinery. For example, the following logic changes the ShaderSource function from its standard form into something convenient in Go:

name: "ShaderSource",
params: paramTweaks{
        "glstring": {rename: "source", retype: "...string"},
        "length":   {remove: true},
        "count":    {remove: true},
},
before: `
        count := len(source)
        length := make([]int32, count)
        glstring := make([]unsafe.Pointer, count)
        for i, src := range source {
                length[i] = int32(len(src))
                if len(src) > 0 {
                        glstring[i] = *(*unsafe.Pointer)(unsafe.Pointer(&src))
                } else {
                        glstring[i] = unsafe.Pointer(uintptr(0))
                }
        }
`,

Other cases may be much simpler. The MultMatrixd tweak, for instance, simply ensures that the parameter has the proper length, and injects the documentation:

name: "MultMatrixd",
before: `
        if len(m) != 16 {
                panic("parameter m must have length 16 for the 4x4 matrix")
        }
`,
doc: `
        multiplies the current matrix with the provided matrix.
        ...
`,

and as an even simpler example, CreateProgram is tweaked so that it returns a glbase.Program instead of the default uint32.

name:   "CreateProgram",
result: "glbase.Program",
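On the caller's side these tweaks compose naturally. A hedged sketch of how the resulting API might be used together inside a Paint method, assuming (as with CreateProgram) that CreateShader was similarly tweaked to return a glbase.Shader, and with vertexSource being a plain Go string holding the shader text:

gl := GL.API(p)
program := gl.CreateProgram()               // glbase.Program rather than uint32
shader := gl.CreateShader(GL.VERTEX_SHADER) // assumed to return glbase.Shader
gl.ShaderSource(shader, vertexSource)       // plain Go strings; no lengths or counts
gl.CompileShader(shader)
gl.AttachShader(program, shader)
gl.LinkProgram(program)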

That kind of polishing is where contributions would be most appreciated right now. One valid way of doing this is picking a range of functions, importing and polishing their documentation manually, and while doing that keeping an eye out for tweaks that should be performed on each function based on its documentation and prototype.

If you’d like to help somehow, or just ask questions or report your experience with the new API, please join us in the project mailing list.

Read more
Iain Farrell

Verónica Sousa’s Cul de sac

Ubuntu was once described to me by a wise(ish ;) ) man as a train that leaves whether you're on it or not. That's the beauty of a 6 month release cycle. As many of you will already know, each release we include photos and illustrations produced by community members. We ask that you submit your images using the free photo sharing site Flickr, and that you limit your submissions this time to 2. The group won't let you submit more than that, but if you change your mind after you've submitted, fear not: simply remove one and it'll let you add another.

As with previous submission processes we've run, and in conjunction with the designers at Canonical, we've come up with the following tips for creating wallpaper images.

  1. Images shouldn't be too busy or filled with too many shapes and colours; a similar tone throughout is a good rule of thumb.
  2. A single point of focus, a single area that draws the eye into the image, can also help you avoid something too cluttered.
  3. The left and top edges are home to Ubuntu's Launcher and Panel, so consider how your images look in place so as not to clash with the user interface. Try them out on your own desktop, see how they feel.
  4. Try your image at different aspect ratios to make sure something important isn't cropped out on smaller/larger screens at different resolutions.
  5. Take a look at the wallpaper guidance on the Ubuntu Wiki regarding the size of images. Our target resolution is 2560 x 1600.
  6. Break all the rules except the resolution one! :D

To shortlist from this collection, we'll be asking the contributors whose images were selected last time around to act as our selection judges. In doing this we'll hold a series of public IRC meetings on Freenode in #1410wallpaper to discuss the selection. In those sessions we'll get the selection team to try out the images on their own Ubuntu machines to see what they look like on a range of displays and resolutions.

Anyone is welcome to come to these sessions, but please keep in mind that an outcome is needed, people are volunteering their time, and there are usually a lot of images to get through, so we'd appreciate it if there isn't too much additional debate.

Based on the Utopic release schedule, I think our schedule for this cycle should look like this:

  • 08/08/14 – Kick off 14.10 wallpaper submission process.
  • 22/08/14 – First get together on #1410wallpaper at 19:30 GMT.
  • 29/08/14 – Submissions deadline at 18:00 GMT – Flickr group will be locked and the selection process will begin.
  • 09/09/14 – Deliver final selection in zip format to the appropriate bug on Launchpad.
  • 11/09/14 – UI freeze for latest version of Ubuntu with our fantastic images in it!

As always, ping me if you have any questions – I'll be lurking in #1410wallpaper on Freenode – or leave a question in the Flickr group for wider discussion; that's probably the fastest way to get an answer.

I’ll be posting updates on our schedule here from time to time but the Flickr group will serve as our hub.

Happy snapping and scribbling and on behalf of the community, thanks for contributing to Ubuntu! :)


Read more
Jussi Pakkanen

There are usually two different ways of doing something. The first is the correct way. The second is the easy way.

As an example of this, let's look at using the functionality of the C++ standard library. The correct way is to use the fully qualified name, such as std::vector or std::chrono::milliseconds. The easy way is to have using namespace std; and then just use the class names directly.

The first way is the “correct” one, as it prevents symbol clashes, among a bunch of other good reasons. The latter leads to all sorts of problems, and for this reason many style guides prohibit its use.

But there is a catch. Software is written by humans and humans have a peculiar tendency.

They will always do the easy thing.

There is no possible way for you to prevent them from doing that, apart from standing behind their backs and watching every letter they type.

Systems that rely, in any way, on people doing the right thing rather than the easy thing are doomed to fail from the start. They. Will. Not. Work. And they can’t be made to work. Trying to force them to work leads only to massive shouting and bad blood.

What does this mean to you, the software developer?

It means that the only way your application/library/tool/whatever is going to succeed is if the correct thing to do is also the easiest thing to do. That is the only way to get people to do the right thing consistently.

Read more
Tom Macfarlane

Mobile Asia Expo 2014

Following the success of our new stand design at MWC earlier this
year, we applied the same design principles to the Ubuntu stand at
last month’s Mobile Asia Expo in Shanghai.

Image: MAE_2014_stand

With increased floor space, compared to last year, and a new stand
location that was approachable from three key directions, we were
faced with a few new design challenges:

  • How to effectively incorporate existing 7m wide banners into
    the new 8m wide stand?
  • How to make the stand open and approachable from three sides,
    while making optimum use of floor space and keeping as much
    storage space as possible?
  • How to maintain our strong brand presence after any necessary
    structural changes?

Proposed layout ideas

Image: MAE_drawing_1

Image: MAE_drawing_2

Image: MAE_drawing_3

Final layout

The final design made maximum use of the floor space and
incorporated our bespoke demo pods, which had proved successful at
MWC, along with strong branding: our folded paper background with
large graphics showcasing app and scope designs, and a new aisle
banner. The main stand banners were then positioned above the
stand in an alternating arrangement, aligned to the left and right.

Image: MAE_drawing_4

Aisle banner

Image: Ubuntu_hanging_banner_Preview

Image: MAE_stand_front

Image: MAE_stand_back

Read more
Inayaili de León Persson

Latest from the web team — June 2014

We’re now almost halfway through the year, and there are only a few days until summer officially starts here in the UK!

In the last few weeks we’ve worked on:

  • Responsive ubuntu.com: we’ve finished publishing the series on making ubuntu.com responsive on the design blog
  • Ubuntu.com: we’ve released a hub for our legal documents and information, and we’ve created homepage takeovers for Mobile Asia Expo
  • Juju GUI: we’ve planned work for the next cycle, sketched scenarios based on the new personas, and launched the new inspector on the left
  • Fenchurch: we’ve finished version 1 of our new asset server, and we’ve started work on the new Ubuntu partners site
  • Ubuntu Insights: we’ve published the latest iteration of Ubuntu Insights, now with a dedicated press area
  • Chinese website: we’ve released the Chinese version of ubuntu.com

And we’re currently working on:

  • Responsive Day Out: I’m speaking at the Responsive Day Out conference in Brighton on the 27th on how we made ubuntu.com responsive
  • Responsive ubuntu.com: we’re working on the final tweaks and improvements to our code and documentation so that we can release to the public in the next few weeks
  • Juju GUI: we’re now starting to design based on the scenarios we’ve created
  • Fenchurch: we’re now working on Juju charms for the Chinese site asset server and Ubuntu partners website
  • Partners: we’re finishing the build of the new Ubuntu partners site
  • Chinese website: we’ll be adding cloud and server sections to the site
  • Cloud Installer: we’re working on the content for the upcoming Cloud Installer beta pages

If you’d like to join the web team, we’re currently looking for a web designer and a front-end developer!

Image: Working on Juju personas and scenarios.

Have you got any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Let us know your thoughts in the comments.

Read more
Inayaili de León Persson

Making ubuntu.com responsive: final thoughts

This post is part of the series ‘Making ubuntu.com responsive’.

There are several resources out there on how to create responsive websites, but they tend to go through the process in an ideal scenario, where the project starts from scratch with a blank slate.

That’s why we thought it would be nice to share the steps we took in converting our main website and framework, ubuntu.com, into a fully responsive site, with all the constraints that come from working on legacy code, with very little time available, while making sure the site was kept up to date and responding to the needs of the business.

Before we started this process, the idea of converting ubuntu.com seemed like a herculean task. It was only possible because we divided the work into several stages and smaller projects, tightened the scope, and kept things simple.

We learned a lot throughout this past year or so, and there is a lot more we want to do. We’d love to hear about your experience of similar projects, suggestions on how we can improve, tools we should look into, books we should read, articles we should bookmark, and things we should try, so please do leave us your thoughts in the comments section.

Here is the list of all the posts in the series:

  1. Intro
  2. Setting the rules
  3. Making the rules a reality
  4. Pilot projects
  5. Lessons learned
  6. Scoping the work
  7. Approach to content
  8. Making our grid responsive
  9. Adapting our navigation to small screens
  10. Dealing with responsive images
  11. Updating font sizes and increasing readability
  12. Our Sass architecture
  13. Ensuring performance
  14. JavaScript considerations
  15. Testing on multiple devices

Note: I will be speaking about making ubuntu.com responsive at the Responsive Day Out conference in Brighton, on the 27th June. See you there!

Read more
Inayaili de León Persson

This post is part of the series ‘Making ubuntu.com responsive’.

When working on a responsive project you’ll have to test on multiple operating systems, browsers and devices, whether you’re using emulators or the real deal.

Testing on the actual devices is preferable — it’s hard to emulate the feel of a device in your hand and the interactions and gestures you can do with it — and more enjoyable, but budget and practicability will never allow you to get a hold of and test on all the devices people might use to access your site.

We followed a few very simple steps, which anyone can replicate, to decide which devices to test ubuntu.com on.

Numbers

You can quickly get a grasp of which operating systems, browsers and devices your visitors are using to get to your site just by looking at your analytics.

By doing this you can establish whether some of the more troublesome ones are worth investing time in. For example, if only 10 people accessed your site through Internet Explorer 6, perhaps you don’t need to provide a PNG fallback solution. But you might also get a few less pleasant surprises and find that a hard-to-cater-for browser is one of your customers’ favourites.

When we did our initial analysis we didn’t find any real surprises. However, due to the high volume of traffic that ubuntu.com sees every month, even a very small percentage represented a large number of people that we just couldn’t ignore. It was important to keep this in mind as we defined which browsers, operating systems and devices to test on, and which issues we’d fix where.

Browsers (between 11 February and 13 March 2014)

  Browser                     Percentage usage
  Chrome                      46.88%
  Firefox                     36.96%
  Internet Explorer (total)    7.54%
      11                      41.15%
      8                       22.96%
      10                      17.05%
      9                       14.24%
      7                        2.96%
      6                        1.56%
  Safari                       4.30%
  Opera                        1.68%
  Android Browser              1.04%
  Opera Mini                   0.45%

  (Indented version rows show each version’s share of that browser’s visits.)

Operating systems (between 11 February and 13 March 2014)

  Operating system            Percentage usage
  Windows (total)             52.45%
      7                       60.81%
      8.1                     14.31%
      XP                      13.30%
      8                        8.84%
      Vista                    2.38%
  Linux                       35.40%
  Macintosh                    6.14%
  Android (total)              3.32%
      4.4.2                   19.62%
      4.3                     15.51%
      4.1.2                   15.39%
  iOS                          1.76%

  (Indented version rows show each version’s share of that operating system’s visits.)

Mobile devices (between 12 May and 11 June 2014)

  5.41% of total visits

  Device                      Percentage usage (of the 5.41%)
  Apple iPad                  17.33%
  Apple iPhone                12.82%
  Google Nexus 7               3.12%
  LG Nexus 5                   2.97%
  Samsung Galaxy S III         2.01%
  Google Nexus 4               2.01%
  Samsung Galaxy S IV          1.17%
  HTC One (M7)                 0.92%
  Samsung Galaxy Note 2        0.88%
  Not set                     16.66%

After analysing your numbers, you can also define which combinations (operating system and browser) to test on.

Go shopping

Based on the most popular devices people were using to access our site, we made a short list of the ones we wanted to buy first. We weren’t married to the analytics numbers, though: the idea was to cover a range of screen sizes and operating systems and expand as we went along.

  • Nexus 7
  • Samsung Galaxy S III
  • Samsung Galaxy Note II

We opted not to splash out on an iPad or iPhone, as there are quite a few around the office (retina and non-retina), and the money we saved meant we could buy other, less common devices.

Image: Part of our current device suite.

When we started to get a few bug reports from Android Gingerbread and Windows phone users, we decided we needed phones with those operating systems installed. This was our second batch of purchases:

  • Samsung Galaxy Y
  • Kindle Fire HD (Amazon was having a sale at the time we made the list!)
  • Nokia Lumia 520
  • LG G2

And, last but not least, we use a Nexus 4 to test Ubuntu on phones.

We didn’t spend much on any of our shopping sprees, but we have managed to slowly build an ever-growing device suite that we can test our sites on, which is invaluable when working on responsive projects.

Alternatives

Some operating systems and browsers are trickier to test on native devices. We have a BrowserStack account that we tend to use mostly to test on older Windows and Internet Explorer versions, although we also test Windows on virtual machines.

Image: The BrowserStack website.

Tools

We have to confess we’re not using any special software that synchronises testing and interactions across devices. We haven’t really felt the need for that yet, but at some point we should experiment with a few tools, so we’d like to hear suggestions!

Browser support

We prefer to think of different levels (or ways) of access to the content rather than browser support. The overall rule is that everyone should be able to get to the content, and bugs that obstruct access are prioritised and fixed.

As much as possible, and depending on the resources available at the time, we try to fix rendering issues in browsers and devices used by a higher percentage of visitors: the degree of usage defines the degree of effort we put into fixing rendering bugs.

And, obviously: dowebsitesneedtolookexactlythesameineverybrowser.com.

Read the final post in this series: “Making ubuntu.com responsive: final thoughts”

Read more
Inayaili de León Persson

This post is part of the series ‘Making ubuntu.com responsive’.

All our designs are created using the Ubuntu font, and the websites are no exception. Ubuntu.com was already using a carefully refined and tested typographic scale that we have evolved over the years.

Early in the project we had decided to keep the large-screen view of the website, so we reused the original styles (with some updates and clean-up), and they became the typographic scale for the desktop view.

Adjust as needed

When working on pilot responsive projects like Ubuntu Insights and canonical.com, we used that original scale but reduced it, keeping the proportions.

After some device testing to adjust paragraph size for comfortable reading, we settled on having the base font size modified at our grid breakpoints to 14px, 15px and 16px.
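
In CSS terms this amounts to something like the sketch below; the breakpoint widths are illustrative assumptions, and only the 14px, 15px and 16px base sizes come from our scale:

    /* Base font size steps up at the grid breakpoints
       (the breakpoint widths here are assumed for illustration) */
    body { font-size: 14px; }

    @media only screen and (min-width: 768px) {
      body { font-size: 15px; }
    }

    @media only screen and (min-width: 984px) {
      body { font-size: 16px; }
    }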

Because we kept the proportions of the sizes, though, it was easy to see that some font sizes and margins were too large in proportion to others on small screens (especially the larger headings, like h1), so we tweaked these as needed to improve readability.

A typographic scale can serve as a guide, but you don’t have to stay married to it: adjusting sizes to what feels better is an important step in making sure your text is comfortable to read at various sizes — and the best way to test this is on the actual devices!

Image: Testing ubuntu.com on various devices.

Use ems

Our typographic scale is defined in pixels, but the sizes are specified in ems in our CSS so they can be scalable.
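
The conversion is simply the target pixel size divided by the base size; here is a quick sketch assuming a 16px base, with illustrative heading sizes:

    /* em value = target px / base px (16px base assumed) */
    h1 { font-size: 2em; }    /* 32px / 16px */
    h2 { font-size: 1.5em; }  /* 24px / 16px */
    p  { font-size: 1em; }    /* 16px / 16px */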

Image: Differences in the typographic scale from small to large screens.

Reuse existing patterns

We have a weekly designers meeting where we talk about new patterns we’re working on in our separate projects across the entire design team. This way, we minimise the risk of creating new patterns when existing ones are in place, and when something new is created it can be shared with the rest of the team to use.

So we made sure to show our updated typographic scale and get everyone’s feedback on it, including from designers on the apps and platforms teams. The best part was that we had all reached similar conclusions about which sizes worked best on small screens (the variation was in the decimals), so we were being consistent when it came to mobile!

Remember fallback fonts

Even though web fonts are widely supported now, some browsers, like Opera Mini, just don’t support them. So it’s always a good idea to look at your site across various devices and browsers with the font-face declarations turned off, to check that the fallback fonts you’ve declared look as good as possible and match the original font closely. By doing this you’ll make the transition between the fallback font and the web font, once it’s finally loaded, less obvious and less jarring for the user.
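
For example, a font stack along these lines; the specific fallback fonts are illustrative assumptions, not our production stack:

    /* Web font first, then fallbacks chosen to match its
       proportions as closely as possible */
    body {
      font-family: "Ubuntu", "Helvetica Neue", Helvetica, Arial, sans-serif;
    }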

Image: Opera Mini, without the Ubuntu font.

Conclusion

There are a few simple things you can do when transitioning from a fixed-width to a responsive website. The focus should be on improving readability, so it’s vital to check as many pages and screens of your site as possible on different devices. And remember that picking a typographic scale is not as simple as taking numbers out of a generator: you have to see it in action and adapt it to your project.

We’d love to hear how you handled typography in your responsive projects — leave your thoughts in the comments area!

Read the next post in this series: “Making ubuntu.com responsive: our Sass architecture”

Read more