Canonical Voices

Bejan Alizadeh

Sheets transition

We’ve recently been exploring how share transitions should work when you’re previewing a photo in gallery mode. Our main goal is to have a consistent transition for sharing photos across the phone.

This is the latest iteration of the explorations we’ve been doing, and, as such, these transitions are still work in progress, but certainly worth sharing.

Step by step


Video: Sharing a photo in photo gallery mode

The first transition happens when you select “Share” from the toolbar. This takes you to a ‘content picker’ mode where you can select where you’d like to share your photo (Facebook, Twitter, etc.).

The intention is that the ‘content picker’ transition is similar to the ‘page stack’ one — which takes you deeper into the app — but because you’re going into a ‘content picker’ mode the transition needs to be slightly different. That difference is the direction: instead of going from right to left it goes bottom to top.

Once you’ve selected how to share your photo, the screen splits slightly below where you’ve tapped (in the example, below Facebook), and there is a subtle transparency fade so that the transition is less jarring.

In the next step, the transition takes you to an embedded Facebook share page, where you can write a description of the photo you’re posting. Once you select the description box, the on-screen keyboard (OSK) slides in from the bottom, something that is always consistent across the phone.

When you tap “Post”, a transition similar to the share-selection one, but reversed, takes you back to the photo.

Your feedback

As I’ve mentioned before, this is still work in progress, but we’re really interested in hearing your thoughts — let us know what you think in the comments.

Read more
Inayaili de León Persson

New year, new website: the new canonical.com

We’ve been talking about it for a while and we are now happy to reveal Canonical’s brand new website.

The brief

We thought that it was more than appropriate that, in the year that Canonical commemorates its 10th anniversary, our website got some love, so that’s exactly what we set out to do.

The homepage of the new canonical.com on various devices

The main goal of this redesign was to create a website that clearly communicates what Canonical is and does: to present our services, describe our role in the creation of Ubuntu, and give users an understanding of the principles behind Canonical as a company.

The journey

We set out to distill the Canonical site into its most essential components. This required a huge amount of editing as the site had grown over time. This was not a straightforward task, but there were a few things that we knew would get us very close to that goal:

  • Clearly define canonical.com’s audiences and make sure the new site’s content was created with them in mind
  • Move the content that dates easily (events, news, etc.) from the site to a searchable repository
  • Move all detailed product and service information to www.ubuntu.com to make it easier to find

We began preparing to move a lot of the content that previously lived on the site a few months ago, when we started the Ubuntu Resources project — a place for content such as news, events, press releases, white papers and case studies.

Ubuntu Resources (currently in ‘alpha’) is also our first responsive site, and a lot of the lessons we have been learning from it, code- and design-wise, have been applied to the new canonical.com, like the small screen site navigation and the global Ubuntu sites navigation.

Carla has published a very interesting post on how she used stakeholder interviews to define the website’s key journeys and audiences. This research was instrumental in keeping the content of the site focused and the information architecture as simple as possible.

Before moving onto a digital format, we did a lot of collaborative sketching, churning out ideas on how we could illustrate each page’s message.

Generating ideas: some of our sketches

Even though we were working towards a fairly tight deadline, we went through several content, design and code iterations, with copywriters, designers and developers working closely together and improving as much as possible until we were happy with the results.

Our ever-changing analog status board — sometimes only sticky notes will do!

The visual design borrowed most of the underlying patterns from www.ubuntu.com, such as the grid and font sizes. Ubuntu’s website has been evolving into a more ‘open’ design and the new Canonical website takes that idea even further by removing the main content container and increasing spacing between elements.

We also brought in new patterns, influenced by the design work that is being done on the phone and tablet, like the grid used in small screens, the Ubuntu shape (the squircle) and the folded paper background.

Using the squircle and the folded paper background on the new canonical.com

The result

We’re very happy with the result, and we think it achieves the goals we set out to accomplish. Now that the site is launched though, it’s up to everyone who visits it to let us know how we did: do let us know your thoughts in the comments!

Read more
Matt Turnbull

New year links

Happy new year!

Here are a couple of links that have been flying around the London office since we returned. The Verge did a recap of their most influential people of 2013.

And there’s a report from JWT pointing at some nice trends, manifestations and insights for 2014 (thank you, Daniel, for the link).

Read more
Bejan Alizadeh

Messaging interaction

We’ve been working on the user interaction for sending and deleting multiple SMS messages, and we thought it would be nice to show you where we’re going with it.

Here are some of the things that we’ve had to consider when making these interactions user friendly:

  • making sure the transitions fit within our paper metaphor — for example, when you select a message thread we wanted it to feel like it’s taking you deeper into the app
  • just like with the visual assets, transitions also need to be consistent — for example, whenever you’re diving deeper into any app the transition should be similar
  • making room for scrolling and seeing more messages without the keypad taking too much space
  • making sure it’s clear when a message is pending
  • initial exploration into how to navigate back within an app


Video: Sending messages


Video: Deleting messages

As with many of the current interactions, these are work in progress — we’ll be keeping you updated with any further developments.

Read more
Inayaili de León Persson

Latest from the web team — December 2013

This month we’ve been working hard trying to wrap up as much as we can before the holidays and planning for 2014.

In the last few weeks we’ve worked on:

  • Ubuntu Resources: updating the site based on feedback we received from users — keep those comments coming!
  • Canonical website: getting the site ready for launch, which will happen early next month
  • Juju GUI: adding animations to the GUI
  • Landscape: providing designs for an upcoming visual update
  • Juju Labs: designing and updating the labs section of juju.ubuntu.com

And we’re currently working on:

  • Ubuntu Resources: adding topic-based subscriptions and filters in search results
  • Ubuntu 14.04 release: believe it or not, we’re already starting to look into how we’ll be updating www.ubuntu.com for this LTS
  • Ubuntu.com: updating our partner pages in the new year and adding new Ubuntu installation videos to the site
  • Canonical website: now testing the new website on various mobile devices
  • Juju GUI: refining the bundle and browse experience and interactions
  • Fenchurch: moving towards continuous integration of Juju service with Canonical’s IS team

We have also had a very fun Canonical End of Year party!

A photo of our status board for the upcoming, updated canonical.com

Have you got any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Let us know your thoughts in the comments.

Read more
Carla Berkers

I’d like to share my experience working on the project that has been my main focus over the past months: the redesign of canonical.com.

Research methods

As I started talking to people in the design department, I quickly discovered we have a lot of information about our site visitors. My colleagues helped me access Google Analytics data and findings from previous user testing sessions. There was also a research-based set of personas that helped me put together an initial overview of user needs and tasks.

I was curious to try to validate these user needs and test them against Canonical’s current business requirements. In order to find out more about the company goals I prepared a stakeholder interview script and started putting together a list of people to talk to. I initially planned to limit the number of interviewees to about six to eight stakeholders, as too many opinions could potentially slow down the project and complicate the requirements.

Getting to know the company

I started with eight people to talk to, but with each interview I found out about other people I should add to my list. At the same time, many of the interviewees warned me that every person I talked to would have different ideas about the site requirements. By the end of the first round of interviews, ending up with too many stakeholders had turned out to be the most commonly mentioned risk to the project finishing on time.

I had very diverse conversations about different aspects of the site with a range of people. From strategic insights from our CEO Jane, to brand guidelines and requirements from our Head of Design Marcus and ideas around recruitment from our HR business partner Alice — each conversation brought unique requirements to light.

After I spoke to about fifteen people, I summarised the key points from each stakeholder on post-it notes and put them all up on a wall in one of the meeting rooms in the office. As I took out the duplicates and restructured the remaining notes, I began to see a familiar pattern.

Conclusions

When I finished grouping the different audiences, I ended up with five groups of users: enterprise customers, (potential) partners, job seekers, media (a varied group that includes press, tech analysts and bloggers), and open source advocates together with the more generic tech enthusiasts who want to know more about the company backing Ubuntu.

As these groups aligned very well with the personas and other pieces of research I had found, I felt comfortable continuing my process by moving on to the user needs and site goals that would help build a good site structure and generate useful content for each group of users.

I found that talking to many experts from within the company helped me quickly understand the full range of requirements, saving me time rather than making my job more complicated. Furthermore, I was happy to get a chance to get to know people from different parts of the company so soon after I got started.

In order to keep the project moving forward, we appointed one key stakeholder to sign off each step of the process, but I’m looking forward to showing the end results to the broader group to see if I managed to meet all their expectations. We will also conduct user testing to ensure the site answers our core audiences’ questions and allows them to complete their tasks.

I hope to share more about this project in the months to come.

Read more
Matthieu James

The new Ubuntu icons

During last month’s vUDS we showcased the latest design explorations for the new Ubuntu icon theme. Here is a summary of what we presented.

Our objectives

This project’s main goal is to create a single modern, high-resolution icon theme for desktop and touch devices that can adapt to various screen densities and reinforces the Ubuntu user experience. We want our icons to express our values and convey Ubuntu’s personality in a unique way.

We already had mobile icons for the applications and symbols but, because they evolved over time without strong guidelines, they did not form a consistent set. On the desktop, even though the style is clean and consistent, the icons looked dated and needed to be replaced too.

The previous version of the desktop, mobile and monochromatic icons

New icons

We’ve been working on this ongoing project for the past year. We’ve done extensive research on the subject, with a focus on learning how best to classify the icons, and we’ve gone through several design iterations and explorations.

So here is the latest iteration of the new icon set. As I’ve mentioned, these are all still subject to change as we’re constantly improving and refining the designs.

Latest application icons

Latest symbolic icons

Icons in context — one of the latest design explorations of the dash

Next steps

The goals for 14.04 are to provide a new icon theme for mobile and tablet, and to provide guidelines with templates to help people design consistent icons for their apps. We’d like to eventually implement the new set on the desktop too.

We’ve had lots of good feedback so far, and we’d like to get even more, so please let us know your thoughts in the comments!

Read more
Christina Li

App Design Clinic #6

We have been running the app design clinic every two weeks to answer any questions from community designers and developers on the apps they are working on!

For this session we talked about the community-submitted convergence designs for the file manager and clock app (thanks everyone!), as well as answering some questions from our Canonical engineers about their submitted apps, such as:
- If your app has two equal actions, how do you provide entry points?
- What if I want to show more content, but a page stack is not appropriate?
- Where should ‘About’ & ‘Settings’ go? (Not in the tabs, please)

If you missed it, or want to watch it again, here it is:

Please send your questions and screenshots to design@canonical.com by 1pm UTC on Tuesdays to be included in the following Wednesday clinic.

Watch this space for our next App Design Clinic time.

Read more
Katie Taylor

App Design Clinic #5

Over the last few months we have been running the App Design Clinic and we want to thank you for all your submissions, contributions and feedback!
The Design Clinic happens every two weeks, and the last one coincided with vUDS, so we included more information on general app design as well as answering questions.

Here’s what happened:

- A summary of topics from previous App Design Clinics
- A run-down of where to find the necessary tools and resources to design an app
- And finally, how you can contribute and participate in the clinic

The next one happens tomorrow, 4th of December. Please send your questions and screenshots to design@canonical.com by 1pm UTC on Tuesdays to be included in the following Wednesday clinic.

Read more
Christina Li

On 19-21 November we had our vUDS where we got to discuss and share with the community some of the design work we’ve been doing recently.

Our topics ranged from our design blog to convergence designs to Juju GUI cloud to icon designs!

If you missed any of our sessions, don’t worry: they are all below for you to check out!

Design Blog

Love our blog? How can we make it better? What topics would you like to see?

Responsive Design

Hear about our thoughts on converging our patterns, components and designs from phone to tablet to desktop.

App Design Clinic

Every two weeks, we gather to talk about app designs and patterns. If you are developing an app or have any questions on apps, let us know!

Designing a responsive website and web guide

We talked about the process of designing a responsive website and shared the current web style guide we have been using for the main Ubuntu.com site.

Research on Windows and Android usability

Juju GUI design evolution

User research has informed the way Juju GUI has changed over the last year. Here is the evolution of Juju GUI.

Designing icons for Ubuntu

We have been designing icons for Ubuntu Phone, Tablet and Desktop. Check them out!

Let us know what you think, or any suggestions on what you’d like to see next from the Design team at the next vUDS!

Read more
Inayaili de León Persson

Latest from the web team — November 2013

Even though Ubuntu 13.10’s release is behind us, we always find ways to keep busy. Here are the highlights of the past four weeks.

In the last few weeks we’ve worked on:

And we’re currently working on:

  • Ubuntu Resources: we’re iterating on the current alpha release, improving the design and adding new features
  • Canonical website: we’re currently exploring design directions and finalising the content for the site
  • Juju GUI: we’re refining the bundle experience and interactions for the 14.04 release
  • Fenchurch: we’ve been improving deployment scripts and asset deployment
  • Live chat trial: we’ve been helping the sales team to test a live chat feature on www.ubuntu.com

We also welcomed a new member of the team: Felipe is the new User Experience Lead. And we’ve learned about Karl’s cage-fighting past.

Team lunch to welcome Felipe and Karl

Have you got any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Let us know your thoughts in the comments.

Read more
Matthieu James

Juju ice-cream icon design

Who doesn’t like ice-cream? Here in the design team we sure do! In the last few weeks we’ve been preparing a special Juju demo for the OpenStack Summit in Hong Kong and we’ve created some very ‘tasty’ icons for it. We thought it would be nice to show you how those icons were created, so here’s a little insight on the design process.

The brief

We wanted to replace the normal Juju icons with something a little bit more special, in order to explain to people who visited the Ubuntu stand what kind of things Juju can do. We decided to use the idea of an ice-cream with toppings and sauce, which you can build in the same way that you can build services in Juju.

The best part of this demo was that people would actually get, in real life, the ice-cream they had ‘built’ in Juju!

The Juju interface, with its default icons

Finding good concepts

The first thing I needed to do was to find good concepts to represent ice-creams and toppings in icon format. Toppings were going to be especially tricky, as they can be very small and therefore hard to make out at small sizes.

I initially sketched and designed some ideas using a flat look. This worked well for the ice-cream, but not so much for the toppings — I soon noticed they had to be semi-realistic to be recognisable.

Initial sketches and designs following a flat, more simplified look

At a second stage, I added perspective to the icons; it was important that the icons kept the same perspective for consistency.

Another set of sketches with added perspective

The shape of the sauce bottles also needed a bit of trial and error. The initial design looked too much like a ketchup bottle, so we decided to try a different approach.

The before and after shape of the sauce bottle

For the backgrounds, I chose to use vibrant colours for the ice-cream icons, to contrast with the ice-creams’ monochrome palette, but paler colours for the toppings, as these are already quite colourful.

The amount of detail added to the icons is just enough for what we needed to show and for them to be recognised. I also added larger pieces to the side of the toppings, to make them easier to identify.

The Oreo topping icon, with a side of Oreos

Working out the detail

The Oreo pieces were created from a single biscuit, which I cut into 9 different parts and then distributed in different layers — I guess in a similar way to what happens in real life.

The 9 pieces used to create the icon

The clone tool in Inkscape came in handy: repeating the same small set of different pieces made the final SVG file much lighter, and also made Inkscape faster.

The whole process took 4 days from brief to final icons, which is quite a tight deadline, but it was a really fun project to work on.

The final icon set

Read more
Inayaili de León Persson

The new Ubuntu Resources

Today we’ve launched the alpha version of our latest project: the Ubuntu Resources website.

This is our first responsive project that follows the mobile-first methodology and we’re very excited to share this with everyone!

As you’ll be able to see, we’re not quite done with it yet, but we wanted to share what we’ve created so far, so we can get feedback and keep improving the design and expanding the features.

The new Ubuntu Resources on an Ubuntu-powered phone

A little bit of background

This project grew from the need to separate content like case studies, news, press releases and events, from the core of the Canonical and Ubuntu sites — and it will eventually replace much of what currently is at insights.ubuntu.com. As the site is designed for reading and engaging with longer pieces of content, we thought it would be the ideal place to explore mobile-first and responsive approaches. And we plan to use what we’ve learned from it to make www.ubuntu.com and our Web Style Guide responsive.

Scaling things down

We started the research phase taking a holistic view of the project, trying to understand what types of content and users we wanted to target. We realised that with limited time and resources we would have to divide the project into different releases, so that we could make sure each aspect of the site was given the attention it deserved.

The first and current release of the site — alpha — focuses solely on small screens. The main goal is for all the content to be accessible; the visual style and features will keep progressing and being added as we go.

Initial wireframes across a variety of screen sizes

Reusing existing styles

One of the challenges in this project was deciding how we were going to integrate the existing Web Style Guide, which we’ve been using internally for a while now and will be made public on design.ubuntu.com soon.

A sneak peek of our Web Style Guide

We decided to use a minimal version of the style guide that kept the Ubuntu Resources’ style coherent with www.ubuntu.com and that we could improve on.

You’ll also notice small details that align with our phone design, like the grid, navigation selection and icons, and we’ll be adding even more in the upcoming releases.

What’s coming

Apart from working on the larger screen versions of the site, some of the things we will be looking into for the next iterations are:

  • the ability to subscribe to different types of content
  • more curated topic landing pages
  • content filtering and sorting
  • cleaner URLs
  • the way we handle PDFs and other file formats
  • more content like a press section

Go and have a look at the site and let us know your thoughts. We want to know what you like and what you think can be improved, or any other comments you might have — we’ve included a handy link to the feedback form at the bottom of every page. Enjoy!

Read more
Christina Li

November Brown Bag lunch

Some of us in the Design team have been gathering on a monthly basis to have lunch together and share things we find interesting.

Today, I’d like to share with you the Brown Bag lunch we had this week.

Vesa shared with us his interest in photography and showed us some of the shots he took over time.

Westminster at night by Vesa (Flickr)

I came across some inspiring research by the Helen Hamlyn Centre for Design at the Royal College of Art in London. The research focused on helping older people use their mobile phones, rather than designing a simpler phone for them to use.

And our challenge of the month was to build the tallest paper tower! Each team had 20 pieces of paper and 6 minutes, with 2 rules:

1. You can only use paper to build your tower.
2. You can tear or fold the pieces of paper.

Well, I’m happy to report that Rachel, Vesa and Olga proudly won this challenge with their paper tower!

How would you build your tower in 6 minutes?

Read more
Tingting Zhao

In the previous post, we talked about how to design effective user testing tasks to evaluate the usability of an interface. This post continues that topic by highlighting a number of key strategies you may need to use when conducting formative user testing, whose main aim is to identify usability problems and propose design solutions, rather than to compare quantitative metrics (summative testing), e.g. task completion time and mouse clicks.

It is unlikely that the prepared task script can be strictly applied without any changes, since the testing situation tends to be dynamic and often unpredictable. To get useful data, you need to be able to adapt your task script with flexibility, while also maintaining consistency.

Changing task orders during testing

Normally, to avoid the order effect, the order in which tasks are issued should be randomised for each participant. The order effect refers to the way the order in which tasks are presented can affect the results: users may perform better in later tasks as their familiarity with the interface increases, or perform worse as they become more fatigued. However, as discussed in the previous post, the tasks are often contextual and dependent on each other, so you need to carefully consider which tasks can be shuffled. It is good practice to mark dependent tasks on the script, so that you know they should not be reordered and separated from each other. In other words, dependent tasks must always be moved together. It is worth noting that randomising the task order may not always be possible, for example when the tasks are procedurally related, such as in a test focusing on a payment flow.
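
To make the grouping rule concrete, here is a minimal sketch (my own illustration, not from the original post) of one way to randomise the task order per participant while keeping dependent tasks together: tasks are kept in groups, only the groups are shuffled, and each group’s internal order is preserved. The task names are examples taken from this post.

```js
// Sketch: per-participant task randomisation that keeps dependent tasks together.

function shuffle(items) {
  // Fisher–Yates shuffle, returning a new array so the original is untouched.
  var result = items.slice();
  for (var i = result.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = result[i];
    result[i] = result[j];
    result[j] = tmp;
  }
  return result;
}

// Each inner array is a block of dependent tasks that must stay in order.
var taskGroups = [
  ['Check the battery level'],
  ['Save a number to the contact book', 'Send a text to that contact'],
  ['Find the music scope']
];

// Flatten the shuffled groups into this participant's task order.
var participantTasks = [].concat.apply([], shuffle(taskGroups));
console.log(participantTasks);
```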

Sometimes you may need to change the task orders by considering their levels of difficulty. This is useful in the following two scenarios: when you notice a participant appears to be slightly nervous before the testing has started, provide a simple first task to put him/her at ease; or when you observe a participant has failed to solve several tasks in a row, provide one or two easy tasks to reduce the frustration and stress, and boost confidence.

Another type of task-order change is made in response to goals that users express spontaneously and that are associated with an upcoming task. For example, in one phone testing session, after a participant checked the battery level, s/he spontaneously expressed a desire to know if there was a way to switch off some running apps to save battery. In this case, we jumped to the task of closing running apps, rather than waiting until later. This makes the testing feel more natural.

Remove tasks during testing

There are typically two situations that require you to skip tasks:

  • Time restriction

  • Questions being answered with previous tasks

Time restriction: user testing normally has a time limit, and participants are paid for a certain length of time. Ideally, all the tasks should be carried out by all the participants. However, sometimes participants take longer to solve tasks, or you may discover areas that require more time for investigation. In these cases, not all the tasks can be performed by a participant within the given time. Consequently, you need to be able to quickly decide which tasks should be abandoned for this specific participant. There are two ways to approach this:

  • Omit tasks that are less important: it is always useful to prioritise the tasks in terms of their importance – which areas have key questions that need to be answered and require feedback, and what could be left for the next test if not covered this time?

  • Omit tasks that have already received abundant feedback: skip tasks for which you have already gathered rich and useful information from other participants.

Questions answered with previous tasks: sometimes the questions associated with a specific task are answered while a participant is attempting to solve a previous task – in this case, you can skip that task.

In one of our phone testing sessions, we asked a participant to send a text to someone not in their contacts (a plumber). During the task-solving process, s/he decided to save the number to the contact book first and then send the text. In this case, we skipped the task of ‘saving a number to the contact book’.

However, sometimes you should not skip a task, even if it might seem repetitive. For example, if you want to test the learnability and memorability of a feature, having the participant perform the same task (with a slightly different description) a second time, after a certain interval, could afford useful insights.

Add tasks during testing

There are three contexts in which you could consider adding tasks:

  • Where the user formulates new goals

  • Re-examinations

  • Giving the user a second chance

The added task must be relevant to the aim of the testing, and should only be included if the testing time permits.

User formulates new goals: you could add tasks based on user-formulated goals in the task-solving process.

For example, in one phone testing session, one participant wondered if s/he could customise the tiles on the Windows phone’s home screen. We made this an added task for her/him. Adding tasks based on goals users articulate themselves follows their thought process and makes the testing more natural. It also provides opportunities for us to discover new information.

Re-examinations: sometimes a user may succeed in a task accidentally, without knowing how s/he did it. In this case, the same task (with a slightly changed description) could be added to re-assess the usability.

For example, in one phone testing session we had this task: “You want to call your sister Lisa to say thank you for the phone”. One participant experienced great difficulties in performing this task, and only completed it after a long time and by accident. In this case, we added another task to re-evaluate the ease of making a phone call:

“Your call is cut off while you are talking to your sister, so now you want to call her again.”

Similarly, in the Gallery app testing, where participants managed to add a picture to a selected album accidentally, we asked them to add another picture to a different album.

Re-examination allows us to judge the impact of a problem accurately, as well as to understand the learnability of the interface – the extent to which users can detect and learn interaction patterns (even by accident), and apply the rules later.

Giving the user a second chance: in the majority of user tests, participants are using the evaluated interface for the first time. It can be very demanding for them to solve the tasks successfully in their first attempt. However, as the testing progresses, participants might discover more things, such as features and interaction patterns (although possibly by accident). Consequently, their knowledge of the interface may increase. In this case, you could give them another chance to solve a task that they failed earlier in the test. Again, this helps you to test the learnability of the interface, as well as assess the impact of a problem.

For example, in a tablet testing, one participant could not find the music scope earlier in the testing, but later s/he accidentally discovered the video scope. To test if s/he now understood the concept of dash scopes, we asked the participant to find the music scope again after several other tasks.

Change task descriptions (slightly) during testing

Information gathered from the brief pre-test interview and from participants’ verbal data during testing can often be used to modify the task description slightly, to make the task more realistic to the user. This also gives the user the impression that you are an active listener and interested in their comments, which helps to build a rapport with them. The change should be minor and limited to the details of the scenario (not the aim of the task). It is important that the change does not compromise consistency with other participants’ task descriptions.

For example, in a tablet testing, where we needed to evaluate the discoverability of the HUD in the context of photo editing, we had this task: “You want to do some advanced editing by adjusting the colour of the picture.” One participant commented that s/he often changed pictures to ‘black and white’ effect. In response to this, we changed the task to “You mentioned that you often change a picture to black and white, and now you want to change this picture to ‘black and white’”. The task change here does not change the aim of the task, nor the requirements for solving the task (in this case, the access to the HUD), but it becomes more relatable to the participant.

Another example is from a phone testing session. We changed the task of “you want to go to Twitter” to “you want to go to Facebook” after learning that the participant used Facebook but not Twitter. If we had continued to ask this participant to find Twitter, the testing would have felt artificial, which would have resulted in invalid data. The aim of the task is to evaluate the ease of navigation in finding an app, so changing Twitter to Facebook does not change the nature of the task.

Conclusions

This post outlines a number of strategies you can use to modify your task script to deal with typical situations that may occur in formative user testing. To sum up:

Changing task orders: randomise tasks for each participant if possible, and move dependent tasks as a whole; consider the difficulty of the tasks and issue an easy task to start with if you feel the participant is nervous, or provide an easy task if the participant has failed several tasks in a row. Allow participants to perform a later task if they verbalise it as a goal or strategy for solving the current task.

Remove tasks: if time is running out with a particular participant, omit certain tasks. These could be tasks with low priority, tasks that have already received enough feedback from other participants, or tasks the participant has already covered while attempting a previous task.

Add tasks: if time permits, allow users to perform a new task if it is a user-initiated goal and is relevant to the testing; repeat a task (with slightly different wording and at an appropriate time) if the user succeeded in it accidentally, or failed it earlier, or if the aim is to test the learnability of the system.

Change task description: slightly amend the details of the task scenario (not the aim of the task) based on users’ verbal data to make it more relatable and realistic to the user. This will improve the reliability of the data.

If you have other ways to manoeuvre the tasks during a testing session, or situations you are unsure about, feel free to share your experience and thoughts.

Read more
Jouni Helminen

App Design Clinic #4

App Design Clinic #4 focuses on icons, with questions from Stuart Langridge including:

  • guidance on creating app icons (stylistically and in terms of file format and resolution)
  • tips on how to use action icons

The presentation deck link will be shared on the blog once it’s been checked by our icon designer, and we hope to have an icons guideline with downloadable templates and full API docs online within a month.

The next clinic is held in conjunction with vUDS. Let’s make it a great one: please send any designs and/or questions to design@ubuntu.com

Read more
Spencer Bygraves

A week in San Francisco

I recently attended my first cloud sprint meeting held in San Francisco, and it turned out to be a great experience. It’s been 10 years since I last visited, so as well as working hard, it was nice to have the opportunity to see the city again.

Whilst there we worked on the UX and visual design for two of our cloud products, which we’ll be able to share with you soon. It was also a great opportunity to spend time with colleagues from around the world, working together during the day and having a few beers in the evening.

In terms of design, we are working to extend the cloud visual language that is being established through the Juju GUI, with a view to having a consistent suite of cloud products.

A post with some cloud designs will follow soon. For now, here are some pictures from our week in San Francisco.

Watch this space!

Discussing Juju and collaborative coding

San Francisco

Read more
Peter Mahnke

So I am stretching the metaphor a bit, but I think it accurately explains my experience of the recent cloud sprint in San Francisco.

The week starts with some presentations and talks about where we are now and where we want to be from a company, marketplace and product perspective. This lasts about two hours, then all 115 of us are set free to figure out what we can do to help best achieve these visions. Things are more organised than at an unconference: there are tracks, rooms and sessions planned, but it is all very fluid. Each day reveals itself and the week gathers its own momentum.

Spencer, Ale and Luca looking at a wireframe
Some people are here to finish off some work and coordinate releases. Some people are trying to plan the next six-month cycle with team-mates they only see a few times a year. Some people have just joined the company, and some people are trying to design for the next year or more. That’s us.

While most here are looking at April, we are brainstorming, paper prototyping, grabbing stakeholders, talking to users, meeting with developers and trying to build that shared vision for a set of products and where they might go in the future — inspiring, chaotic, impossible, crazy, amazing.

A set of wireframes and post-it notes

But we are also trying to finish things off from the last cycle, pay off some technical debt and polish up a few things. We are trying to listen to what else is happening; it all moves so fast. We also sign up to get at least four other smaller things done in the next month.

At the end of the week, a few things are finished. Even better, a few more big things are planned. Dozens of drawings and hundreds of post-it notes are photographed. We shake hands with friends and colleagues we will only talk to online for the next few months, and head home to get building.

Read more
Anthony Dillon

I was recently asked to attend a cloud sprint in San Francisco as a front-end developer for the new Juju GUI product. I had the pleasure of finally meeting the guys I have worked with collaboratively, and ultimately been helped by, on the project.

Here is a collection of things I learnt during my week overseas.

Mocha testing

Mocha is a JavaScript test framework that runs in the browser (and on Node.js) and makes asynchronous testing simple. Previously I found it difficult to imagine a use case for it when developing a site, but I now know that any interactive element of a site could benefit from Mocha testing.

This is by no means a full tutorial or feature list for Mocha, just my findings from a week with the UI engineering team.

Break down small elements of your app or website into logic tests

If you take a system like user login and registration, it is much easier to test each function of the system. For example, if the user hits the signup button, you should test that the registration form is then visible to the user. Then work methodically through each step of the process, testing as many different inputs as you can think of.
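
As an illustration, here is a minimal sketch of what such a test might look like in Mocha, using Chai for assertions. The signup module and its functions are hypothetical, invented for this example rather than taken from any real code base.

```js
// Sketch of a Mocha suite for the signup flow described above.
var expect = require('chai').expect;

// Hypothetical signup module under test.
var signup = require('./signup');

describe('signup form', function () {
  beforeEach(function () {
    signup.reset();  // start each test from a clean state
  });

  it('shows the registration form when the signup button is hit', function () {
    signup.clickSignupButton();
    expect(signup.isRegistrationFormVisible()).to.equal(true);
  });

  it('rejects an invalid email address', function () {
    signup.clickSignupButton();
    var result = signup.submit({ email: 'not-an-email', password: 'secret' });
    expect(result.valid).to.equal(false);
  });
});
```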

Saving your bacon

Testing undoubtedly slows down initial development, but it catches a lot of mistakes and flaws in the system before anything lands in the main code base. It also means you don’t have to manually check each feature again by hand after a change — you simply run the test suite and see the ticks roll in.

Speeds up bug squashing

Bug fixing becomes easier for both the reporter and the developer. If the reporter submits a test that fails due to a bug, the developer gets the full scope of the issue, and once the test passes, the developer and reporter can be confident the problem no longer exists.
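
For instance, a bug report could come with a small failing test like the sketch below (the parseVersion module and the behaviour it describes are hypothetical); once the fix lands and the test passes, both sides know the issue is resolved.

```js
// Sketch of a regression test attached to a bug report.
var expect = require('chai').expect;
var parseVersion = require('./version').parseVersion;

describe('parseVersion()', function () {
  it('handles versions without a patch number', function () {
    // Reported bug: '1.2' used to throw instead of defaulting patch to 0.
    expect(parseVersion('1.2')).to.deep.equal({ major: 1, minor: 2, patch: 0 });
  });
});
```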

Linting

I have read a lot about linting in the past, but have not needed to use it on any of the projects I have worked on to date. So I was very happy to use, and be taught about, the linting performed by the UI engineering team.

Enforces a standard coding syntax

I was very impressed with the level of code standards it enforces. It requires all code to be written in a certain way, from indenting and commenting to unused variables. This results in anyone using the code being able to pick it up and read it as if it was created by one person, when in fact it may have been contributed to by many.
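
As a rough illustration of the kind of rules a JavaScript linter such as JSHint can enforce (this is my own example, not the team’s actual configuration), the snippet below would be flagged for an unused variable and an undeclared one before it could land in the main code base:

```js
/*jshint unused:true, undef:true */
/*global console */

function greet(name) {
  var unusedGreeting = 'Hi';   // flagged: 'unusedGreeting' is defined but never used
  message = 'Hello, ' + name;  // flagged: 'message' is not defined (missing var)
  console.log(message);
}

greet('world');
```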

Code reviews

In my opinion code reviews should be performed on all front-end work to discourage sloppy code and encourage shared knowledge.

Markup

Markup should be very semantic. This can be a matter of opinion, but shared discussion will get the team to an agreed solution, which will then be reused by others in similar situations.

CSS

CSS can be difficult, as there are different ways to achieve a similar result, but with a code review the style used will be common practice within the team.

JavaScript

JavaScript is a perfect candidate, as different people have different methods of coding. A review will catch any sloppy code or shortcuts, and makes sure your code is refactored to best practice the first time.

Conclusion

Test-driven development (TDD) does slow the development process down, but it enforces better output from the time you spend on the code and fewer bugs in the future.

If someone writes a failing test for code that is expected to pass, working on the code until the test passes is a much easier way to demonstrate that the code now works, along with all the other tests for that function.

I truly believe in code reviews now. Previously I was sceptical about them. I used to think that “because my code is working” I didn’t need reviews and that they would slow me down. But a good reviewer will catch things like “it works, but didn’t you take a shortcut two classes ago which you meant to go back and refactor?”. We all want our code to be perfect and to learn from others on a daily basis. That is what code reviews give us.

Read more
Inayaili de León Persson

IKEA’s design process

Graham pointed out a recent Wall Street Journal article to me as I was going on about my recent kitchen renovation (yes, I used IKEA units). It gives a glimpse into IKEA’s ‘painstaking’ and, for me, fascinating design process.

Photo by David

Even though, being IKEA, they can very much define how people will live, they go through long and careful research, which also has to be in line with their strict production processes. The symbiotic relationship between user, design and engineering, and the dedication to improving this relationship, reminded me of the design process that happens here at Canonical and Ubuntu.

Research Manager Mikael Ydholm leads a team that visits thousands of homes annually … and compiles reports from trend spotters and experts that look as far as a decade into the future.

I would love to learn more about IKEA’s design processes and their designers’ work, so if anyone knows of more in-depth articles, videos or books on it, please give me a shout.

Read more