Canonical Voices

Posts tagged with 'user experience'

Jouni Helminen

Malta Sprint

Our Apps and Platform teams took part in a design/engineering sprint on the beautiful island of Malta in May, and we thought we would share some pictures to give a peek behind the scenes at the people working on the apps and the operating system.

The sprint itself was a great experience, with over 150 people, engineers and designers, working together and planning out the next steps. Refined designs for mobile apps such as Browser, Camera and the Telephony suite (Dialer, Contacts and Messaging) were unveiled and implementation got well underway, and on the platform team Scopes are starting to look really beautiful on the phone. There were plenty of tech demos and talks ranging from Cloud to Convergence to Mobile to the Internet of Things – it was great to see everyone hacking, designing and discussing super exciting things together. A good reminder that although Canonical has grown in size, at its core it still feels like a startup, in a good sense.

It is an interesting time coming up to the release of the phone hardware, and these two weeks in Malta were a brilliant opportunity for all teams to sync up, work hard and squeeze in some R&R in the evenings too. Sun, great grilled seafood and the historical buildings of Valletta – it was fantastic to work in such a beautiful setting, and we cannot wait to get all the new goodies into the hands of people.


Carla Berkers

As the number of Juju users has been rapidly increasing over the past year, so has the number of new solutions in the form of charms and bundles. To help users assess and choose solutions we felt it would be useful to improve the visual presentation of charm and bundle details on manage.jujucharms.com.

While we were in Las Vegas, we took advantage of the opportunity to work with the Juju developers and solutions team to find out how they find and use existing charms and bundles in their own deployments. Together we evaluated the existing browsing experience in the Juju GUI and went through JSON files line by line to understand what information we hold on charms.


We used post-its to capture every piece of information that the database holds about a bundle or charm that is submitted to charmworld.


We created small-screen wireframes first to really focus on the most important content and how it could be displayed in a linear way. After showing the wireframes to a couple more people we used our guidelines to create mobile designs that we can scale out to tablet and desktop.

With the grouped and prioritised information in mind we created the first draft of the wireframes.


In order to verify and test our designs, we made them modular. Over time it will be easy to move content around if we want to test whether another priority works better for a certain solution. The mobile-first approach is a great tool for making sense of complex information, and it forced us to prioritise the content around users’ needs.


First version designs.

Giorgio Venturi

With the unstoppable rise of mobile apps, some pundits within the tech industry have hastily demoted the mobile web to a second-class citizen, or even dismissed it as ‘dead’. Who cares about websites and webapps when you can deliver a superior user experience with a native app?

Well, we care, because the reality is a bit different. New apps are hard to discover; their content is locked, with no way to access it from the outside. People browse the web more than ever on their mobile phones. The browser is the most used app on the phone, both as a starting point and as a destination in the user journey.

Installing
Source: xkcd

At Ubuntu, we decided to focus on improving the user experience of browsing and searching the web. Our approach is underpinned by our design principles, namely:

  1. Content is king: the UI should recede into the background once the user starts interacting with content.
  2. Leverage natural interaction by using gestures and spatial metaphors.

In designing the browser, there’s one more principle we took into account. If content is our king, then recency should be our queen.

Recency is queen

People forget about things. That’s why tasks such as finding a page you visited yesterday or last week can be very hard: UIs are not designed to support the long-term memory of the user. For example, when browsing tabs on a smartphone touchscreen, it is hard to recognise what’s on screen because we have forgotten what the page is and why we arrived there.

Similarly, bookmarks are often a meaningless list of webpages, as their value was linked to the specific time when they were created. For example, let’s imagine we are planning our next holiday and we start bookmarking a few interesting places. We may even create a new ‘holidays’ folder and add the bookmarks to it. However, once the holiday is over, the bookmarks are still there; they don’t expire once they have lost their value. This happens pretty much every time: old bookmarks and folders eventually start cluttering our screen and make it difficult to find the information we need.

Therefore we redesigned tabs, history and bookmarks to display the most recent information first, which simplifies both the display and the retrieval of information.

Browser tabs

In our browser, the most recent tabs come first. Here is how it works:

Browser tabs

In this way, users don’t have to painstakingly browse an endless list of tabs that may have been opened days or weeks ago, as in Mobile Safari or Chrome.

History

Browser history has not changed much since Netscape Navigator: modern browsers still display a chronological log of all the web pages we visited, starting from today. Finding a website or a page is hard because of the sheer amount of information. In our browser we employ a clustered model that displays the most recently visited websites, not every single page. Tapping a website then displays all of its pages, starting from the most recent. This makes scanning the history log much easier and less painful.
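As a rough illustration of that clustered model (a sketch, not the browser’s actual implementation), here is some minimal Python with invented sample data: the most recently visited sites come first, and each site expands into its pages, newest first.

```python
from collections import OrderedDict

# Invented history entries: (timestamp, website, page title)
history = [
    (1, "bbc.co.uk", "BBC News"),
    (2, "xkcd.com", "Installing"),
    (3, "bbc.co.uk", "BBC Food"),
    (4, "ubuntu.com", "Ubuntu Phone"),
]

def clustered_history(entries):
    """Group visits by website, most recently visited site first;
    within each site, the most recent page comes first."""
    clusters = OrderedDict()
    # Walk the log newest-first so both the site order and the page
    # order inside each cluster reflect recency.
    for _, site, title in sorted(entries, reverse=True):
        clusters.setdefault(site, []).append(title)
    return clusters

for site, pages in clustered_history(history).items():
    print(site, "->", pages)
# ubuntu.com -> ['Ubuntu Phone']
# bbc.co.uk -> ['BBC Food', 'BBC News']
# xkcd.com -> ['Installing']
```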

Browser history

Loving the bottom edge

We believe the bottom edge is the most pleasurable edge to use. It is easily accessible at any time and ergonomically friendly to the typical one-hand phone hold. Once discovered, it will slowly build into our muscle memory and become a natural and intuitive way of interacting with the application.

Bottom edge

This is why we combined tabs and history and made them accessible through the bottom edge. As a team, we spent months building and refining a sleek, intuitive and fluid user experience.

Here’s a sneak preview of how it will look:


Video: Browser interactions

The bottom edge gesture will have three stages (illustrated in the sketch after the list):

  1. Dragging from the bottom edge will hint at and then reveal the most recently viewed tab.
  2. Continuing to drag reveals the full tab spread.
  3. Dragging further still fully reveals the browser history.
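The thresholds below are invented for illustration – the post doesn’t specify the real gesture tuning – but this minimal Python sketch shows how a drag distance could map onto the three stages:

```python
# Invented thresholds, expressed as the fraction of the screen height
# the finger has travelled from the bottom edge.
HINT_THRESHOLD, SPREAD_THRESHOLD = 0.15, 0.55

def bottom_edge_stage(drag_fraction):
    """Map a drag from the bottom edge (0.0 = untouched, 1.0 = a full
    screen height) to one of the three stages described above."""
    if drag_fraction < HINT_THRESHOLD:
        return "hint: the most recently viewed tab peeks in"
    if drag_fraction < SPREAD_THRESHOLD:
        return "spread: the full set of tabs is revealed"
    return "history: the browser history is fully revealed"

for drag in (0.10, 0.40, 0.80):
    print(f"{drag:.2f} -> {bottom_edge_stage(drag)}")
```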

All elements will support gestural interaction: users can swipe to delete a tab or remove a website from history.

That’s all for now. In the next blog post, we will talk more about gestural interaction in Browser. Stay tuned!

Alejandra Obregon

Last week a few of us flew to Las Vegas for a Juju sprint at the world-famous Flamingo casino (where Hunter S. Thompson stayed in Fear and Loathing).

It was the first time in Las Vegas for most of us so we weren’t quite sure what to expect…


And while there were plenty of distractions within reach at any stage…




…we managed to get through a large amount of work!




The focus of the sprint was to explore ideas and define specs for work we will be delivering in the next six months. Amongst other things we covered topics such as:

  • A new search and browse experience for charms and bundles
  • The best way to prioritise and present information to help users assess and select charms and bundles. For this we employed a mobile-first methodology. Carla will be writing more about this in an upcoming post
  • How to improve the Juju service block
  • Lots of other exciting features we should be able to unveil soon!

So by the end of the sprint we felt a little bit more like this…

If you want to find out more about Juju visit Ubuntu.com

Or have a play with Juju itself! Juju is the quickest way to deploy services to any cloud running Ubuntu.

We are currently hiring designers, UX consultants and engineers to work on Juju. Maybe you could come along to Vegas next time!

Mark Shuttleworth

Every detail matters, and building great software means taking time to remove the papercuts. Ubuntu has over the past 5 years been refined in many ways to feel amazingly comfortable on the cloud. In the very early days of EC2 growth the Ubuntu team recognised how many developers were enjoying fast access to infrastructure on demand, and we set about polishing up Ubuntu to be amazing on the cloud.

This was a big program of work; the Linux experience had many bad assumptions baked in – everything had been designed to be installed once on a server then left largely untouched for as long as possible, but cloud infrastructure was much more dynamic than that.

We encouraged our team to use the cloud as much as possible, which made the work practical and motivated people to get it right themselves. If you want to catch all the little scratchy bits, make it part of your everyday workflow. Today, we have added OpenStack clouds to the mix, as well as the major public clouds. Cloud vendors have taken diverse approaches to IAAS so we find ourselves encouraging developers to use all of them to get a holistic view, and also to address any cloud-specific issues that arise. But the key point is – if it’s great for us, that’s a good start on making it great for everybody.

Then we set about interviewing cloud users and engaging people who were deep into cloud infrastructure to advise on what they needed. We spent a lot of time immersing ourselves in the IAAS experience through the eyes of cloud users – startups and industrial titans, universities and mid-sized, everyday companies. We engaged the largest and fastest-moving cloud users like Netflix, who have said they enjoy Ubuntu as a platform on the cloud. And that in turn drove our prioritisation of papercuts and significant new features for cloud users.

We also looked at the places people actually spend time developing. Lots of them are on Ubuntu desktops, but Windows and MacOS are popular too, and it takes some care to make it very easy for folks there to have a great devops experience.

All of this is an industrial version of the user experience design process that also powers our work on desktop, tablet and phone – system interfaces and applications. Devops, sysadmins, developers and their managers are humans too, so human-centric design principles are just as important on the infrastructure as they are on consumer electronics and consumer software. Feeling great at the command line, being productive as an operator and a developer, are vital to our community and our ecosystem. We keep all the potency of Linux with the polish of a refined, designed environment.

Along the way we invented and designed a whole raft of key new pieces of Ubuntu. I’ll write about one of them, cloud-init, next. The net effect of that work makes Ubuntu really useful on every cloud. That’s why the majority of developers using IAAS do so on Ubuntu.

Daniel Oliver

New Apps header

The new apps header features a maximum of four slots that can be arranged and combined to fulfil user needs on every screen.

Header_slots

Header’s values

We want to provide our users with the right amount of contextual information for them to know:

1. Where they are (inside the app, in a particular view).

2. Where they can go inside the app in order to find content (navigating across different views).

3. What they can achieve in any given view (compose a message, crop a picture…).

The new header provides clarity, by always showing users where they are; consistency, by providing a way to navigate across the main views inside the apps; and priority, by surfacing the most important actions on every screen.

header_balance

Header’s elements

The elements are the building blocks of the header: the controls that can be placed inside the slots mentioned above.

There are different categories of elements, and each of them has to be positioned carefully in the header in order to create slick experiences across our apps.

header_glossary


Title

One of the main values behind the new header is Clarity: we want the user to be clear about where they are at any moment.

That’s why the only mandatory element for our header is the title; you can leave some other slots empty, but every header has to have a title.

header_title

Tabs

A tab is a control that allows users to navigate across views directly from the header.

The main views of your app are the different facets of how its content is organised and visualised.

header_telephony

Example:

Our telephone app has two main views: Dialer and Contacts. Placing tabs in the telephony header allows users to toggle between these two views quickly.

Tabs placement

Place the tabs to the right of the title.

According to our interface values, “right” means moving forward, and that’s precisely what a tab does: it moves forward to the next view, represented by the tab icon.

header_actions_tabs

Actions

Actions allow users to accomplish a direct goal on every screen (compose a message, edit, crop a picture…). Give priority to the actions that will be used most often and place them in the header.

header_AB

Example: Our address book app has a clear primary action, which is adding a new contact to the list. Placing that action directly in the header lets users accomplish that goal more quickly and smoothly.

Actions placement

Place the actions to the right of the title as well.

header_actions_placement

If you want to mix tabs and actions in the same header, keep the tabs as close to the title as possible, creating a natural block for navigating across the views; place the actions after them.

header_actions_tabs

Back

Beyond the main views of your app, subsequent views use a back button in the header to navigate backwards. Back always returns to the previous view of the app, until the user reaches the main view again.

header_gallery

Example: Our gallery app has three main views: Photos, Events and Albums. Once the user gets to a detail view, the header of that view has a back button that returns the user to the main view they came from.

Back placement

There’s only one place where you can put the back button, and that’s the top left slot. According to our interface values, that’s a place where the user has to intentionally stretch a finger and make an effort to trigger it.

header_back_placement

Drawer

So we’ve already introduced a few elements, but what happens when there aren’t enough free slots in the header to place all your tabs and actions? Our solution is the drawer: an overflow where users will find all the controls not available directly in the header.

header_drawer_g

Example: Our gallery app has three main views: Photos, Events and Albums, and it also has the “take a picture” action in the header. In order to keep the header clear, we’ve decided to place the main views inside a drawer and surface “take a picture” in the header. In this particular case, the drawer contains the main views of the app.

Inside the drawer

The drawer can contain some of the elements that couldn’t fit in the header’s slots. If the drawer is placed in the top left slot, it will contain tabs (main views); if it is placed in the top right slot, it will contain extra actions.

header_drawer


Drawer placement

The drawer works as a metaphorical extension of the header, so placing it in the first or last slot helps reinforce that idea.

header_drawer_placement


Search

Search is a special action that allows users to rapidly locate a desired piece of content. And since search can be a really important use case in apps, we are providing a special experience for it. Triggering search transforms the standard header into a search header, displaying the OSK (on-screen keyboard) at the same time and removing the focus from the content. (For more information on search, read the search pattern.)

header_search

Example: Our notes app presents search as one of the main actions in the header. Once the user taps the search icon, the header transitions to the search header.

Search placement

There’s only one place in the header where you can place search: the top right slot.


header_search_placement


Implication for the drawer: in a scenario where you need a back button, a drawer and search, search needs to stay in the top right slot in order to reinforce the search pattern across the whole system.


header_search_p_2

Header layout

The four slots on the header can be arranged as follows:

Layout A

One slot to the left of the title and a maximum of two slots on the right.

header_gallery_A

When to use it

  • You need to use a Back button in order to display detail screens for your app content.
  • Your app has a large number of main views and you need the drawer to display all of them.
  • You prefer to use the slots on the right to display actions, in which case you have to use a drawer to hold the main views.

Layout B

A maximum of three slots to the right of the title.

header_telephony_B

When to use it

  • You don’t need a back button.
  • You want to place tabs on the right so that users can switch views easily.
  • Most of the actions to be performed in the app are contextual (related to the content) and there’s no need to surface those actions in the header.
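To tie the placement rules together, here is a hypothetical Python sketch that checks a header layout against them. The slot and element names are ours, invented for the example, and not part of the SDK:

```python
def validate_header(slots):
    """Check a header layout (up to four slots, listed left to right)
    against the placement rules above; returns a list of violations."""
    errors = []
    if len(slots) > 4:
        errors.append("a header has a maximum of four slots")
    if slots.count("title") != 1:
        errors.append("the title is mandatory: every header has exactly one")
        return errors
    last = len(slots) - 1
    for i, element in enumerate(slots):
        if element == "back" and i != 0:
            errors.append("back can only live in the top left slot")
        if element == "search" and i != last:
            errors.append("search can only live in the top right slot")
        if element == "drawer" and i not in (0, last):
            errors.append("the drawer extends the header: first or last slot only")
    # Tabs form a block next to the title; actions come after them.
    after_title = slots[slots.index("title") + 1:]
    if "tab" in after_title and "action" in after_title:
        if after_title.index("action") < after_title.index("tab"):
            errors.append("keep tabs next to the title and place actions after them")
    return errors

print(validate_header(["back", "title", "action", "search"]))  # []
print(validate_header(["title", "action", "tab", "search"]))
# ['keep tabs next to the title and place actions after them']
```

The checks mirror the rules above: a mandatory title, back only top left, search only top right, the drawer at either end, and tabs kept as a block beside the title with actions after them.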

Behaviour

According to our user interface values, content is always the priority; that’s why the header is just a tool that disappears when users don’t need it. Scrolling down makes the header disappear; scrolling up slides it back in.


header_behav

There might be scenarios where users need the header present at all times (e.g. a header with tabs); in such justified cases, it’s possible to fix the header on the screen.


Katie Taylor

App Design Clinic #8

This week we dedicated the short clinic to sizing, and ensuring widgets and items are usable (touchable).

We covered…

  • The Ubuntu grid unit – for more information, see http://developer.ubuntu.com/api/qml/sdk-1.0/UbuntuUserInterfaceToolkit.resolution-independence/
  • Minimum touch target size – 4×4 gu (see the sketch after this list)
  • A sneak preview of the updated widgets coming to Ubuntu Touch
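To make the sizing rule concrete, here is a minimal Python sketch. The pixels-per-grid-unit ratios are illustrative assumptions – the real mapping is device-specific and provided by the Ubuntu UI Toolkit (units.gu() in QML) – but the 4×4 gu minimum is the rule from the clinic.

```python
# Assumed pixels-per-grid-unit ratios for two hypothetical devices;
# real values come from the toolkit's resolution-independence layer.
PX_PER_GU = {"laptop": 8, "high_dpi_phone": 18}

def gu_to_px(gu, device="high_dpi_phone"):
    """Convert resolution-independent grid units to device pixels."""
    return gu * PX_PER_GU[device]

def is_touchable(width_gu, height_gu, minimum_gu=4):
    """Check a widget against the 4x4 gu minimum touch target size."""
    return width_gu >= minimum_gu and height_gu >= minimum_gu

print(gu_to_px(4))          # 72 px on the assumed high-DPI phone
print(is_touchable(3, 6))   # False: tall enough, but too narrow to tap
```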

If you missed it, or want to watch it again, here it is:


The next App Design Clinic will be on Wednesday 26th February. Please send your questions and screenshots to design@canonical.com by 1pm UTC on Tuesdays to be included in the following Wednesday’s clinic.

Carla Berkers

I’d like to share my experience working on the project that has been my main focus over the past months: the redesign of canonical.com.

Research methods

As I started talking to people in the design department I quickly discovered we have a lot of information about our site visitors. My colleagues helped me access Google Analytics data and findings from previous user testing sessions. There was also a research-based set of personas that helped me put together an initial overview of user needs and tasks.

I was curious to try to validate these user needs and test them against Canonical’s current business requirements. In order to find out more about the company goals I prepared a stakeholder interview script and started putting together a list of people to talk to. I initially planned to limit the number of interviewees to about six to eight stakeholders, as too many opinions could potentially slow down the project and complicate the requirements.

Getting to know the company

I started with eight people to talk to, but with each interview I found out about other people I should add to my list. At the same time, many of the interviewees warned me that every person I talked to would have different ideas about the site requirements. By the end of the first round of interviews, ending up with too many stakeholders had become the most commonly mentioned risk to the project finishing on time.

I had very diverse conversations about different aspects of the site with a range of people. From strategic insights from our CEO Jane, to brand guidelines and requirements from our Head of Design Marcus and ideas around recruitment from our HR business partner Alice — each conversation brought unique requirements to light.

After I spoke to about fifteen people I summarised the key points from each stakeholder on post-it notes and put them all up on a wall in one of the meeting rooms in the office. As I took out the duplicates and restructured the remaining notes, I began to see a familiar pattern.

Conclusions

When I finished grouping the different audiences, I ended up with these groups of users: enterprise customers, (potential) partners, job seekers, media (a varied group that includes press, tech analysts and bloggers), open source advocates, and the more general tech enthusiasts who want to know more about the company backing Ubuntu.

As these groups aligned very well with the personas and other pieces of research I had found, I felt comfortable continuing my process by moving on to the user needs and site goals that will help build a good site structure and generate useful content for each group of users.

I found that talking to many experts from within the company helped me quickly understand the full range of requirements, saving me time rather than making my job more complicated. Furthermore, I was happy to have the chance to get to know people from different parts of the company so soon after I started.

In order to keep the project moving forward, we appointed one key stakeholder to sign off each step of the process, but I’m looking forward to showing the end results to the broader group to see if I managed to meet all their expectations. We will also conduct user testing to ensure the site answers our core audiences’ questions and allows them to complete their tasks.

I hope to share more about this project in the months to come.

Christina Li

On 19-21 November we had our vUDS (virtual Ubuntu Developer Summit), where we got to discuss and share with the community some of the design work we’ve been doing recently.

Our topics ranged from our design blog to convergence designs to Juju GUI cloud to icon designs!

If you missed any of our sessions, don’t worry. They are all below for you to check out!

Design Blog

Love our blog? How can we make it better? What topics would you like to see?

Responsive Design

Hear about our thoughts on converging our patterns, components and designs from phone to tablet to desktop.

App Design Clinic

Every two weeks, we gather to talk about app designs and patterns. If you are developing an app or have any questions on apps, let us know!

Designing a responsive website and web guide

We talked about the process of designing a responsive website and shared the current web style guide we have been using for the main Ubuntu.com site.

Research on Windows and Android usability

Juju GUI design evolution

User research has informed the way Juju GUI has changed over the last year. Here is the evolution of Juju GUI.

Designing icons for Ubuntu

We have been designing icons for Ubuntu Phone, Tablet and Desktop. Check them out!

Let us know what you think, or any suggestions on what you would like to see from the Design team at the next vUDS!

Tingting Zhao

In the previous post, we talked about how to design effective user testing tasks to evaluate the usability of an interface. This post continues the topic by highlighting a number of key strategies you may need when conducting formative user testing, whose main aim is to identify usability problems and propose design solutions, rather than to compare quantitative metrics such as task completion time and mouse clicks (summative testing). It is unlikely that the prepared task script can be applied strictly without any changes, since the testing situation tends to be dynamic and often unpredictable. To get useful data, you need to be able to adapt your task script flexibly while also maintaining consistency.

Changing task orders during testing

Normally, to avoid the order effect, the order in which tasks are issued should be randomised for each participant. The order effect refers to the influence that the order in which tasks are presented can have on the results; specifically, users may perform better in later tasks as their familiarity with the interface increases, or perform worse as they become fatigued. However, as discussed in the previous post, tasks are often contextual and dependent on each other, so you need to carefully consider which tasks can be shuffled. For example, it is good practice to mark dependent tasks on the script, so that you know they should not be reordered and separated from each other; in other words, dependent tasks must always be moved together. It is worth noting that randomising task order may not always be possible, for example when the tasks are procedurally related, such as in a test focusing on a payment flow.
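As a rough illustration of that bookkeeping, here is a minimal Python sketch that randomises the issue order per participant while keeping marked dependent tasks together. The task names are invented for the example:

```python
import random

# Each inner list is a group of dependent tasks that must stay
# together, in order; singleton groups are free-standing tasks.
task_groups = [
    ["check the battery level"],
    ["log in to the account", "log out of the account"],  # dependent pair
    ["find a chicken curry recipe"],
    ["take a photo"],
]

def randomised_task_order(groups, rng=random):
    """Return a per-participant task order: the group order is shuffled
    to counter order effects, but each dependent group stays intact."""
    order = list(groups)  # copy, so the master script is untouched
    rng.shuffle(order)
    return [task for group in order for task in group]

for participant in range(2):
    print(participant, randomised_task_order(task_groups))
```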

Sometimes you may need to change the task order in light of the tasks’ levels of difficulty. This is useful in the following two scenarios: when you notice a participant appears slightly nervous before the testing has started, provide a simple first task to put him/her at ease; or when you observe a participant has failed to solve several tasks in a row, provide one or two easy tasks to reduce the frustration and stress, and boost confidence.

Another type of change of task order is made in response to goals users spontaneously express that are associated with an upcoming task. For example, in one phone testing, after a participant checked the battery level, s/he spontaneously expressed a desire to know if there was a way to switch off some running apps to save battery. In this case, we jumped to the task of closing running apps, rather than waiting until later. This makes the testing feel more natural.

Remove tasks during testing

There are typically two situations that require you to skip tasks:

  • Time restriction

  • Questions being answered with previous tasks

Time restriction: user testing normally has a time limit, and participants are paid for a certain length of time. Ideally, all the tasks should be carried out by all the participants. However, sometimes participants take longer to solve tasks, or you may discover areas that require more time for investigation. In such cases, not all the tasks can be performed by a participant within the given time, and you need to be able to decide quickly which tasks should be abandoned for that specific participant. There are two ways to approach this:

  • Omit tasks that are less important: it is always useful to prioritise the tasks in terms of their importance – what are the most important areas that have key questions that need to be answered and require feedback; what could be left for the next testing, if not covered this time?

  • Omit tasks that have already received abundant feedback: skip tasks for which you have already gathered rich and useful information from other participants.

Questions were answered with previous tasks: Sometimes questions associated with a specific task would be answered while a participant was attempting to solve the previous task – in this case, you could skip this task.

In one of our phone testings, we asked a participant to send a text to a non-contact (a plumber). During the task-solving process, s/he decided to save the number to the contact book first and then send a text. In this case, we skipped the task of ‘saving a number to contact book’.

However sometimes you should not skip a task, even if it might seem repetitive. For example, if you want to test the learnability and memorability of a feature, having the participant perform the same task (with slightly different descriptions) for the second time (after a certain time duration) could afford useful insights.

Add tasks during testing

There are three contexts in which you could consider adding tasks:

  • Where the user formulates new goals

  • Re-examinations

  • Giving the user a second chance

The added task must be relevant to the aim of the testing, and should only be included if the testing time permits.

User formulates new goals: you could add tasks based on user-formulated goals in the task solving process.

For example, in one phone testing, one participant wondered if s/he could customise the tiles on the Windows phone’s home screen. We made this an added task for her/him. Adding tasks based on users’ newly articulated goals follows their thought process and makes the testing more natural. It also provides opportunities for us to discover new information.

Re-examinations: sometimes users may succeed at a task accidentally, without knowing how they did it. In this case, the same task (with a slightly changed description) can be added to re-assess the usability.

For example, in one phone testing, we had one task: “You want to call your sister Lisa to say thank you for the phone”. One participant experienced great difficulties in performing this task, and only completed it after a long time and by accident. In this case, we added another task to re-evaluate the ease of making a phone call:

“Your call is cut off while you are talking to your sister, so now you want to call her again.”

Similarly, in the Gallery app testing, where participants managed to add a picture to a selected album accidentally, we asked them to add another picture to a different album.

Re-examination allows us to judge accurately the impact of a problem, as well as to understand the learnability of the interface – the extent to which users can detect and learn interaction patterns (even by accident) and apply the rules later.

Giving the user a second chance: in most user testing, participants are using the evaluated interface for the first time. It can be very demanding for them to solve the tasks successfully on their first attempt. However, as the testing progresses, participants may discover more things, such as features and interaction patterns (possibly by accident), and consequently their knowledge of the interface may increase. In this case, you could give them another chance to solve a task they failed earlier in the test. Again, this helps you test the learnability of the interface, as well as assess the impact of a problem.

For example, in a tablet testing, one participant could not find the music scope earlier in the testing, but later s/he accidentally discovered the video scope. To test if s/he now understood the concept of dash scopes, we asked the participant to find the music scope again after several other tasks.

Change task descriptions (slightly) during testing

Information gathered from the brief pre-testing interview and from participants’ verbal data during testing can often be used to modify the task description slightly to make the task more realistic to the users. This also gives the user the impression that you are an active listener and interested in their comments, which helps to build a rapport with them. The change should be minor and limited to the details of the scenario (not the aim of the task). It is important that the change does not compromise consistency with other participants’ task descriptions.

For example, in a tablet testing, where we needed to evaluate the discoverability of the HUD in the context of photo editing, we had this task: “You want to do some advanced editing by adjusting the colour of the picture.” One participant commented that s/he often changed pictures to ‘black and white’ effect. In response to this, we changed the task to “You mentioned that you often change a picture to black and white, and now you want to change this picture to ‘black and white’”. The task change here does not change the aim of the task, nor the requirements for solving the task (in this case, the access to the HUD), but it becomes more relatable to the participant.

Another example is from a phone testing. We changed the task of “you want to go to Twitter” to “you want to go to Facebook” after learning that the participant uses Facebook but not Twitter. If we had continued to ask this participant to find Twitter, the testing would have become artificial, resulting in invalid data. The aim of the task is to evaluate the ease of navigation in finding an app, so changing Twitter to Facebook does not change the nature of the task.

Conclusions

This post outlines a number of main strategies you could use to modify your task script to deal with typical situations that may occur in a formative user testing. To sum up:

Changing task orders: randomise tasks for each participant if possible, and move dependent tasks as a whole; consider the difficulty of the tasks, and issue an easy task to start with if you feel a participant is nervous, or provide an easy task if a participant has failed several tasks in a row. Allow them to perform a later task if they verbalise it as a goal/strategy for solving the current task.

Remove tasks: if time is running out with a particular participant, omit certain tasks. This could be tasks with low priorities; tasks that already have enough feedback from other participants; or tasks the participant has already covered while attempting a previous task.

Add tasks: if time permits, allow users to perform a new task if it is a user-initiated goal and is relevant to the testing; repeat a task (with slightly different wording and at an appropriate time) if the user succeeded at a task accidentally, or failed it earlier, or if the aim is to test the learnability of the system.

Change task description: slightly amend the details of the task scenario (not the aim of the task) based on users’ verbal data to make it more relatable and realistic to the user. This will improve the reliability of the data.

If you have other ways to manoeuvre the tasks during the testing session, or have situations you are unsure about, feel free to share your experience and thoughts.

Inayaili de León Persson

Release month is always a busy one for the web team, and this time was no exception with the Ubuntu 13.10 release last week.

In the last few weeks we’ve worked on:

  • Ubuntu 13.10 release: we’ve updated www.ubuntu.com for the latest Ubuntu release
  • Updates to the new Ubuntu OpenStack cloud section: based on some really interesting feedback we got from Tingting’s research, we’ve updated the new pages to make them easier to understand
  • Canonical website: Carla has conducted several workshops and interviews with stakeholders and has defined key audiences and user journeys
  • Juju GUI: on-boarding is now ready to land in Juju soon
  • Fenchurch (our CMS): the demo services are fixed and our publishing speed has seen a 90% improvement!

And we’re currently working on:

  • Responsive mobile pilot: we’ve been squashing the most annoying bugs and it’s now almost ready for the public alpha release!
  • Canonical.com: with some of the research for the project already completed, Carla will now be working on creating the site’s information architecture and wireframing its key sections
  • Juju GUI: Alejandra, Luca, Spencer, Peter and Anthony are in a week-long sprint in San Francisco for some intense Juju-related work (lucky them!)
  • developer.ubuntu.com: we have been working with the Community team to update the site’s design to be more in line with www.ubuntu.com and the first iteration will be going live soon
  • Fenchurch: we are now working on a new download service

Release day at the Canonical office in London

Have you got any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Add your thoughts in the comments.

Luca Paulina

Over the last year we have been working on the Juju GUI to reach a broader audience. Juju is a way of building complex cloud environments: it connects different services, allows complex configuration and offers the ability to scale out quickly and easily. Juju is offered as a command line tool or as a GUI on the web.

The team

For the last 6 months a small dedicated team has been working together to push the design of the Juju GUI forward. The design team consists of 2 user experience designers, Alejandra and Luca, and 2 visual designers, Jamie and Spencer. The project has raised many questions and one of them was what it is like designing a product you don’t use. In this blog post Jamie and Luca attempt to clarify our process.

No assumptions

Luca: As a user experience designer, part of my process is to create assumptions to further thought, design and development; these are later validated in interviews with stakeholders, in user testing or with the development team. An assumption is something that is generally accepted as being true without proof. I’ll never be a direct user of Juju, so creating assumptions for the type of audience the Juju GUI is designed for is an interesting challenge.

To help build assumptions, ideate and create cohesive user flows that will later be tested, I’ve had to run planned and impromptu workshops, ask questions, have daily hangouts with the development team, run week-long sprints, ask more questions and lock myself away in the Juju war room to immerse myself in the world of Juju.

Juju_war_room

Jamie: From a visual perspective, this digital product is unlike anything I’ve worked on in the past. While the usual rules of typography, hierarchy and readability apply to the design, I’ve found myself far more focused on subtle detailing and refinement than ever before. This is because users of the GUI want to complete tasks; they want to be able to deploy their environments as quickly and painlessly as possible. So the design job became about helping them do that without the GUI getting in the way. It is intended to lie lightly across the canvas, aiding users when they need it and not obstructing them when they don’t.

The Juju GUI

Extensive and continuous research

Luca: I’m always surprised by the sheer amount of complexity that the GUI entails. The varying needs of our core target audiences means that we have to conduct a lot of research when we create user flows, ideas and when we’re examining if a feature is needed. Thankfully we have a great user research team which helps find users, conducts the testing and helps interpret the results.

I’ve found that with this particular product the interpretation of feedback has been key to making sure our designs resonate with our users. The feedback is catalogued in a document and shared out amongst the development teams to gain their insights and ideas as well. Solutions are then ideated, and the design team acts upon them, creating new designs.

Jamie: The user testing results and feedback from the community have been key to the development of the visual style for the GUI. We’ve been through numerous rounds of testing to get to this stage of design development, and each round of tests has moved the design forward. Once a round of testing has been completed, the team reviews the findings and creates design tasks to solve any issues highlighted by the testers. The users we’ve tested with have been high-level cloud architects and system administrators, so they are familiar with the type of tasks the GUI performs, just not with the way we perform those tasks in the GUI. Assumptions we’ve made about the way they would use the GUI have sometimes been mistaken, so the design really has been guided by the users.

Evolution of Juju’s interface

Constant validation from a multidisciplinary team

Luca: Throughout the project the need to validate concepts and ideas has been incredibly important. The agile process we use allows us to create wireframes and designs quickly, get them in front of the dev team and get their insight and feedback; we’re lucky enough to have a near 24-hour working cycle (teams in Europe, North America and Australasia). Because of this it’s not uncommon for a design to go through many iterations in a week. For example, the inspector wireframes (pictured below) went through 9 revisions in 10 working days; the complexity of the inspector design and experience was refined and finessed collaboratively with the development team, which has turned the inspector into an integral and very powerful part of the GUI.

Detailed wireframes for the inspector

Jamie: Working within an agile process has meant that design decisions need to be made quickly and collaboratively within the team. The design team in London is small, so we can share work internally and move designs on, sometimes multiple times a day. This lets us keep up with a development cycle that releases every 2 weeks, and it means users can see the design evolve far faster than if they had to wait for a yearly or biannual release of the product. As a designer it’s been hard seeing the product released before it’s pixel perfect, but we’re working hard to craft, fine-tune and round off its edges so that it will be a beautiful thing to use and interact with at each new release.

Visual iterations of the inspector

Questioning language and terminology

Luca: Juju is expanding into a new field: creating clouds by managing services, not machines. This means there really isn’t an established language framework we can rely on, and one thing that has been apparent over the last 6 months is the importance of terminology and language for developers. At the beginning of the project it was difficult and time-consuming to learn the established vocabulary associated with the cloud and Juju, which gave us a great reason to start questioning the words and terms used throughout the GUI. We uncovered words that were already established in other web services and words that didn’t connect with the user. Questioning these words and terms made it clear that if we (as non-users) didn’t understand them, users wouldn’t either, and it allowed us to finesse the language in the GUI into something more appropriate.

Good design principles and patterns

Jamie: The GUI is not just the work of the Cloud team. To harmonise the look of the products in the Canonical stable, we’ve worked closely with the design team developing the phone OS, looking for ways that the design patterns they have developed can be applied to the Juju GUI. We’ve also worked with the Web team to see where we can integrate elements from their UI library. The GUI is a product, but it’s not a mobile OS; equally, we interact with it in a desktop web browser, but it’s not a website, so it ultimately has to have its own look. But by pooling the collective design wisdom of teams who have been crafting interactions in their specific fields, and by using patterns and guidelines already defined in this space, we can create an interface that is better than the sum of these parts but has its own clear voice.

Good design practices

Jamie: We like to sketch here. We sketch everything out before any work is done on screen, and it’s enormously useful for iterating quickly through the problems users have and coming up with multiple solutions in a collaborative way. With a small team we can sketch our way through multiple problems towards multiple solutions, and then move into applications like Photoshop and Illustrator once we’ve got a clear UX direction. This fast way of working also allows us to keep pace with the development cycle and add features to the GUI with each release. Once a feature of the GUI is open to the world we gather feedback, and then it’s back to the drawing board to refine it.

UX sketching during a recent sprint

Playing to our strengths

Luca: Most of the processes to provision, create and manage services in the cloud are currently carried out via the command line. A priority for us has been to think about how we can use visual language to provide a layer of information and understanding not readily available via the command line. As designers we understand that with colour, structure, layout and flow we can communicate the status of a system or process in a very powerful way. We have made it our goal to bring out the strengths of the GUI by exploring visual metaphors and relationships. We established that the command line is an input-output tool; the GUI doesn’t have that type of interaction and offers a more holistic approach, which we deliver through a clear hierarchy and concise user flows. Early on in the project we made it a principle not to compete with the command line but to embrace it: there are users out there who will use Juju just as a command line tool, just as the GUI, or as a mix of both.

Playful icons help users navigate the GUI

Final thoughts

Pretty much everyone in the team has been involved in the conceptual stage of the project; this has helped us create a cohesive product with some really powerful features. I’m sure there are a lot of designers out there working on designs for products that they won’t end up using. We wanted to take the time to highlight how we’ve approached this problem while working on the Juju GUI project. The coming months will see a redesign of the navigation bar, notifications, service blocks and relationship lines. We’ve given you a preview of some of these features in the visuals above.

Inayaili de León Persson

We might have been quiet, but we have been busy! Here’s a quick overview of what the web team has been up to recently.

In the past month we’ve worked on:

  • New juju.ubuntu.com website: we’ve revamped the information architecture, revisited the key journeys and updated the look to be more in line with www.ubuntu.com
  • Fenchurch (our CMS): we’ve worked on speeding up deployment and continuous testing
  • New Ubuntu OpenStack cloud section on www.ubuntu.com/cloud: we’ve launched a restructured cloud section, with links to more resources, clearer journeys and updated design
  • Juju GUI: we’ve launched the brand new service inspector

And we’re currently working on:

  • 13.10 release updates: the new Ubuntu release is upon us, and we’re getting the website ready to show it off
  • A completely new project that will be our mobile/responsive pilot: we’re updating our web patterns to a more future-friendly shape, investigating solutions to handle responsive images, and we’ve set up a (growing) mobile device testing suite — watch this space for more on this project
  • Fenchurch: we’re improving our internal demo servers and enhancing performance on the downloads page to help deal with release days!
  • Usability testing of the new cloud section: following the aforementioned launch, Tingting is helping us test these pages with their target audience — and we’ve already found loads of things we can improve!
  • A new canonical.com: we haven’t worked on Canonical’s main website in a while, so we’re looking into making it leaner and meaner. As a first stage, Carla has been conducting internal interviews and analysing the existing content
  • Juju GUI: we’re designing on-boarding and a new notification system, and we’re finalising designs for the masthead, service block and relationship lines

We’ve also learnt that Spencer’s favourite author is Paul Auster. And Tristram wrote a post on his blog about his first experience with Juju.

Spencer giving his 5×5 presentation at the web team meeting on 19 September 2013

Have you got any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Please let us know your thoughts in the comments.

Tingting Zhao

In past years, many Ubuntu users have got involved in helping with our user research. Now we feel it’s time to form a user research network, which we’re calling UbuntuVoice.

So, if you want to:

  • be the voice of over 20 million Ubuntu users. You will have the opportunity to take part in a variety of Ubuntu user research across different products, and help shape the Ubuntu experience. You choose the studies that interest you.

  • stay up to date with Ubuntu. Get periodic updates (every two months) via email, such as what designers are working on, how feedback is used, and how users behave when interacting with technology.

  • get a little something extra. Some of our research will come with an incentive, or in the form of a ‘Ubuntu goody’ lucky draw, and some research will be voluntary.

…then join us today by clicking here

If you have any questions, please feel free to contact us at: ubuntuvoice@gmail.com


Update: Thank you very much for everyone’s support for UbuntuVoice! We reached our target number of participants in just a day! Since we are a small team, we can’t take more participants at the moment. However, do keep your eyes on the design blog for updates.

Ubuntu user research team

Tingting Zhao

Previously, Charline Poirier provided an excellent post about how to recruit representative participants for usability testing. To continue the story, we are going to talk about the next stage: developing effective task sets, which is a crucial part of a test protocol.

We conduct usability testing iteratively and throughout the product life cycle. The testing interface can range from something as simple as paper images, to clickable prototypes, to a fully working system.

In order to assess the usability of an interface, we ask users to carry out a number of tasks using the interface. We use tasks that resemble those users would perform in a real-life context, so that the data we collect is accurate – in other words, so that the user behaviour we observe is representative, and the problems we find are those that users would be likely to encounter.


Designing testing tasks – ‘a piece of cake’?


When I first learnt about usability testing, I thought: ‘It’s simple: you just need to write some tasks and ask people to solve them, and done!’ But after conducting my first ever usability testing, I realised this was not the case. I had so many questions: I wasn’t sure where to start or what tasks should be used, and there were numerous details that needed to be thought through. You need to craft the tasks carefully.

Now, having conducted hundreds of usability testings, I would like to share my experience with you about how to design effective tasks. There are three main stages involved:

  • Decide on the tasks

  • Formulate the tasks

  • Be tactful in presenting the order of the tasks


Stage 1: Decide on the tasks

Before you sit down to compose a set of tasks, you are likely to go through the following stages:

  • Clearly establish the goal of the testing: specifically, what are the main features/areas that require feedback? When we conduct testing, we always have a face-to-face meeting with the design team to understand their focus and needs.

  • ‘Walkthrough’ with the design team: If testing an early prototype that has not been fully implemented, it’s important to go through the prototype with the designers so that you are aware of how it works, what is working and what is broken.

  • Inspection: go through the test interface at least three times. The first time, get an idea of the general flow and interaction of the interface. The second time, ‘put on the user’s hat’ and examine the interface by thinking about what users would do, paying attention to any possible difficulties they may experience; this is the stage where you can start to write down some of the potential tasks, covering the features you need to assess and the predicted problem areas. The third time, focus on developing the tasks as you go through the interface again: this gives you the opportunity to evaluate the tasks you identified, and to add or remove tasks. By the end, you will have a bank of potential tasks to work on.

Dumas and Fox (2008, p1131) provide a very good summary of the kinds of tasks that are likely to be involved in usability testing. It is in line with those we use in our own testing sessions in most contexts. These include:

  • tasks that are important, such as frequently performed tasks or tasks that relate to important functions;

  • tasks where evaluators predict users will have difficulties;

  • tasks that enable a more thorough examination of the system, such as those that can only be accomplished by navigating to the bottom of the system hierarchy, or tasks that have multi-links or shortcuts;

  • tasks that influence business goals;

  • tasks that examine the re-designed areas;

  • tasks that relate to newly-added features.

For this step, you don’t need to worry about how to phrase the task descriptions, but make sure all areas that you need to investigate are covered by your tasks.

Stage 2: Formulate the tasks

How well the tasks are formulated determines the reliability and the validity of the usability testing and the usefulness of the data. It’s crucial to get this right. You should consider:

  • The formats of tasks to be used
  • The articulation of the tasks

The formats of tasks

The tasks could be categorised into two main formats:

  • Direct tasks or Scenario tasks

  • Open-ended or Closed tasks

You need to decide what should be used, and when.

Scenario task or Direct task

A scenario task is presented as a mini user story: often it has the character, the context and the necessary details for achieving the goal. For example, to test the browser and bottom menu on the phone:

You are holding a dinner party this Saturday. You want to find a chicken curry recipe from the BBC food site.

A direct task is purely instructional. For instance, to use the above example:

Find a chicken curry recipe from the BBC food site.

Of these two types, we often use scenario tasks in testing. This is because they emulate a real-world context that participants can easily relate to, and consequently participants are more likely to behave in a natural way. This helps to mitigate the artificiality of user testing to a great extent: the closer the tasks are to reality, the more reliable the test results can be (e.g. Rubin, 1994; Dumas and Fox, 2008). In addition, some research (e.g. Shi, 2010) shows that scenario tasks work more effectively with Asian participants.

Interesting research: for Indian participants, Apala Lahiri Chavan’s research (Schaffer, 2002) shows that using a ‘Bollywood’ style task would elicit more useful feedback. For example:

Your innocent young sister is going to get married this Saturday, and you have just got news that the prospective groom is already married! So you want to book a flight ticket as soon as possible to find your sister and save her.

The researchers found that Indian participants felt reluctant to voice criticism to an unfamiliar facilitator, but once the task was phrased as a film-like story, the participants became more talkative and open.

Closed task or Open-ended task

A closed task is specific about what the participants need to do. This type of task has one correct answer, and therefore allows us to measure whether participants solved or failed a task. It is the most commonly used format. For example, to test the telephony on the phone:

 You want to text your landlord to say you will give her the rent tomorrow. Her number is: 7921233290.

An open-ended task contains minimal information and less specific direction as to what you want a participant to do. It gives users more freedom to explore the system. This is particularly useful if you want to find out which areas users would spontaneously interact with, or which ones matter most to them.

For example, in our Ubuntu.com testing, designers wanted to understand what information was important for users to get to know about Ubuntu. In this case, an open-ended task would be appropriate. I used the task:

You heard your friends mention something called ‘Ubuntu’. You are interested in it and want to find out more about what Ubuntu is and what it can offer you.

There are three main limitations of using open-ended tasks:

  • Since participants have control over the task, features that require user feedback might be missed; or vice versa, they may spend too much time on something that is not the focus of the testing. The remedy is to prepare a number of closed tasks, so that if certain features are not covered by the participants, these can be used.

  • Some participants may experience uncertainty as to where to look and when they have accomplished the task. Others may be more interested in getting the test done, and therefore do not put in as much effort as they would in reality.

  • You cannot assign task success rates to open-ended tasks, as there is no correct answer, so it is not suitable if a performance comparison is needed.

The articulation of the tasks

  • Avoid task cues that would lead users to the answers. Make sure the tasks do not contain task-solving actions or terms that are used in the interface. For example, in the Juju testing we wanted to know if participants understood the ‘browse’ link for browsing all the charms. We asked participants to find out the types of charms that are available, instead of saying ‘you want to browse the charms’.

  • Be realistic and avoid ambiguity. The tasks should be those that would be carried out in the real context, and the descriptions should be unambiguous.

  • Ensure an appropriate level of detail. A task should contain just enough information for participants to understand what they are supposed to do, but not so much that they are restricted from exploring naturally in their own way. The description of the context should not be too lengthy, otherwise participants may lose their focus or forget about it. When closed tasks are used, make sure they are specific enough that it is clear to participants when they have accomplished their goal. For example, compare ‘You want to show your friends a picture’ with ‘You want to show your friends a picture of a cow’ – which one is better? For the former, the goal is vague: participants are likely to click on the first or a random picture and assume the task is done, and as a result we might miss usability problems. The latter communicates the requirements more effectively: the task is accomplished once they have found the picture of a cow. Furthermore, it also provides more opportunities to assess navigation and interaction, as participants need to navigate among the pictures to find the relevant one.

 

Stage 3: Be tactful in presenting the order of the tasks

In general, the tasks are designed to be independent from each other for two reasons: to grant flexibility in changing the order of the tasks for different participants; and to allow participants to continue to the next task even if they failed the previous one.

However, in some contexts we use dependent tasks (where proceeding to one task depends on whether or not participants solved another task successfully) on purpose, for instance:

  • When there is a logical flow involved and the stages of a procedure must be followed. To use a very simple example, in order to test account ‘log in’ and ‘log out’, we need a task for ‘log in’ first, and then a task for ‘log out’.

  • When testing ‘revisiting’/’back’ navigation (e.g. whether participants can navigate back to a specific location they visited before) and multitasking concepts (e.g. whether participants know to use the multitasking facility). For example, when testing the tablet, I set the tasks as follows:

You want to write down a shopping list for all the ingredients you need for this recipe, using an app.

Here, participants need to find the notes app and enter the ingredients.

Then I had several tasks that were not related to the task above, for example:

 You remember that you will have an important meeting with John this coming Thursday at 10:00 in your office. You want to put it on your calendar before you forget.

Then I instructed participants:

You want to continue with your shopping list by adding kitchen roll to it.

This requires participants to go back to the notes app they opened earlier, from which we can find out if they knew to use the right edge swipe to get to the running apps – in other words, whether or not they understood the multitasking feature.

You will now have your first version of the tasks. On completion, you should always try the tasks out on the interface to check that they all make sense.

 

Summing up

We use tasks to discover the usability and user experience of an interface. The quality of the tasks determines how useful and accurate your testing results will be, and it takes time to hone your skills in writing them. Let me sum up some of the main points:

  • Define the goal(s) of the testing;

  • Familiarise yourself with the test interface and go through this interface at least 3 times;

  • Use the appropriate task formats and avoid any inclusion of task-solving cues;

  • Ensure the description is realistic, is at the right level of detail, and avoids ambiguity;

  • Consider the ordering of the tasks, and whether or not you need to use dependent tasks;

  • Pilot the task set on yourself first.

What happens next, after you have the list of tasks ready for the usability testing? It doesn’t end here.

If time allows, we always pilot the tasks with someone to make sure they are understandable and that the order of the tasks works. There are always changes you can make to improve the task set.

In addition, you will realise that once you are in the actual testing, no matter how perfect the task sets are, you will need to react instantly and make adjustments in response to the dynamics of the testing environment: we cannot predict what participants will do. It is therefore important to know how to adapt the task sets in real testing conditions. We will discuss this in the next post.

References

Dumas, J.S. & Loring, B.A. (2008). Moderating Usability Tests: Principles and Practices for Interacting. San Francisco, CA: Morgan Kaufmann.

Rubin, J. (1994). Handbook of Usability Testing: How to Plan, Design and Conduct Effective Tests. New York: John Wiley & Sons.

Schaffer, E. (2002). Bollywood technique. http://www.humanfactors.com/downloads/jun02.asp#bollywood

Shi, Q. (2010). An Empirical Study of Thinking Aloud Usability Testing from a Cultural Perspective. PhD thesis. Denmark: University of Copenhagen.

 

 

Read more
Lina Pio

Over the past few weeks we’ve been exploring visual directions for the calendar app. It’s a pretty exciting opportunity to create something fresh and at the same time useful. In this post I’ll take you through some of the directions we’re looking at right now and where we hope to eventually go. At this stage the designs are still under consideration.

Year view

This view offers a lot of challenges, particularly given the large amount of information that has to be compacted into such a small space. The challenge was to provide something that could inform the user quickly and usefully without overloading the screen with information. Each month is clickable; individual dates, however, will need to be selected from the month view.

01_year copy

Month view

As with the year view, it’s a tough call to keep the month view looking and feeling smooth and simple. Because of this, we decided to use the month view to provide the user with an overview of the dates in that month, from which they can only select a date. Instead of filling in the events inside the month view, the user can see the events at a glance inside the week view. We explored two different ways of laying out the month view visually.
02_month copy    02_month2 copy

Week view

In this view we experimented with the visual layout: how much screen space the chrome takes up, and how different calendar events can be represented visually using coloured blocks versus coloured dots.

03_week1 copy 04_week2 copy 04_week3 copy 04_week4 copy 04_week5 copy

 

 

Day view

With the day view, as with the week view, we looked at reducing the chrome around the day box to give more space to what the user most needs to see – the events during that day.

05_day1 copy 06_day2 copy

Event

Event view tends to be a different type of interface from the others. Whereas in the other views a user’s prime activity is navigating through information, the event is the goal in itself, presenting a list of information. Because of this, a white background may be a better solution for presenting large amounts of text, making it easier on the eye. One thing that is still in design at the moment is the ability to select the date and time when creating a new event.

07_new_event1 copy 08_new_event2 copy 09_new_event3 copy 10_event_detail copy 10_event_detail2

We hope you enjoyed going through our visuals and thought process. Watch this space next time for more visuals on date and time picker to go along with event view.

 

Video

Here’s a video to show how the interactions and transitions will eventually function.

Read more
Katie Taylor

Edges are special to us. We use them for finding apps, tools and system services, so using the edges will be second nature to Ubuntu phone users. By using the launcher, launching your favourite app will become ingrained in the muscle memory of the left edge.

The design vision behind Ubuntu for phones includes the use of fast and natural interactions, so taking that to the welcome screen means that if your phone is locked, you can still access the launcher, system services and the right edge. If you have a pin set up, you only need to enter your pin when accessing private data, in the Gallery app or the Dash for example.

 

 

If you’ve flashed your phone recently, you will be able to activate the lock screen for the phone using a temporary hack (love it!). You’ll notice that the blur has not yet been implemented, but will be added later. Thanks to Michael Zanetti for originally posting instructions to the Ubuntu Phone mailing list. Here they are:

To enable the pin lock, log into the phone and create a file /home/phablet/.unity8-greeter-demo with the content:

password=pin

If you want to see the password unlock screen instead, put this into the file:

password=keyboard

For now, the pin is hardcoded to “1234” and the password is “password”. Note that this functionality can (and will) disappear at any time as we bring all the bits and pieces together. This is a temporary, simple way to enable the visual part of the lock screen for us all to have a play with.

Let us know what you think on the Ubuntu Phone mailing list and the IRC channel.

Read more
Calum Pringle

It’s been a while since our last update to the app design guides so I thought it was about time I shared the latest additions to this growing resource.

Screen sizes

A brief intro to the framework we use for designing for a scalable OS: the grid unit. It links directly to a more detailed explanation on developer.ubuntu.com.

Read about designing for multiple screen sizes.
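As a quick illustration, here is a minimal QML sketch of grid units in action (our own example, not taken from the design guides; the import version reflects the SDK at the time of writing):

    import QtQuick 2.0
    import Ubuntu.Components 0.1

    // Sizes are declared in grid units (GU) rather than pixels; the toolkit
    // maps one grid unit to an appropriate number of pixels per device, so
    // the same layout scales from phone to tablet to desktop.
    MainView {
        width: units.gu(50)   // 50 grid units wide, whatever the pixel density
        height: units.gu(75)

        Button {
            anchors.centerIn: parent
            width: units.gu(20)
            text: i18n.tr("Hello Ubuntu")
        }
    }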

FAQs

We’ve started to collect frequently asked questions. This section could be improved if it were a little more ‘live’, so we’ll have a think about that.

Read our most frequently asked questions.

Combo button

When you are receiving a phone call, it is possible to decline the call (of course), or to decline and reply with a message. To accommodate this and similar use cases we have designed the combo button. Use the combo button to display secondary variations of the primary action.

See our new combo button.

Option selector

While designing System Settings we have come across many situations where there is a need to select from a list of mutually exclusive options. Use the option selector when you need to select an option from a list.

See our new option selector building block.
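To give a flavour of how this looks in code, here is a minimal sketch using the toolkit’s OptionSelector component (our own example; treat the exact property names as an assumption based on the SDK docs of the time):

    import QtQuick 2.0
    import Ubuntu.Components 0.1

    // A list of mutually exclusive options; only one can be selected.
    OptionSelector {
        text: "Ring tone"
        model: ["Marimba", "Chimes", "Beep"]
        onSelectedIndexChanged: console.log("picked option " + selectedIndex)
    }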

Slider

Our slider has gone through a little makeover too.

Take a look here.
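And, for completeness, a minimal sketch of a slider in QML (again our own example, not from the guide):

    import QtQuick 2.0
    import Ubuntu.Components 0.1

    // A continuous slider; 'live' makes the value update while dragging.
    Slider {
        minimumValue: 0
        maximumValue: 100
        value: 50
        live: true
        onValueChanged: console.log("volume: " + value.toFixed(0))
    }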

Remember, this site is a work in progress, so we will continue to iterate on the content and design. As usual you can find us on the Ubuntu Phone mailing list and the IRC channel.

Read more
Lina Pio

One of the key challenges with designing calendar applications is the number of ways you can display your time, whether by year, month, week or day. After a lot of good old-fashioned hard work, we refactored navigation by making the tab header the key to switching between views. Although the direction I’ll take you through in this article is strong and clean, it’s still a work in progress and, as such, can still change. The images in this article are small; to get a closer look at all of them collected together, download the PDF here.

The latest designs in this article show you how we’ve aimed to solve:

  • Navigation between different calendar views
  • Gestures to help quick navigation
  • Editing events
  • Creating events
  • How this will potentially look and feel

 

Different views

There are 5 different view templates inside the calendar app we are focusing on. They are:

  • Year
  • Month
  • Week
  • Day
  • Event

 


Navigating between different views using the title header bar

You can move through the different views by tapping on the title header bar to toggle the view mode options. Just like our patterns, this title is scrollable, so you can scroll through the view modes which don’t fit the width of the screen. Also like our patterns, swiping to the left or right moves along to the next or previous unit (year/month/week/day/event) in its category. For reference, take a look at Calum’s excellent post on this.

To get a better idea, click here to see a video of the prototype which formed the base of this navigation model and allowed us to test it out, comparing and contrasting it against other design directions. It was enough to give us a feel for the potential final build.
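For readers curious how this maps to code, here is a hypothetical QML sketch of the view modes expressed with the toolkit’s standard Tabs pattern (the real app’s implementation may well differ):

    import QtQuick 2.0
    import Ubuntu.Components 0.1

    MainView {
        Tabs {
            // Each view mode is a tab; the scrollable tab header lets the
            // user tap or swipe between Year, Month, Week and Day.
            Tab { title: i18n.tr("Year");  page: Page { /* year grid */ } }
            Tab { title: i18n.tr("Month"); page: Page { /* month grid */ } }
            Tab { title: i18n.tr("Week");  page: Page { /* week columns */ } }
            Tab { title: i18n.tr("Day");   page: Page { /* day timeline */ } }
        }
    }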

 

Navigating between views using spread and pinch gestures

To aid fast navigation for pro users, and also to add an element of fun, we’ve decided to enable zooming in and out between views using finger spread and pinch gestures, similar to zooming in and out in a map app.

Spreading fingers gesture: this zooms in to the next view, which offers more detail, close up.

     E.g. A user spreads when on the year view. This opens up the month view. Spreading on the month view opens up the week view, and so on.

Pinching fingers gesture: This zooms out, to a less detailed view – the previous view in the view hierarchy.

     E.g. A user pinches on month view, the system responds by taking the user to the year view.
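Here is a hypothetical sketch of how such a gesture could be wired up with QtQuick’s PinchArea (not the app’s actual code; the thresholds and view names are ours):

    import QtQuick 2.0

    PinchArea {
        anchors.fill: parent

        // The view hierarchy, from least to most detailed.
        property var views: ["year", "month", "week", "day"]
        property int current: 1   // start on the month view

        onPinchFinished: {
            if (pinch.scale > 1.2 && current < views.length - 1)
                current += 1      // spread: zoom in to a more detailed view
            else if (pinch.scale < 0.8 && current > 0)
                current -= 1      // pinch: zoom out to a less detailed view
            console.log("switch to " + views[current] + " view")
        }
    }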

 

An event

An event has several detail fields (see the sketch after this list). In order of appearance they are:

  • Event name
  • Time
  • Description
  • Location
  • Guests
  • This happens (how many times does this event happen in the series, or is it a one-time event?)
  • Remind me
  • Timezone
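Purely as an illustration, those fields could be represented as a simple record like this (a hypothetical sketch in QML’s JavaScript; the field names are ours, not the app’s actual schema):

    // Hypothetical event record, fields in their order of appearance above.
    var event = {
        name: "Meeting with John",
        time: "Thursday 10:00",
        description: "Quarterly review",
        location: "My office",
        guests: ["John"],
        thisHappens: "once",            // or a recurrence rule for a series
        remindMe: "10 minutes before",
        timezone: "Europe/London"
    };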

 

Editing an event

A bottom edge swipe on an event page brings up the toolbar with the edit button.

[NOTE: toolbar menu options within the calendar and across the whole system have not been finalised, this image of the toolbar is a placeholder to give an idea of how to edit]

Edit mode shows boxes around the fields, allowing the user to type and change the event details. The toolbar in edit mode is always present, showing cancel and save options.
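As a rough idea of how this could be expressed with the SDK’s toolbar components, here is a hypothetical sketch (especially tentative given the note above that the toolbar options are not finalised):

    import QtQuick 2.0
    import Ubuntu.Components 0.1

    Page {
        title: "Event"

        // The toolbar revealed by a bottom edge swipe; in edit mode it
        // would stay open and show cancel/save actions instead.
        tools: ToolbarItems {
            ToolbarButton {
                text: "Edit"
                onTriggered: pageStack.push(editEventPage)  // hypothetical page
            }
        }
    }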

 

Creating an event

A bottom edge swipe on the year, month, week and day views brings up the toolbar with the option ‘New’ to create a new event. Pressing this brings up a template similar to ‘Edit’ mode, the only difference being the blank fields.

 

Visuals

The visuals in the image below are an exploration of how this can potentially look and feel. This is still very much in progress, but it gives a strong hint of what’s to come.

 

 

I hope you like our thoughts and directions on this, and that this article gives a stronger idea of how the final app will look and behave.

Watch this space for my upcoming articles focusing on: an in-depth look at events (including guest contacts, location views, time and date pickers, etc.), calendar syncing with external accounts, calendar settings, and calendar mode inside the indicators.

Read more
Martin Keary

This is a presentation of our ‘Paper’ Motion theme for Ubuntu Mobile.

The theme is informed by the ‘paper’ graphic style of the mobile OS, and we have sought to accentuate it wherever possible. Rather than using more overt effects like page curling and folding, we have hinted at the theme by using multiple layers, ‘stacking’ and suggestive effects. Multiple layers of sliding paper can be observed in the animation of the switch button, stacking can be seen on the icons in the launcher, and a suggestive page-turning effect can be seen in the ‘App Stacking’ example.

Read more