Canonical Voices

Posts tagged with 'research'

Carla Berkers

As the number of Juju users has grown rapidly over the past year, so has the number of new solutions in the form of charms and bundles. To help users assess and choose solutions, we felt it would be useful to improve the visual presentation of charm and bundle details on manage.jujucharms.com.

While we were in Las Vegas, we took the opportunity to work with the Juju developers and solutions team to find out how they find and use existing charms and bundles in their own deployments. Together we evaluated the existing browsing experience in the Juju GUI and went through JSON files line by line to understand what information we hold on charms.


We used post-its to capture every piece of information that the database holds about a bundle or charm that is submitted to charmworld.
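
If you would like to run a similar audit on your own data, a few lines of Python can flatten a JSON record into one field per line – one post-it per line. This is only a sketch: the file name and record structure are placeholders, not charmworld’s actual schema.

```python
import json

# Illustrative sketch: flatten one record into "key path: value" lines,
# one post-it per line. "charm.json" and its structure are placeholders.
def walk(prefix, value):
    """Recursively yield (dotted key path, leaf value) pairs."""
    if isinstance(value, dict):
        for key, child in value.items():
            yield from walk(f"{prefix}.{key}" if prefix else key, child)
    elif isinstance(value, list):
        for i, child in enumerate(value):
            yield from walk(f"{prefix}[{i}]", child)
    else:
        yield prefix, value

with open("charm.json") as f:
    record = json.load(f)

for path, leaf in walk("", record):
    print(f"{path}: {leaf}")
```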

 

With the grouped and prioritised information in mind, we created the first draft of the wireframes. We created small-screen wireframes first to really focus on the most important content and how it could be displayed in a linear way. After showing the wireframes to a couple more people, we used our guidelines to create mobile designs that we can scale up to tablet and desktop.

 

In order to verify and test our designs, we made them modular. Over time it will be easy to move content around if we want to test whether another priority works better for a certain solution. The mobile-first approach is a great tool for making sense of complex information, and it forced us to prioritise the content around users’ needs.


First version designs.

Carla Berkers

I’d like to share my experience working on the project that has been my main focus over the past months: the redesign of canonical.com.

Research methods

As I started talking to people in the design department, I quickly discovered that we have a lot of information about our site visitors. My colleagues helped me access Google Analytics data and findings from previous user testing sessions. There was also a research-based set of personas that helped me put together an initial overview of user needs and tasks.

I was curious to validate these user needs and test them against Canonical’s current business requirements. To find out more about the company goals, I prepared a stakeholder interview script and started putting together a list of people to talk to. I initially planned to limit the number of interviewees to six to eight stakeholders, as too many opinions could slow down the project and complicate the requirements.

Getting to know the company

I started with eight people to talk to, but with each interview I found out about other people I should add to my list. At the same time, many of the interviewees warned me that every person I talked to would have different ideas about the site requirements. By the end of the first round of interviews, ending up with too many stakeholders had turned out to be the most commonly mentioned risk to the project finishing on time.

I had very diverse conversations about different aspects of the site with a range of people. From strategic insights from our CEO Jane, to brand guidelines and requirements from our Head of Design Marcus and ideas around recruitment from our HR business partner Alice — each conversation brought unique requirements to light.

After I had spoken to about fifteen people, I summarised the key points from each stakeholder on post-it notes and put them all up on a wall in one of the meeting rooms in the office. As I took out the duplicates and restructured the remaining notes, I began to see a familiar pattern.

Conclusions

When I finished grouping the different audiences, I ended up with five groups of users: enterprise customers, (potential) partners, job seekers, media (a varied group that includes press, tech analysts and bloggers), and a final group of open source advocates and the more generic tech enthusiasts who want to know more about the company backing Ubuntu.

As these groups aligned very well with the personas and the other research I had found, I felt comfortable moving on to the user needs and site goals that will help build a good site structure and generate useful content for each group of users.

I found that talking to many experts from within the company helped me quickly understand the full range of requirements, saving me time rather than complicating my job. Furthermore, I was happy to have the chance to get to know people from different parts of the company so soon after starting.

In order to keep the project moving forward, we appointed one key stakeholder to sign off each step of the process, but I’m looking forward to showing the end results to the broader group to see if I managed to meet all their expectations. We will also conduct user testing to ensure the site answers our core audiences’ questions and allows them to complete their tasks.

I hope to share more about this project in the months to come.

Tingting Zhao

In the previous post, we talked about how to design effective user testing tasks to evaluate the usability of an interface. This post continues the topic by highlighting a number of key strategies you may need when conducting formative user testing, whose main aim is to identify usability problems and propose design solutions rather than to compare quantitative metrics such as task completion time and mouse clicks (summative testing). It is unlikely that the prepared task script can be applied strictly and without changes, since the testing situation tends to be dynamic and often unpredictable. To get useful data, you need to be able to adapt your task script flexibly while also maintaining consistency.

Changing task order during testing

Normally, to avoid the order effect, the order in which tasks are issued should be randomised for each participant. The order effect refers to the way the order in which tasks are presented can affect the results: users may perform better in later tasks as their familiarity with the interface increases, or they may perform worse as they become fatigued. However, as discussed in the previous post, tasks are often contextual and dependent on each other, so you need to consider carefully which tasks can be shuffled. It is good practice to mark dependent tasks on the script, so that you know they should not be reordered or separated from each other; in other words, dependent tasks must always be moved together. It is worth noting that randomising task order may not always be possible, for example when the tasks are procedurally related, such as in a test focusing on a payment flow.
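
One way to picture this is to store the script as groups of tasks and shuffle only the groups, so that dependent tasks always travel together. The sketch below is purely illustrative – the task names are made up:

```python
import random

# Illustrative task script: each inner list is a group. Dependent tasks share
# a group and keep their relative order; independent tasks stand alone.
task_groups = [
    ["check the battery level"],
    ["log in to the account", "log out of the account"],  # dependent pair
    ["send a text to the plumber"],
    ["add Thursday's meeting to the calendar"],
]

def randomised_order(groups):
    """Shuffle the groups, never the tasks inside a group."""
    shuffled = list(groups)
    random.shuffle(shuffled)
    return [task for group in shuffled for task in group]

# Each participant gets a different order; dependencies stay intact.
for participant in (1, 2, 3):
    print(participant, randomised_order(task_groups))
```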

Sometimes you may need to change the task order based on the tasks’ levels of difficulty. This is useful in two scenarios: when you notice a participant appears slightly nervous before the test has started, provide a simple first task to put him/her at ease; or when you observe that a participant has failed several tasks in a row, provide one or two easy tasks to reduce the frustration and stress, and to boost confidence.

Another type of order change is made in response to goals that users spontaneously express in connection with an upcoming task. For example, in one phone test, after a participant checked the battery level, s/he spontaneously expressed a desire to know if there was a way to switch off some running apps to save battery. In this case, we jumped to the task of closing running apps rather than waiting until later. This makes the testing feel more natural.

Removing tasks during testing

There are typically two situations that require you to skip tasks:

  • Time restriction

  • Questions already answered by previous tasks

Time restriction: user testing normally has a time limit, and participants are paid for a certain length of time. Ideally, all the tasks should be carried out by every participant. However, sometimes participants take longer than expected to solve tasks, or you may discover areas that require more time for investigation. In such cases, not all the tasks can be performed by a participant within the given time, so you need to be able to decide quickly which tasks to abandon for that specific participant. There are two ways to approach this:

  • Omit tasks that are less important: it is always useful to prioritise the tasks in terms of their importance – which areas raise the key questions that need answering and require feedback, and what could be left for the next round of testing if not covered this time? (A small sketch of this idea follows this list.)

  • Omit tasks that have already received abundant feedback: skip tasks for which you have already gathered rich and useful information from other participants.
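
As a rough illustration of the first point, a prioritised script can be trimmed to the time remaining in a session. The task names, priorities and timings below are invented for the example:

```python
# Illustrative prioritised script: (task, priority, estimated minutes),
# where a lower priority number means more important.
script = [
    ("find a chicken curry recipe", 1, 8),
    ("send a text to the plumber", 1, 6),
    ("change the wallpaper", 3, 5),
    ("add an event to the calendar", 2, 7),
]

def trim_to_budget(tasks, minutes_left):
    """Keep the most important tasks that still fit in the remaining time."""
    kept, used = [], 0
    for task, _priority, estimate in sorted(tasks, key=lambda t: t[1]):
        if used + estimate <= minutes_left:
            kept.append(task)
            used += estimate
    return kept

print(trim_to_budget(script, 20))  # drops whatever no longer fits
```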

Questions already answered by previous tasks: sometimes the questions associated with a specific task are answered while a participant is attempting to solve a previous task – in that case, you can skip the task.

In one of our phone tests, we asked a participant to send a text to someone not in their contacts (a plumber). During the task-solving process, s/he decided to save the number to the contact book first and then send the text. In this case, we skipped the task of ‘saving a number to the contact book’.

However, sometimes you should not skip a task, even if it might seem repetitive. For example, if you want to test the learnability and memorability of a feature, having the participant perform the same task (with a slightly different description) a second time, after a certain interval, can afford useful insights.

Adding tasks during testing

There are three contexts in which you could consider adding tasks:

  • Where the user formulates new goals

  • Re-examinations

  • Giving the user a second chance

The added task must be relevant to the aim of the testing, and should only be included if the testing time permits.

User formulates new goals: you can add tasks based on goals users formulate during the task-solving process.

For example, in one phone test, a participant wondered if s/he could customise the tiles on the Windows phone’s home screen, so we made this an added task. Adding tasks based on users’ newly articulated goals follows their thought process and makes the testing more natural. It also provides opportunities for us to discover new information.

Re-examinations: sometimes a user may succeed at a task accidentally, without knowing how s/he did it. In this case, the same task (with a slightly changed description) can be added to re-assess the usability.

For example, in one phone test we had the task: “You want to call your sister Lisa to say thank you for the phone”. One participant experienced great difficulty performing this task, and only completed it after a long time and by accident. In this case, we added another task to re-evaluate the ease of making a phone call:

“Your call is cut off while you are talking to your sister, so now you want to call her again.”

Similarly, in the Gallery app test, where participants managed to add a picture to a selected album accidentally, we asked them to add another picture to a different album.

Re-examination allows us to judge the impact of a problem accurately, as well as to understand the learnability of the interface – the extent to which users can detect and learn interaction patterns (even by accident) and apply the rules later.

Giving the user a second chance: in most user tests, participants are using the evaluated interface for the first time, and it can be very demanding for them to solve the tasks successfully at the first attempt. However, as the testing progresses, participants may discover more things, such as features and interaction patterns (possibly by accident), and their knowledge of the interface may increase. In this case, you can give them another chance to solve a task they failed earlier in the session. Again, this helps you to test the learnability of the interface, as well as to assess the impact of a problem.

For example, in a tablet test, one participant could not find the music scope early in the session, but later s/he accidentally discovered the video scope. To test whether s/he now understood the concept of dash scopes, we asked the participant to find the music scope again after several other tasks.

Changing task descriptions (slightly) during testing

Information gathered from the brief pre-test interview, and from participants’ comments during the session, can often be used to modify a task description slightly to make the task more realistic to the user. This also gives users the impression that you are an active listener interested in their comments, which helps to build rapport with them. The change should be minor and limited to the details of the scenario (not the aim of the task), and it must not compromise consistency with other participants’ task descriptions.

For example, in a tablet test where we needed to evaluate the discoverability of the HUD in the context of photo editing, we had this task: “You want to do some advanced editing by adjusting the colour of the picture.” One participant commented that s/he often applied a ‘black and white’ effect to pictures. In response, we changed the task to: “You mentioned that you often change a picture to black and white, and now you want to change this picture to ‘black and white’”. The change alters neither the aim of the task nor the requirements for solving it (in this case, access to the HUD), but the task becomes more relatable to the participant.

Another example is from a phone test. We changed the task “you want to go to Twitter” to “you want to go to Facebook” after learning that the participant used Facebook but not Twitter. Had we kept asking this participant to find Twitter, the test would have felt artificial, resulting in invalid data. The aim of the task is to evaluate the ease of navigating to an app, so changing Twitter to Facebook does not change the nature of the task.

Conclusions

This post outlines the main strategies you can use to modify your task script to deal with typical situations that may occur in formative user testing. To sum up:

Changing task order: randomise the task order for each participant if possible, and move dependent tasks as a whole; consider task difficulty, starting with an easy task if a participant seems nervous, or issuing an easy task after a participant has failed several in a row; and allow participants to jump to a later task if they verbalise it as a goal or strategy while solving the current one.

Removing tasks: if time is running out with a particular participant, omit certain tasks. These could be tasks with low priority, tasks that already have enough feedback from other participants, or tasks the participant has already covered while attempting a previous task.

Adding tasks: if time permits, let users perform a new task if it is a user-initiated goal relevant to the testing; repeat a task (with slightly different wording, at an appropriate time) if the user succeeded accidentally, failed the task earlier, or if the aim is to test the learnability of the system.

Changing task descriptions: slightly amend the details of the task scenario (not the aim of the task) based on users’ own words to make it more relatable and realistic to the user. This will improve the reliability of the data.

If you have other ways of manoeuvring tasks during a testing session, or situations you are unsure about, feel free to share your experience and thoughts.

Tingting Zhao

In recent years, many Ubuntu users have got involved in helping with our user research. Now we feel it’s time to form a user research network, which we’re calling UbuntuVoice.

So, if you want to:

  • be the voice of over 20 million Ubuntu users. You will have the opportunity to take part in a variety of Ubuntu user research across different products, and help shape the Ubuntu experience. You choose the projects that interest you.

  • stay up to date with Ubuntu. Get periodic updates (every two months) via email, such as what designers are working on, how feedback is used, and how users behave when interacting with technology.

  • get a little something extra. Some of our research will come with an incentive or an ‘Ubuntu goody’ lucky draw, and some research will be voluntary.

…then join us today by clicking here

If you have any questions, please feel free to contact us at: ubuntuvoice@gmail.com

 

Update: Thank you very much for everyone’s support for UbuntuVoice! We reached our target number of participants in just a day! Since we are a small team, we can’t take more participants at the moment. However, do keep your eyes on the design blog for updates.

Ubuntu user research team

Tingting Zhao

Previously, Charline Poirier provided an excellent post about how to recruit representative participants for usability testing. To continue the story, we are going to talk about the next stage: developing effective task sets, a crucial part of a test protocol.

We conduct usability testing iteratively, throughout the product life cycle. The test interface can range from something as simple as paper images, to clickable prototypes, to a fully working system.

In order to assess the usability of an interface, we ask users to carry out a number of tasks with it. We use tasks that resemble those users would perform in a real-life context, so that the data we collect is accurate: the user behaviour we observe is representative, and the problems we find are those users would be likely to encounter.

 

Design testing tasks – ‘a piece of cake’?

 

When I first learnt about usability testing, I thought: ‘It’s simple: you just write some tasks, ask people to solve them, and you’re done!’ But after conducting my first ever usability test, I realised this was not the case. I had so many questions: I wasn’t sure where to start or which tasks to use, and there were numerous details that needed to be thought through. Tasks have to be carefully crafted.

Now, having conducted hundreds of usability tests, I would like to share my experience of how to design effective tasks. There are three main stages involved:

  • Decide on the tasks

  • Formulate the tasks

  • Be tactful in presenting the order of the tasks

 

Stage 1: Decide on the tasks

Before you sit down to compose a set of tasks, you are likely to go through the following stages:

  • Clearly establish the goal of the testing: specifically, which features or areas require feedback. When we conduct testing, we always have a face-to-face meeting with the design team to understand their focus and needs.

  • ‘Walkthrough’ with the design team: if you are testing an early prototype that has not been fully implemented, it’s important to go through it with the designers so that you are aware of how it works, what is working and what is broken.

  • Inspection: go through the test interface at least three times. The first time, get an idea of the general flow and interaction of the interface. The second time, ‘put on the user’s hat’: examine the interface by thinking about what users would do, and pay attention to any difficulties they may experience; this is where you can start writing down potential tasks that cover the features you need to assess and the predicted problem areas. The third time, focus on developing the tasks as you go through the interface again; this gives you the opportunity to evaluate the tasks you have identified, and to add or remove tasks. By the end, you will have a bank of potential tasks to work from.

Dumas and Fox (2008, p. 1131) provide a very good summary of the kinds of tasks that are likely to be involved in usability testing, and it is in line with the tasks we have used in our own sessions in most contexts. These include:

  • tasks that are important, such as frequently performed tasks or tasks that relate to important functions;

  • tasks where evaluators predict users will have difficulties;

  • tasks that enable a more thorough examination of the system, such as those that can only be accomplished by navigating to the bottom of the system hierarchy, or that can be reached via multiple links or shortcuts;

  • tasks that influence business goals;

  • tasks that examine the re-designed areas;

  • tasks that relate to newly-added features.

For this step, you don’t need to worry about how to phrase the task descriptions, but make sure all areas that you need to investigate are covered by your tasks.

Stage 2: Formulate the tasks

How well the tasks are formulated determines the reliability and the validity of the usability testing and the usefulness of the data. It’s crucial to get this right. You should consider:

  • The formats of tasks to be used
  • The articulation of the tasks

The formats of tasks

Tasks can be categorised into two main formats:

  • Direct tasks or Scenario tasks

  • Open-ended or Closed tasks

You need to decide what should be used, and when.

Scenario task or Direct task

A scenario task is presented as a mini user story: it typically has a character, a context and the details necessary for achieving the goal. For example, to test the browser and bottom menu on the phone:

You are holding a dinner party this Saturday. You want to find a chicken curry recipe from the BBC food site.

A direct task is purely instructional. For instance, to use the above example:

Find a chicken curry recipe from the BBC food site.

Of these two types, we mostly use scenario tasks in testing, because they emulate a real-world context that participants can easily relate to, and consequently participants are more likely to behave in a natural way. This helps to mitigate the artificiality of user testing to a great extent. The closer the tasks are to reality, the more reliable the test results can be (e.g. Rubin, 1994; Dumas and Fox, 2008). In addition, some research (e.g. Shi, 2010) shows that scenario tasks work more effectively with Asian participants.

Interesting research: for Indian participants, Apala Lahiri Chavan’s research (Schaffer, 2002) shows that a ‘Bollywood’-style task elicits more useful feedback. For example:

Your innocent and young sister is going to get married this Saturday, and you have just got news that the prospective groom is already married! So you want to book a flight ticket as soon as possible to find your sister and save her.

The researchers found that Indian participants were reluctant to voice criticism to an unfamiliar facilitator, but once the task was phrased as a film-like story, they became more talkative and open.

Closed task or Open-ended task

A closed task is specific about what the participant needs to do. This type of task has one correct answer, and therefore allows us to measure whether participants solved or failed it. It is the most commonly used format. For example, to test the telephony on the phone:

You want to text your landlord to say you will give her the rent tomorrow. Her number is: 7921233290.

An open-ended task contains minimal information and less specific direction about what you want a participant to do. It gives users more freedom to explore the system. This is particularly useful if you want to find out which areas users spontaneously interact with, or which matter most to them.

For example, in our Ubuntu.com testing, designers wanted to understand what information was important for users to get to know about Ubuntu. In this case, an open-ended task would be appropriate. I used the task:

You heard your friends mention something called ‘Ubuntu’. You are interested and want to find out more about what Ubuntu is and what it can offer you.

There are three main limitations of using open-ended tasks:

  • Since participants have control over the task, features that require user feedback might be missed; or, vice versa, participants may spend too much time on something that is not the focus of the testing. The remedy is to prepare a number of closed tasks, so that if certain features are not covered by the participants, these can be used.

  • Some participants may experience uncertainty about where to look and when they have accomplished the task. Others may be more interested in getting the test done, and therefore not put in as much effort as they would in reality.

  • You cannot assign task success rates to open-ended tasks, as there is no correct answer, so it is not suitable if a performance comparison is needed.

The articulation of the tasks

  • Avoid task cues that would lead users to the answers. Make sure the tasks do not mention task-solving actions or terms that are used in the interface. For example, in the Juju testing we wanted to know if participants understood the ‘browse’ link for browsing all the charms, so we asked participants to find out what types of charms are available, instead of saying ‘you want to browse the charms’.

  • Be realistic and avoid ambiguity. The tasks should be those that would be carried out in the real context, and the descriptions should be unambiguous.

  • Ensure an appropriate level of detail. A task should contain just enough information for participants to understand what they are supposed to do, but not so much that they are prevented from exploring naturally in their own way. The description of the context should not be too lengthy, otherwise participants may lose focus or forget it. When closed tasks are used, make sure they are specific enough that it is clear to participants when they have accomplished their goal. For example, compare ‘You want to show your friends a picture’ with ‘You want to show your friends a picture of a cow’ – which one is better? In the former, the goal is vague: participants are likely to click on the first or a random picture and assume the task is done, so we might miss usability problems. The latter communicates the requirements more effectively: the task is accomplished once they find the picture of a cow. Furthermore, it provides more opportunities to assess navigation and interaction, as participants need to navigate among the pictures to find the relevant one.

 

Stage 3: Be tactful in presenting the order of the tasks

In general, tasks are designed to be independent of each other, for two reasons: to grant flexibility in changing the order of tasks for different participants; and to allow participants to continue to the next task even if they failed the previous one.

However, in some contexts we deliberately use dependent tasks (where proceeding to one task depends on whether participants solved another successfully), for instance:

  • When there is a procedural flow involved and its stages must be followed in order. To use a very simple example, in order to test account ‘log in’ and ‘log out’, we need a task for ‘log in’ first, and then a task for ‘log out’.

  • When testing ‘revisiting’/‘back’ navigation (e.g. whether participants can navigate back to a specific location they visited before) and multitasking concepts (e.g. whether participants know to use the multitasking facility). For example, when testing the tablet, I had the following tasks:

You want to write down a shopping list for all the ingredients you need for this recipe using an app

Here, the participants will need to find the note app and enter ingredients.

Then I had several tasks that were not related to the task above, for example:

You remember that you will have an important meeting with John this coming Thursday at 10:00 in your office. You want to put it on your calendar before you forget.

Then I instructed participants:

You want to continue with your shopping list by adding kitchen roll to it.

This requires participants to go back to the note app they opened earlier, from which we can find out if they know to use the right-edge swipe to get to the running apps – in other words, whether or not they understand the multitasking feature.

Now you have the first version of your tasks. On completion, you should always try the tasks out against the interface to check that they all make sense.

 

Summing up

We use tasks to discover the usability and user experience of an interface. The quality of the tasks determines how useful and accurate your testing results will be, and it takes time to hone your skills in writing them. Let me sum up the main points:

  • Define the goal(s) of the testing;

  • Familiarise yourself with the test interface and go through this interface at least 3 times;

  • Use the appropriate task formats and avoid any inclusion of task-solving cues;

  • Ensure the description is realistic, is at the right level of detail, and avoids ambiguity;

  • Consider the ordering of the tasks, and whether or not you need to use dependent tasks;

  • Pilot the task set with yourself.

What happens next, after you have the list of tasks ready for the usability testing? It doesn’t end here.

If time allows, we always pilot the tasks with someone to make sure they are understandable and that the order of the tasks works. There are always changes you can make to improve the task set.

In addition, you will realise that once you are in the actual testing, no matter how perfect the task set is, you will need to react instantly and make adjustments in response to the dynamics of the testing environment: we cannot predict what participants will do. It is therefore important to know how to adapt the task set in real testing conditions. We will discuss this in the next post.

References

Dumas, J.S. & Fox, J.E. (2008). Usability Testing: Current Practice and Future Directions. In A. Sears & J.A. Jacko (eds.), The Human-Computer Interaction Handbook (2nd ed.). New York: Lawrence Erlbaum Associates.

Dumas, J.S. & Loring, B.A. (2008). Moderating Usability Tests: Principles and Practices for Interacting. San Francisco, CA: Morgan Kaufmann.

Rubin, J. (1994). Handbook of Usability Testing: How to Plan, Design and Conduct Effective Tests. New York: John Wiley & Sons.

Schaffer, E. (2002). Bollywood Technique. http://www.humanfactors.com/downloads/jun02.asp#bollywood

Shi, Q. (2010). An Empirical Study of Thinking Aloud Usability Testing from a Cultural Perspective. PhD thesis. Denmark: University of Copenhagen.

 

 

Tingting Zhao

Understanding user behaviour through user research is an integral part of our design process. In the last round of ubuntu.com website testing, some insights surfaced about user behaviour that could help to shape a great user experience for our website. We share the three main ones here. They have been much discussed in the UX field, and the findings from our testing reinforce their importance.

Who were the participants?

12 participants took part in this research. They belonged to two different groups:

  • Ubuntu novices: those with limited computer knowledge who had not heard of or used Ubuntu before. Eight participants were from this group; they were professionally recruited and of mixed genders.
  • Ubuntu users: those who use the Ubuntu OS on a daily basis. They came from our Ubuntu users database pool and were recruited via email.

What were the three main types of user behaviour found?

The Power of Images

“I go straight to the pictures before I go to the words. You look at pictures and they give you a flavour of what it is all about.” (P3)

“I use images to decide on a product. I tend to work very visually. Sometimes it is not easy to understand the jargon, and it is much easier to see what it is like.” (P6)

“I’m just looking at the picture to see how much learning is required.” (P10)

During the sessions, we observed that participants appeared to rely heavily on images to help them form an opinion about Ubuntu. They used images in multiple ways throughout their interaction, including:

  • To understand what the interface is about or make sense of an unfamiliar concept/feature
  • To decide whether or not it looks easy to use
  • To compare it with what they are currently using, and to see how much learning it may require

Images are therefore a powerful communication medium for us to build a positive brand with our users.

Take away:

It is important that images are relevant to their context and offer the best presentation of the product. We should use images to reflect the user friendliness and uniqueness of Ubuntu.

The Journey of Persuasion

“When I first came to your site, you need to tell me why I want to use this. This is paramount.” (P2)

“It (the site) needs to highlight what I don’t know. Why I should use it, and with examples.” (P5)

When participants first landed on the homepage, they expressed their need to be informed about what Ubuntu does, who it is for, and why they should use it. They wanted to be convinced from the very start.

During the exploration process, when they were looking at Ubuntu pages, participants were attentive to the apparent benefits Ubuntu could offer to satisfy their personal needs. They relied on concrete examples and statistical figures to establish and enhance their understanding and trust. They also enjoyed browsing through different quotations from our users.

Take away:
The persuasion process should start from the moment users land on our homepage and continue until they leave the site. The key proposition messages should be specific, apparent and repeated throughout the user journey.

Make Use of Opportune Moments

“It says free upgrade for life, that’s good. Built in security, that’s good. Thousands of apps, that’s good too. I want to click on these to find out more.” (P3)

Our website has many good design features that grabbed participants’ attention straight away, for instance the image tiles for ‘Reasons to love Ubuntu’ and the use of bullet points to outline essential information about Ubuntu’s main features. When participants encountered design features or content they found interesting, they often wanted to click an icon or topic to explore further, and they were disappointed or even frustrated if these were not clickable.

Take away:
We should make use of these opportune moments to keep users engaged and informed by providing efficient and desirable navigational paths to lead them to more detailed and relevant information.

What’s next?

The web team has been carrying out changes in response to the user testing results. The aforementioned user behaviour findings will feed into the next web design cycle to help with the design decisions. This will help users to get even more out of their visits to Ubuntu.com.

Amritpal Singh Bhachu

Back to Lecturing for the day

In my last post, I spoke about my transition from academia to industry. One thing I felt I would miss was the opportunity to speak to students and watch their progression throughout the year. So when I was asked to go back to the University to give a talk, I jumped at the chance.

So I prepared what I was going to talk about and set off to the School of Computing at the University of Dundee to meet these talented students. My first job was to help assess the group pressure projects the students had been set the week before; the theme was educational games. Over the next two hours, I sat and was amazed by what the groups had produced in such a short period of time.

The Winning Group with their Ubuntu prizes

Several things frustrated me, however.

Each group had three minutes to present their game and explain what they did, but they all focussed on showing gameplay and illustrating some of the code they had used. A number of groups stood up and said they felt their game wasn’t very good because they didn’t have strong coders in their team. Yet when I asked them about the processes they had been through before coding, they all showed evidence of brainstorming, wireframing and design. My biggest issue was that most of the groups had started coding before they considered who the user would be, so they fitted a user to the code rather than producing code for a specific user.

This led me to change what I wanted to talk to them about, and I ran an interactive session with the 80-odd students to develop a user profile for the remit they had been given. We looked at who the user group was, what the characteristics of this user were, where they would want to play the game, why they would want to play it, and how they would play it. We brainstormed on a whiteboard and agreed which attributes to keep and which to remove. This was all done in half an hour. The students really took on board the importance of considering the user, and how quickly it can be done for the projects they will be presented with as their education continues.

It was the most enjoyable lecture I had ever given, and I look forward to doing it again soon.

On another note, later that evening I made my triumphant return to the land of stand-up comedy. I was invited back to do Bright Club Dundee, having performed last year. It was great fun, even though I don’t think I’ll be looking at a change of career anytime soon! Below is a photo of the performers… you can quite clearly see the fear in our eyes!

Bright Club Dundee Performers

If you want to see my set (which contains strong language and little humour) then follow this link.

 

Paul Sladen

Normally the Ubuntu/Canonical Design Team are busy working on our own projects, so it makes a really good change to work on other Free Software design problems. Starting at 22:00 UTC (15:00 PDT) on Monday 7 May 2012, you can join us for the next Ubuntu Design Theatre at the Ubuntu Developer Summit in Oakland, California:

Bring your design issues along and let’s see how we can improve them! There should be visual designers, user interface designers, brand designers… and the many other people who work to make users’ lives better with Free Software.

Paul Sladen

Some of the original sketches for Ubuntu Arabic are about to go on display in Berlin! We’ve talked before about the work done by Rayan Abdullah on drawing and designing the original calligraphy behind Ubuntu Arabic for the Ubuntu Font Family, and from tomorrow you will be able to see that work for yourself.

Until 27 May 2012 you can see some of those original sketches and designs featuring in the Typobau exhibition at the Körnerpark Gallery in Neukölln, Berlin.

It includes many of Rayan’s design projects from the last decade, including the Bundesadler (the Federal Eagle of Germany) and his many Arabic graphic design and typography projects, among them the logos and typefaces for Burberry, McDonald’s, Nokia Pure Arabic, and the Arabic script coverage of the Ubuntu Font Family.

For keen visitors, the grand opening is this week, at 19:00 on Friday 20 April 2012. Or for anyone visiting Messe Berlin in May 2012 for Linuxtag 2012 you will still be able to catch the exhibition. Just take the S-Bahn ring anti-clockwise to S-Neukölln and see Ubuntu and Rayan’s exhibition at the same time as Linuxtag!

The “Typobau” exhibition runs from 21 April 2012 to 27 May 2012, 10:00–20:00, Tuesday–Sunday, at Körnerpark Galerie, Schierker Strasse 8, Berlin-Neukölln.

Mika Meskanen

Ubuntu and Canonical had a very strong presence at this year’s Mobile World Congress in Barcelona. The main attraction was our Ubuntu for Android prototype, announced just a week earlier. The beautiful cubic pavilion also housed the Ubuntu TV demo, Ubuntu One, and our established desktop and cloud offerings. The booth attracted a constant flow of curious visitors from all walks of life: media, industry people, businessmen, technology enthusiasts, students and… competitors.

John Lea, Oren Horev and I from the Design Team joined Canonical sales, business and technical staff in this bold effort. In addition to running demos and having interesting conversations with visitors to the booth, we had the opportunity to explore the endless exhibition halls and floors of the conference and research what makes the mobile world tick at the moment.

If MWC 2012 had to be summarised in one tagline, most would probably agree that it was one massive Androidfest.

Google’s recently upgraded operating system was simply everywhere. Spearheading the Android avalanche were the latest generation of supermobiles – every device manufacturer was showing off its versions of quad-core, high-definition, 4G/LTE smartphones and tablets bumped up to the latest specification.

Bells and whistles ranged from glasses-free 3D displays to Dolby sound to watertight casings – demonstrating that OEM customisations go beyond branding and skinning the interface.

Google themselves hosted an extensive Android area that was more like a theme park than a typical business affair: fans and passers-by were treated to a smoothie bar, a tube slide (presumably an homage to the Google offices) and a grab-a-plush-Android game – and lucky ones could have their Nexus phones pimped up with Swarovski crystals assembled by an industrial robot.

In stark contrast to Google’s rather playful attitude towards their ecosystem, the manufacturers were more poised for flexing their technological muscle. The impending hockey-stick curve of serious mobile computing power seems to all but prove the concept behind Ubuntu for Android. The phones of the near future are going to effortlessly run desktop and mobile operating systems simultaneously, and those extra cores can do more than just keep your hands warm in your pocket. Similarly, in our hands-on testing, the demoed 4G/LTE connections were lightning fast, signalling that accessing your cloud and thin client applications from a phone running a full productivity desktop can shift the paradigms of your mobile working life.

While this year’s congress was overrun by Android, it will be interesting to see whether this is repeated next year, when we can expect to see the effects of Google’s Motorola acquisition and the impact of Windows 8. The latter had reached the Consumer Preview stage and was presented in a separate session outside the main exhibition.

Most of the manufacturers had the odd Windows Phone in their inventory, but its marketing was essentially left to Nokia, who occupied a substantial exhibition floor not far from us. The newfound underdogs were quite upbeat about their Lumia phones and 41-megapixel cameras, and the staff were very approachable in their stripy Marimekko shirts and funny hats.

In one of the quieter affairs, the Nokia Research Centre demoed an indoor positioning system that promises 30-centimetre accuracy and will presumably land in a Bluetooth standard in the near future, enabling a range of user experience scenarios for malls, airports and the like. Affordable Asha phones and Nokia Life for emerging markets were featured as well.

Aside from phones, there were a number of smart TV upstarts. We saw a few demos built on old versions of Android, where a phone interface jumped onto the screen as soon as the user left the home screen. A more captivating demo came from the Korean company Neo Mtel, who showed off a UI with lots of lively widgets and affectionate animations. They also had a tablet-based “second screen” to complement the product vision.

Perhaps a little surprisingly, Opera (of the Opera browser fame) showcased a TV platform based on web technologies.

In Hall 7 we also had the pleasure of having Mozilla as our next-door neighbours. They had set up a nice lounge where people could try out the latest Firefox browser for Android phones and tablets. The Boot to Gecko initiative had matured into the Open Web Device together with Telefonica, resulting in a working demo of a phone OS based entirely on web technologies, with APIs to talk to the handset’s camera, sensors and telephony software, for example. It was also interesting to exchange thoughts on open-source design and development with the fine Mozilla employees.

Meanwhile, there were some interesting evolutions in device form factors to be discovered. Samsung exhibited a 10-inch Galaxy Note tablet with Adobe Photoshop Touch and a very precise, responsive drawing stylus. With the exception of tactile feedback, the experience is closing in on that of pen and paper – and for many, the benefits of digital malleability can outweigh the constraints of analogue tools.

Notepad-sized phones run parallel to this trend. The Galaxy Note phone got a rival in LG’s 5-inch Optimus Vu. Both devices channel the passport-size Moleskine or Muji notepad and flaunt oversized screens and stylus input. To prove the point, Samsung had brought in a bunch of portrait street artists to capture the likenesses of volunteering visitors on these polished pixel slates.

The requirements of pocketability and one-handed use have caused many (starting with Apple) to overlook this emerging form factor, but not everyone keeps their mobile in their pocket, and many use their phones with two hands anyway. It will be interesting to see how the notepad phones fare in the market and what kind of UI patterns will prevail there.

Last, but not least, the PadFone from ASUS is a very interesting play on device convergence, and as such it resonates with Ubuntu for Android. The PadFone is a smartphone that docks into a tablet shell and instantly becomes a tablet; the tablet, with the phone inside, can then be docked into a keyboard, turning the device into a laptop. While some clunkiness with the hardware remains, the user interface seems to transition from phone to tablet seamlessly and in a snap. There is less wow in the tablet-to-laptop transition, where just a mouse pointer is added into the mix; since Android is designed for touch this is no surprise, but there is some added value in having a physical keyboard for typing.

Amidst all the sensory overload, and throughout the four days of the congress, the Ubuntu booth felt like an oasis of good vibes. The interest and support from the people we encountered was really encouraging and heartwarming. Hands-on videos from the booth went viral across the internet, and many said that Ubuntu for Android was the highlight of Mobile World Congress 2012.

Visit the Ubuntu for Android site for more…

Charline Poirier

Every three months, I conduct benchmark usability testing. I call these sessions ‘benchmark testing’ because their aim is to measure our progress towards achieving a great user experience with Ubuntu. The last round of testing took place in October 2011, and I am now preparing to test 12.04 a couple of weeks from now.

When I publish the results of usability testing, I get many questions about my process. So I thought the best way to explain how I approach usability would be to take you along for the preparation and execution of my benchmark testing. Over the next month, I will take you step by step through my process: from recruiting participants, to writing a test protocol, to conducting and analysing usability sessions and writing up the results. This will afford you the possibility of ‘accompanying me’, so to speak, and of conducting usability testing in parallel, if you are so inclined.

For this post, I walk through the first stage of any testing: recruiting participants.

Recruiting

This is a crucial part of any successful and meaningful testing. Some argue that just anyone you can get hold of will do. This attitude, in my view, puts the software before the people who will use it, and carries the implicit assumption that software, by its very nature, is usable. But the simple fact, which we all really know, is that it isn’t. Take music players, for instance. The challenge for this type of software is to fit into the lives of people who want to listen to music. It doesn’t have to work well for those who don’t listen to music but who are, for instance, heavily into photo editing. In short, testing your software with your grandmother or your partner might not provide all the feedback you need to create a user-friendly product if they are not engaged in the activities your software is meant to facilitate.

So, the basic idea is:  in preparing the testing, recruit the right people. The type of participants you work with will determine the quality and reliability of the results you get.

There are some basic rules for writing a screener questionnaire.

Rule 1:  Recruit according to your testing goals

Is your goal to test, for instance, adoption: that is, how new users respond to your software the first time they encounter it, and how delighted they are by it? Alternatively, is your goal to test learning: how easily a novice can figure out how to use your software, and how they progress over time? Or are you really interested in expert usage: how well your software performs in a specific context of use involving expert tasks? There are, of course, other scenarios as well. The point here is that you need to be clear about your goal before you begin.

With Unity, we have 2 basic goals: 1) adoption: we want to know how easy to use and attractive Unity is to someone who has not encountered it before; and 2) expert usage: we want to know how well Unity performs for highly competent users who are fairly familiar with it.

Given these very different goals, I will need to conduct 2 different user testing sessions with different recruiting screeners or questionnaires, and different protocols.

In this post, I concentrate on my first project: testing for adoption.

Rule 2:  Know your software

You need to review your software carefully:  you need to (1) identify the main purpose of the software and the activities or tasks that it is meant to facilitate; and (2) identify where you think potential usability weaknesses are.

When I prepare a usability test, and before I even think about recruiting participants, I spend a significant amount of time trying out the software, and even more time discussing with the designers and developers their own concerns.  From this evaluation of the usefulness and usability of the software, I’m able to sketch a profile of participants.  Bear in mind that, given my goals as set out above, the participants will need to be able to use the software right away even if they’ve never used Ubuntu, since I am not testing for learning.

Given what Unity aims to allow users to do, we need to confirm (or not) in the testing that Unity users can easily get set up for and can conduct at least the following activities:

  • writing, saving, printing documents
  • finding, opening applications
  • listening to music
  • watching a movie
  • managing and editing photos
  • customising their computer: organising icons and short-cuts and changing settings
  • browsing the internet
  • communicating

Additionally, the OS should make it easy for users to:

  • multi-task
  • navigate and use special features like alt-tab
  • be aware of what’s going on with their computer
  • create short-cuts
  • understand icons, notifications and generally the visual language

In this instance, I also want to test the new features we have designed since 11.10.

Given my goals, my recruitment screener should be written in a way that will provide me with participants who engage in these activities on a regular basis.

Rule 3: Make sure you have an appropriate number of participants, with an appropriate range of expertise and appropriately different experiences

I’ve often heard it said that all you need is a handful of participants – for example, 5 will do.  While this may be true for very specific testing, when your participants come from a homogeneous group (for example, cardiologists, for testing a piece of cardiology software), it is not true generally.  Much more often, software is meant to be used by a variety of people who have differing goals, and differing relevant experience and contexts of use.

You need to take these into account for 2 purposes: 1) to be able to test the usefulness and appropriateness of the software for different users; and 2) to be able to assess the reasons and origins of any usability problem that you find – these can be explained by comparing differences between users. A usability problem will have a different design solution if it is created by a user’s lack of expertise than if it is created by a shortcoming of the software that stumped all user groups.  It will also help rate the severity of the discovered problems.

Some of the factors that competent recruiting will take into account are:

Different levels of expertise: for example, in the case of photo-editing software, you probably need to assess the ease of use for people who have been editing their photos for more than 5 years, and for those who have been editing for less than 1 year. Expertise can be reflected in the length of time they have been engaged in the activity, and also in the complexity of their activities. You may want to recruit people who do basic editing, like eliminating red-eye, and then compare their use of your software with that of people who do special effects, montages, presentations and the like. This way, you get feedback on a wide range of the software’s features and functionality.

Different kinds of uses:  potential users will have different needs and different potential uses for the software.  For example, if the software is healthcare related, it may well be used by doctors, nurses, radiologists – and sometimes even patients.  It is useful, when considering recruiting, to include participants from these various professions and other walks of life, so that you will be able to determine how well your software serves the range of needs, processes and work conditions represented by the likely (range of) users.

Different operating systems: you may want to select participants who use, at least, Windows, Mac and Ubuntu. Users who are new to Ubuntu have acquired habits and expectations from using another OS, and with time these habits and expectations become equated with ease of use because of their familiarity. Recruiting participants with different habits and expectations will help you to understand the impact of those expectations, as well as receptivity to innovation.

Recruiting your participants with precision will allow you to understand the usability of your software in a complex and holistic way and will dictate more innovative and effective design solutions.

Keep in mind, however, that the more diverse the kinds of people you envisage as primary users of the software, the larger the number of participants you will need. You should recruit at the very least 5 similar participants per group – for instance, in the healthcare example, at least 5 doctors, 5 nurses and 5 patients.

A few more things to consider explicitly putting into your questionnaire/screener, particularly if you are writing it for a recruiting firm:

It is advisable to have a mix of male and female participants;

Participants from different age groups often have different experiences with technologies, and so you should include a good mix of ages;

The perceived level of comfort with a computer can also help the moderator understand the participant’s context of use.  A question about how participants assess themselves as computer users can very often be helpful;

You should always add a general open question to your screener to judge how easily the potential participant expresses ideas and points of view. The moderator depends on the participant to express, in quite a short amount of time, the immediate experience of using the software, so being able to understand the participant quickly and precisely is vital to obtaining rich and reliable data. The person doing the recruitment needs to be able to evaluate the communication proficiency of the potential participant.

Rule 4: Observe the basics of writing the recruitment screener

The most reliable way to obtain the desired participants is to get them to describe their behaviours, rather than relying on their judgement when they respond to the screening questionnaire. For example, if you want a participant with good experience of photography, instead of formulating your question as:

Question:  Do you have extensive experience in photography?

Choice of answers:

Yes
No

You should formulate your question in a way that verifies the person's actual level of familiarity with photography:

Question:  During the last 6 months I have taken:
Choice of answers:
Between 20 and 50 photos a month [Recruit]
Fewer than 20 photos a month [Reject]

By matching potential participants to actual behaviours you can make a reasonable guess – here, for example, that someone who has been taking 20 to 50 photos a month for the last 6 months is indeed competent in photography – whereas if you rely on the person's own assessment that they have extensive experience, you can't know for sure that they are using the same criteria as you to evaluate themselves.

Your screener should be created from a succession of questions representing a reasonable measure of familiarity and competence with the tasks you will test in your software.

That said, your screener should not be too long, as the recruitment agency personnel will probably spend no more than 10 minutes qualifying each candidate they speak with on the phone.  At the same time, you need to ensure that you cover questions about all the key tasks that you will ask participants to perform during the test.

Summing up

Let me sum up the basics I've just covered by showing you the requirements I have in my screener for testing the ease of use of Unity by general public users not necessarily familiar with Ubuntu; a small sketch of how these rules might be checked in code follows the list. They include that:

  1. there should be a mix of males and females;
  2. there should be a variety of ages;
  3. participants should not have participated in more than 5 market research efforts (because people who regularly participate in market research might not be as candid as others would be);
  4. there should be a mix of Windows, Mac and Ubuntu users;
  5. participants should:
    • have broadband at home (being an indicator of interest in and use of computer during personal time);
    • spend 10 hours or more per week on computer for personal reasons (which shows engagement with activities on computer);
    • be comfortable with the computer, or be a techy user;
    • use 2 monitors on a daily basis to carry out a variety of activities online (I want to test our new multi-monitor design, and part of the designs I want to test relates to managing documents, photos, music and so forth, so I want my participants to be familiar with these activities already);
    • use alt-tab to navigate between applications and documents (another feature I intend to test for usability);
    • have a general interest in technologies (I want to make sure that their attitude towards new technologies is positive, so they are open naturally to our design);
    • express ideas and thoughts clearly.
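
To make the list concrete, here is a minimal, hypothetical sketch of how the per-candidate rules (3 and 5) could be encoded as a recruit/reject check over screener answers. The field names and data format are my own illustration, not part of any real recruiting tool; rules 1, 2 and 4 are quotas across the whole group of participants, so they cannot be checked candidate by candidate.

    # Hypothetical screener check in Python: field names and thresholds
    # mirror the requirements above; the data format is invented.
    def screen_candidate(c):
        """Return 'Recruit' or 'Reject' for one candidate's screener answers."""
        rules = [
            c["market_research_count"] <= 5,              # rule 3: not a serial respondent
            c["has_home_broadband"],                      # rule 5: broadband at home
            c["personal_computer_hours_per_week"] >= 10,  # personal engagement
            c["comfortable_with_computer"],
            c["uses_two_monitors_daily"],                 # multi-monitor designs under test
            c["uses_alt_tab"],                            # alt-tab behaviour under test
            c["interested_in_new_technologies"],
            c["expresses_ideas_clearly"],                 # judged from the open question
        ]
        return "Recruit" if all(rules) else "Reject"

    print(screen_candidate({
        "market_research_count": 2,
        "has_home_broadband": True,
        "personal_computer_hours_per_week": 14,
        "comfortable_with_computer": True,
        "uses_two_monitors_daily": True,
        "uses_alt_tab": True,
        "interested_in_new_technologies": True,
        "expresses_ideas_clearly": True,
    }))  # prints 'Recruit'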

 

In closing, let me add that testing with friends and relatives is very difficult on many levels.  First, you can't ask all the questions you need to: there are many 'common understandings' that prevent the moderator from asking the 'basic/evident/challenging' questions that might need to be asked.  Second, participants might not be sincere or candid about their experience: someone who knows you and understands your commitment to the software might not express what they really think; they may fail to identify problems they are experiencing and thus minimise the impact of a usability issue, or even take the blame for it.  Third, of course, they might not fit the recruitment screener as precisely as they should.

Feel free to use this screener to recruit participants if you would like to conduct testing sessions along with the ones I will be doing at Canonical.

In a couple of days, I will write a blog post about writing the protocol for this round of testing  – which is the next step you’ll need to take while you’re waiting for participants to be recruited.

Read more
Paul Sladen

Cyrillic font fix!

Amélie Bonet at Dalton Maag has drawn up redesigns for a number of the Cyrillic and Serbian/Balkans characters that weren't as clear or as ideal as they could have been. If you use these characters, please help by giving feedback on whether the suggested improvements are sufficient, or whether they could be improved further. For Greek, there is also a proposed fix to the monospace Gamma.

Many thanks to those who reported the original bugs in the Ubuntu Font Family. We have tried to follow up on the original reports at Blog Russia and at Opennet.ru (thank you also to everyone on the #ubuntu-rs IRC channel).

Please comment directly on the bug reports. You can use your own language if that is easier (e.g. Russian, Serbian, English, Greek…). Thank you very much!

Read more
Paul Sladen

UbuntuBeta Arabic in print

A beta of the Ubuntu Font Family's Arabic has appeared in print, as part of the testing and debugging process for the Arabic coverage. The Arabic script support will cover Arabic, Urdu, Pashto, Kashmiri and other written languages that use the base Arabic script.

The magazine is an intriguing tri-lingual production published by the Cultural Office of Saudi Arabia in Germany, with the layout prepared by Professor Rayan Abdullah's team at Markenbau. The magazine starts with German and English articles using Latin script from one cover (reading left-to-right) and articles written in Arabic from the other cover (reading right-to-left).

Ubuntu Arabic now has horizontal, instead of diagonal, dots

Following on from the recent posts about adding Kashmiri/Pashto ringed characters and the Arabic update from the start of 2011, the most significant change to highlight is that the diagonal dots (ʾiʿǧām / إعجام) have been changed to a horizontal layout.

The resulting arrangement is now closer to an equilateral triangle, and the dots closer to a circle.

(Thank you to Abdallah, Björn Ali Göransson, Chamfay, Masoud, Muhammad Negm, Nizarus, Reda Lazr and others who each took the time to comment and give feedback about the earlier diagonal dot angle).

Read more
Charline Poirier

Recently we hired an external consultant to compare the usability of 2 email clients: Thunderbird and Evolution. I have taken some highlights from the report to compose this blog post.

Setting of the usability session

The sessions took place in early June at the Canonical Office in London. Thirty participants were recruited. All of them used at least 2 email clients.
Methodology

One email account was set up in preparation for the sessions; all users were asked to use this account’s details to set up the email package. Days prior to the testing, messages were sent to this account and it was also subscribed to a mailing list, in order to ensure a realistic influx of emails to this Inbox.
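
(As an aside, this kind of inbox seeding is easy to script. The sketch below is a minimal illustration using Python's standard smtplib; the account, server and credentials are placeholders rather than the actual test set-up.)

    # Minimal sketch: seed a test account with messages before a session.
    # Host, addresses and credentials below are placeholders.
    import smtplib
    from email.message import EmailMessage

    TEST_ACCOUNT = "usability.test@example.com"  # hypothetical test account
    SUBJECTS = [
        "Meeting notes from Monday",
        "Holiday photos",
        "Re: expense form (attachment)",  # stands in for the attachment email used in the tasks
    ]

    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("seeder@example.com", "app-password")  # placeholder credentials
        for subject in SUBJECTS:
            msg = EmailMessage()
            msg["From"] = "seeder@example.com"
            msg["To"] = TEST_ACCOUNT
            msg["Subject"] = subject
            msg.set_content("Seed message to make the Inbox look realistic.")
            server.send_message(msg)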

Half of the participants interacted with Thunderbird and the other half with Evolution; Thunderbird 5.0 build 1 and Evolution 3.1.2 were used on a desktop machine running Ubuntu 11.10.

During each 60 minute session, participants were asked to:

  • set up the relevant email package for an existing account;
  • create a signature;
  • compose an email and change font colour;
  • manage emails in folders;
  • locate a specific email containing an attachment (sent prior to the session);
  • respond to an email containing an attachment and send the latter back to the sender;
  • create a contact list and send a message using contacts.

Highlights of the Report

What Participants Liked

Thunderbird

  • Straightforward and familiar set-up
  • One-click Add to Address Book feature in email preview and window
  • Window/message tabbing system
  • Familiar, intuitive language
  • Quick search that meets expectations

Evolution

  • Useful guiding steps in mail configuration assistant
  • Intuitive contextual menu option to Add Contact in email preview and window
  • Menu items easily accessed as alternative to button shortcuts

Both

  • Both were seen as having a familiar layout
  • Both met expectations in terms of generally intuitive access to contextual menus
  • Both provided intuitive access to search facility


Where Participants Struggled

Thunderbird

  • Confusion over search use (severe)

Users were confused by the existence of two search fields, often opting for the All messages search box, as they intuitively saw it as highest in the hierarchy. This choice often resulted in disappointment, as users did not expect to be taken away from the folder they were searching in; in addition, they found the search results confusing and inefficient, reporting that they expected the item they were searching for to be more easily visible.

Participants were further frustrated by the fact that if they had misspelled an entry, or their search returned no results, they would not be aware of this until taken to the search results tab, which they saw as a frustrating waste of time.

  • Difficulty locating and managing folders (severe)

The majority of participants successfully created folders, either by right-clicking in the folder area or using the New functionality in the top menu. However, most of these users were unable to easily locate the folder they had created, or to move it once they had located it. This was because they did not realise they were creating subfolders; unless the parent folder already contained folders and was expanded, users did not notice the expand icon next to it and bypassed it. Finally, once they found the created folder, users attempted to relocate it to the desired place; the majority failed to do this successfully.

  • Difficulty personalising message text (mild)

More than half of users struggled to find the function to change font colour; the majority looked for this in the message text toolbar, bypassing the colour functionality because they expected the icon to look different. Users eventually found it with the help of tooltips, but only after looking through all toolbar and top menu options first. Participants attributed the issue to the icon being black and therefore too subtle; they mentioned preferring a more colourful icon or one resembling a palette.

  • Unclear server options (mild)

Participants reported liking the apparent ease of setting up, but most were confused by the server options provided in the second, and final, step. About half reported that they would navigate away from the window and research their options further, with the rest either ignoring this message and going with the IMAP option already selected, or choosing the POP option, which caused them some issues finding emails later on. The majority of users said they would prefer helpful information and guidance on the options within the set-up screen, to avoid having to navigate away or remain uncertain.

  • Difficulty finding and personalising signature (mild)

The majority of participants were unsure where to find the signature functionality, with most expecting it to be either in the main toolbar, the message toolbar or the Message menu section. Most participants were unable to find this feature on their own: they either looked it up in help or reported that they would ask a friend.


Evolution

  • Longwinded, unexpected set-up (severe)

Despite appreciating the guiding steps outlined in the mail configuration assistant, the majority of participants felt that this process was unexpectedly long, and found the options provided very technical and confusing. For the majority of users, this came to a head at just the second step, where they thought they were being asked to retrieve backed-up files rather than being offered the option to set up this feature. Some users failed at this point, reporting that they were confused and would revert to the email set-up they currently had.

  • Locating account email (severe)

The majority of participants had difficulty initially locating their account email, because the folder list displayed both an Inbox and an account-specific email section. Most participants did not notice that the account email area was collapsed, and were confused that the 'Inbox' shown at the top of the folder list contained no messages. Users attempted to view the account Inbox by selecting Send/Receive and then clicking through all the folders available. Eventually they noticed the email account folder with the expand icon next to it and accessed the account folders that way. This experience caused great alarm, particularly as it came at the very beginning of interaction with the system; as a result, many reported a loss of trust in the package and said they were considering ending its use.

  • Unintuitive message search (severe)

As discussed, search was intuitively used by participants to quickly find a required message in a large Inbox. Many participants failed to find the required search results because they carried out a search unaware that they had selected an irrelevant folder. This resulted in no results being returned, and users were confused because they had expected to be able to search all folders.

  • Once email opened, difficulty getting back to Inbox (severe)

Half of participants naturally double-clicked to read an email in more detail; in Evolution, however, this resulted in the email opening over the main Inbox window, hiding the email list. Participants were confused by this and struggled to get back to the message list; the majority reported looking for a button or link back to the Inbox, and were extremely wary of closing the window (either via the buttons or the menu items) because they were nervous about potentially closing down the entire email application.

  • Inability to personalise email text (severe)

Almost all participants were unable to personalise message text in Evolution; they expected access to font colour to be available alongside the other font toolbar options, and entirely bypassed the HTML option. One participant selected HTML and still missed the font colour option. Participants were very disappointed by the apparent lack of this feature and looked through all toolbar and top menu options for access to it.

  • Despite long set-up, confusion over lack of password request (mild)

In addition to finding the Evolution email set-up longwinded, participants were confused as to why it had not asked them for the account password. The majority saw this as a frustrating waste of time, particularly as they were asked for it separately, after their email had been set up.

  • Difficulty locating and managing folders (mild)

As with Thunderbird, the majority of participants successfully created folders, though here mainly by using the Folder functionality in the top menu. All participants expected to be able to create a folder by right-clicking in the folder area, yet only a few right-clicked on a folder to look for this functionality. Despite being able to create folders using the top menu, users were disappointed by the lack of quicker access to this feature in the folder area (either by right-clicking or with the use of a button) or via a button in one of the top toolbars.


Usability Issues Common to Both

  • Difficulty finding and personalising signature (mild)

As with Thunderbird, the majority of participants were unsure where to find the signature functionality, with most expecting it to be either in the main toolbar, the message toolbar or the Message menu section. When users found the Signature option in the message toolbar, they were very frustrated that it did not provide a shortcut to signature creation. Most participants were unable to find this feature on their own: they either looked it up in help or reported that they would ask a friend.

When they were taken to the signature feature, users were again frustrated that they could not find a font editing facility, despite the interface looking as though it should allow for this.

Conclusions

As discussed, users gave both positive and negative feedback on their interactions with Thunderbird and Evolution, with Thunderbird consistently perceived as easier to use and more fit for purpose than Evolution.

Thunderbird was widely liked for its perceived straightforward set-up and for easy access to the contact-saving, search and window features. In addition, users commented on the familiar language used in the application.

However, participants encountered a few severe issues which tarnished their image of the system. These consisted of extreme, at times show-stopping, difficulty with:

  • successfully understanding and choosing the relevant search field to use;
  • locating and managing the preferred location of folders.

Finally, these users encountered some lack of clarity over server options in set-up and frustration at the inability to easily format email text.

Participants who interacted with Evolution liked the guiding steps in the mail configuration assistant, the intuitive contextual menu options to add contacts, and the ability to easily access menu alternatives to button shortcuts.

Users reported multiple severe issues with Evolution, around set-up, locating account email, message search, formatting email text and navigating back to the Inbox. All of these issues were so major that users encountering them reported a loss of trust in the Evolution package and a reluctance to continue using it.

One major fact to keep in mind is that, as the majority of participants were new to Ubuntu, they saw the email application they used as representative of the operating system. This is particularly pertinent for an email client that ships as the system default: before either of these products is chosen for that purpose, the severe issues reported here should be addressed.

Read more
Charline Poirier

First and last impressions of Unity were that it was quite user-friendly, and pleasing in its design and ease of learning. The majority of participants left the session with very positive feelings and were looking forward to Unity’s release so they could download it. In short, participants in this testing session were considerably more positive about Unity than participants who tested the previous version in October.

This improvement, no doubt, is due to the significant changes we have made since the last testing, often in response to problems uncovered then. Many of the serious issues discovered at that time have been resolved. Most significantly, as it stands now, there are no longer any "show-stoppers".

However, there are still a few interactions that were at odds with the product’s general ease of use.

Some important points to keep in mind

First, those of our participants who were Mac users seemed to have more facility with the Unity interface than Windows users did, especially users of anything earlier than Windows 7. Generally, Windows users tended to rely on right-click, and they sought menus from which they could find and launch applications as well as move and delete. They did not immediately take advantage of Unity's visual assets. Accordingly, Windows users will need to be encouraged to manipulate icons and to develop a more physical relationship with Unity than the more text-heavy relationship they have with Windows.

Second, Unity's concept of 'Home' (the Nautilus file manager) is different from that of our users, even Mac users, and they did not immediately understand it. They had a tendency to go to the 'Home' icon not only to find information about their computer, but for any programme or application they were looking for. Essentially, many navigated from one application to another using 'Home'. For example, almost every participant first looked in 'Home' to find computer settings and to change their wallpaper.

Third, most participants were not able to figure out how to reveal the Launcher from the upper left corner. They immediately devised work-arounds, like closing windows or moving a window away from the left edge of the screen. They expected to be able to reveal the Launcher by approaching any point along the left side with their pointer. As the Launcher is one of the most important features of Unity, it should be either always visible or at least very easy to bring out.

Fourth, the Dash is hard to discover. The icon is too small and understated compared to the icons in the Launcher. By its size and placement, it is easily associated with the window management buttons. Participants who discovered the Dash found it very useful, but were more inclined to use Files and Applications Lenses at the bottom of the Launcher. This was, I’m convinced, partly due to the fact that there were no data, pictures, music or documents on the computer that they would want to access through the Dash at the time of the testing, whereas the Applications Lens, in early use, is more adapted to general exploration. The Dash needs to be more visible — it needs to be accorded its rightful place as a major feature of the interface.

Notwithstanding these small problems, it is fair to say that this test showed that we have made significant progress since the October testing.

Some Major Issues that Have Been Resolved since October

Visibility of icons at the bottom of the Launcher

During the April testing, participants experienced difficulty seeing the bottom of the Launcher when it was accordioned and then, when the Launcher expanded, it hid the bottom icons.  At the time of the testing, it was very difficult to reveal these bottom icons even by scrolling down. Recent updates have resolved this problem by making the Launcher automatically scroll down when users move the pointer down along it.  This way, the icons that were previously hidden are effortlessly revealed.

A related issue that has also been resolved is that, during testing, participants wanted to make the Launcher visible by touching any part of the left side border – whereas, in fact, the only way to reveal the Launcher was by reaching with the pointer to the upper left corner. With the updated version, users can now reveal the Launcher from any point on the left side of the screen.

Changing the order of icons in Launcher

During the October testing, when the interaction to move an icon in the Launcher was to select it and bring it outside the Launcher before giving it a new position, many participants failed at it.  The new interaction supports users' natural way of moving an icon: participants were able to move icons in the Launcher by selecting them and moving them vertically up and down.  It should also be noted that the feedback provided when users select an icon they intend to move helps them understand that they have initiated an action.  Knowing that the icon had effectively been selected afforded them more freedom to move the icon around and to find a way to make it work.

Adding icons to the Launcher

Participants were able, even during the October testing, to drag and drop the icon of an application from the Applications Lens into the Launcher.  However, their first attempt, especially for Windows users, was to right-click on the icon they intended to move, expecting to be offered an option to attach it to the Launcher in a drop-down menu; their second was to look at the top of the Launcher for a 'Launcher menu'.

Identifying running applications

Most participants were able to see immediately which applications were running by means of the white arrows beside the icons in the Launcher.  However, they were not sure if they had made the right inference.  In short, although participants were unsure about the meaning of the white arrows and bars, they were able to figure them out, which indicates that this feature is easy to learn.

Changing the wallpaper

Most participants easily changed the wallpaper by right clicking on the desktop.

Deleting a document

Most participants easily deleted a document.

Detailed Summary of Benchmarking – Comparison of the October and April Test Results

The points above are the highlights of the findings. Let us now examine individually the differences in performance as revealed in the testing of last October and the one just completed in April.

Performance

October report:  “The level of performance in this regard significantly impaired the flow of use and the user experience.”
April testing:  Unity was quick and responsive.
Outcome:  This is fixed.

Multi-tasking: Having many items opened and accessing them

October report: “Thus, while working on a task, participants expected that Unity would provide them with a representation or visibility of what was available to them and how to easily access what they needed at any given point.”
April testing: No problems were observed with overlapping open applications and documents. Participants could easily move individual windows and reveal items placed underneath.
Outcome:  This is fixed.

General navigation

October report: “Overall, participants found the navigation to be cumbersome.”
April testing: Participants used Nautilus to find applications and documents as well as system settings. This is not, however, the most efficient way to do this.
The Files and Applications Lenses icons need to be more prominent in the Launcher.  However, participants found it easy to go from one document or window to another and to make them all visible.
Outcome:  In a recent update, the icon ‘Home’ (Nautilus) has been renamed ‘File Manager’ and the icon has been modified to downplay the home relationship.  This should help users recognise its role and lead them to look for an alternative place for system settings and other programmes and applications.

Minimising a window

October report:  “When participants minimised a document, the document seemed to have disappeared when they expected it to be shown at the bottom of the screen.”

April testing:  A few participants still expected to see a trace of their minimised document at the bottom of their screen.
Outcome:  Since the usability sessions this interaction has been updated to show the window minimised into its Launcher icon even when the Launcher is hidden.  This should help users to locate their minimised documents and windows.

Awareness of running applications

October report:  “Participants did not always see the white arrows that indicate a programme is running or documents are opened. Consequently, they were not aware of what was available to them.”
April testing:  Almost all our participants were able to tell which applications were running by looking at the white arrows. However, some were not sure at first and needed to ‘try it out’.  So they opened and closed windows and applications to check on the behaviour of the icon in the Launcher.
Outcome:  The white arrows seem to be working well once they have been discovered.  Although they are not obvious, users can figure them out.  This is easily learnable.

Displaying documents side by side

October report:  “No participant could find a way to resize his/her openoffice documents in such a way that they could be placed side by side while working on both at the same time.”
April testing:  All participants except one were able to display two documents side by side. However, as discussed below, they were not able to discover the semi-maximised state.
Outcome:  The original problem is fixed.  In the new design of Unity, participants have a way to display their documents side by side and to work on them simultaneously.  The semi-maximised state is not readily discoverable.  Unfortunately, users are not yet taking full advantage of what Unity offers.

Overview of computer

October report:  “Many participants wished they could have an overview of what resides in various parts of their computer, as is facilitated by Windows’ ‘my computer’.”
April Testing:  This is still a problem. Participants in the April sessions were still looking for a place where they could do systems setting and have an overview of their computer.
Outcome:  None of the participants discovered the ‘system settings’ option in the top right indicators menu. Users need an icon either in the Launcher or in the indicator area, or a folder in Nautilus.

  • Bug #764744 (“Add system setting icon to Launcher”)

Delete a document

October report: “Participants could not delete existing documents from their files and folders. “
April testing:  Everyone was able to delete a document that was no longer wanted.
Outcome:  This is fixed. One remaining problem is that many participants cannot see the Rubbish Bin at the bottom of the Launcher.  They used other ways to delete, like pressing the delete key.

  • Bug #764751 (“Launcher – when Launcher contained folded icons, partcipants weren’t able to find the rubbish bin”)

Copy and paste

October report: “Copy and paste from one document to another didn’t always work for participants.”
April testing:  Everyone was able to copy and paste from one document to another.
Outcome:  This is fixed.

Lack of feedback

October report: “Unity is often slow, and as a result participants tended to be confused about what was going on.”
April testing:  Overall, and as noted earlier, the performance of Unity was much better and the system responded more readily to users’ commands. Some issues remain with feedback, however, for example, with the Rubbish Bin.  Participants wanted to be alerted, either with sound or a message that their document had been moved to the Bin.
Outcome:  Confirmative feedback is necessary whenever users complete an action, like deleting a file.

  • Bug #750311 (“Launcher – When a item is deleted by dragging to Trash, the trash should pulse once before the Launcher disappears”)

Nautilus search

October report: “When searching, participants didn’t know what the field and scope were that were covered by the search engine they were using.”
April testing:  Many participants searched for applications successfully. However, there are still problems with search. Participants made inappropriate searches – for instance, searching in Nautilus for Sudoku (a search that pertained to the Applications Lens) – and did not get the results they expected.
Outcome:  This is partially fixed.  Some issues with search are related to participants' understanding of the structure of Unity. There should be some guidance hinting at the limitations of each search and thus the kind of results that can be expected from the various search boxes in the various parts of Unity.

Adding an icon to Launcher

October report:  “Many participants were not able to add a short-cut of an application to the Launcher.”
April testing:  Most participants were able, this time, to add an icon to the Launcher. Windows users, however, had more difficulties than the others did; they tended to look for options in various menus or to right-click on the icon.
Outcome:  This is partially fixed.  The interaction is quite intuitive, but some users (particularly those using earlier versions of Windows) will require more guidance.

Reordering icons in Launcher

October report: “Most participants failed to reorganise the order of icons in the Launcher.“
April testing:  A few participants experienced some difficulty reordering icons in the Launcher because they did not have sufficient feedback to understand when the icon had actually been selected so that they could proceed vertically.  Consequently, they tried to move the icon too quickly after clicking on it and the icon did not respond.
Outcome:  This has been fixed in the latest update by providing feedback on selection – the interaction shows the icon as if it was detached from the Launcher – and by allowing users to move the icons vertically within the Launcher.

Finding the Dash

October report:  “The majority of participants who found the Dash found it by accident. They were not sure what it was, and didn’t know how they had gotten there if they accidentally had.”
April testing:  Participants still cannot readily find the Dash.
Outcome:  The Dash needs to be made more visible and promoted as a major feature of Unity, on a par at least with the icons of the Launcher.

  • Bug #764771 (“The BFB is visually lost and his position does not communicate its importance”)

Ubuntu Software Centre

The Software Centre's features were not re-tested this time, because everyone agrees on its existing usability problems and its need for redesign. Nevertheless, some issues emerged in the course of testing other interactions.
April testing:  The Software Centre is still not recognized and, during testing, was mistaken for ‘systems control’.
Outcome:  The Software Centre needs to have a different look and feel and general presentation. Needs redesign.

Changing the wallpaper

October report:  “Many participants did not succeed in changing their wallpaper because the default screen of appearance was open in full screen by default.”
April testing:  Almost all participants were able to change the wallpaper by right-clicking on the desktop. Furthermore, the one participant who was able to find 'appearance' had no problem changing the wallpaper, because the screen now opens in a way that keeps the background visible.  The October usability problem was thus fixed.  However, a new problem emerged.

In the April test, the target feature was, in fact, the ease of use of the Applications Lens by means of changing the wallpaper. Most participants were not able to change the wallpaper by finding ‘Appearance’ in the Applications Lens.  They were looking for ‘system settings’ to do that operation.
Outcome: The initial problem of the appearance screen covering up the immediate change of wallpaper, and so hiding the change from users, has been resolved.  Now, by default, the appearance screen does not open full screen.  In the April test, however, users could not find 'system settings', where they expected to make these changes.  Furthermore, many participants did not think of system settings as an application and thus were not confident of finding it in the Applications Lens.  Unity needs to provide obvious access to 'system settings' and to make a distinction in the Applications Lens between applications and other programmes.

Visibility of the Files and Folders and Applications Lenses and the Rubbish Bin

October report:  “Participants thought that the grey icons at the bottom of the Launcher were inactive.”
April testing: These icons still have issues of visibility, especially when they are folded at the bottom. For example, most participants did not find the Rubbish Bin.  Another usability problem that arose from interacting with the Launcher is that some participants found it difficult to interact with the bottom part of the Launcher.  They found that it was ‘a long way to go’ to the Rubbish Bin or the Lenses when the Launcher was populated with many icons.
Outcome:  These icons still need more visibility. Changing the colour, and perhaps even changing their position in the Launcher, might help.

  • Bug #764751 (“Launcher – when Launcher contained folded icons, partcipants weren’t able to find the rubbish bin”)

Some Usability Issues that Have Arisen from Some of Our New Design

Top Menu Bar

The top menu bar is a new design, and there was some confusion about its role: participants wondered if it pertained to 'the computer' or to the application they had open at the time. When participants had many windows open, they did not understand that the bar corresponded to the selected window.

System Settings

During testing, I encouraged participants to change their wallpaper in another way than by right-clicking on the desktop, to see if they could find 'appearance' in the Applications Lens.  Finding system settings programmes in the Applications Lens is not intuitive. Most participants did not succeed in changing the wallpaper by going into the Applications Lens; they were looking for a 'system settings' icon in the Launcher or somewhere in the 'Home' at the top of the Launcher.  Those who went into the Applications Lens did not expect to see 'system settings' in that area, because they did not think of system settings as applications, and accordingly they did not explore.  No one discovered the 'system settings' option in the drop-down menu under the 'turn off' icon in the indicators menu bar.

Notification of message

This is also a new feature since the October testing.  The majority of participants did not see the notification that they had received a message: the change in colour of the icon was not noticed.  Some did notice the change in the icon in the Launcher – in this case Xchat's – and inferred, from the number that appeared on the icon, that they had received a message.  When the Launcher was invisible, however, participants were not aware that they had a message.

This said, a couple of participants saw the notification and the change in colour of the envelope in the notification area.  They had a strong positive impression of the feature.  It seems that in this case it might be a question of making the change in the notification area more prominent.

Semi-maximised state

The semi-maximised state is another new feature, and it is not readily discoverable. Only one participant discovered it – a Windows 7 user who said that the same feature exists in Windows 7. Two other participants interpreted the blue preview shadow as signalling that they were about to make a mistake or to do something not allowed by the system; the preview shadow was interpreted as a warning.  Users need both guidance and reassurance here.

We are doing better with the user’s experience and our users are closer to adoption

Overall, participants left with a strong positive impression of Unity after having tried it for 60 minutes.  Some of their closing comments:

“I like the layout and the screen (…) I want to customise it myself quite easily. It would be good to have a tutorial. (…) I like minimise and the fact that you can move things around. I like the casual font, aesthetically, it looks nice and it is easy to use. Nothing is really difficult. The important things are there and easy to use. It is nice.” [P1]
“The reason it was annoying today is because it is a new package. I like the design and layout. Design is important to me. It is quite clear. (…) “I would like more time to play around with it. It’s Ubuntu, I haven’t used it. This is new, the way I learn is by playing with it. (..) It’s good to use something that is a bit more independent. I like the idea that we can do things rather than being locked down in something more siloed like Windows or Mac. I would like to get it.” [P2]
“I prefer this set up to the start menu. I like the icons. We are a generation to see things with icons. I think there is a lot of significant gesture, like saving documents and I would not have any problem doing these activities. I really like the dragging format. I like to be able to order what I want. I think it is much easier than Windows. With Windows you have to go down menus. (…) I don’t think it’s complicated but it would take some time [to get use to it]. I’ve been working it out in an hour. It’s very user friendly. Even within the hour, I’ve learned a lot about how to do different things.” [P3]
“I really quite like it. I think it’s intuitive with the exception of the favourites, making an application a favourite. I would not be baffled to use it without a manual. I like the look of the desktop. It is modern. It looks like a Mac more than Windows. It’s quick.” [P5]
[About the Software Centre] “I didn’t anticipate to have access that easily to new apps. Also, I like the rating on the side. It’s quite helpful, I can see what I can trust. That’s quite nice.” [P5]
“It’s OK. Quite intuitive but I was going from what I know from Windows. I use the right click a lot, it’s nice to have it on the side. Generally this looks pretty good. It’s a bit more intuitive, for me, though, the right click is vital. It always brings up a good menu.” [P9]
“I think it’s very pretty, very pleasing as it were.” [P11]
“It’s quick and responsive. It’s very responsive, different from what I use, it would take a day or two to get acquainted. I wouldn’t be discouraged. I would rather spend time than pay money.” [P12]

In the summary of their experience post usability testing, participants also highlighted their main difficulties. It is meaningful that, at the end of the session, the following first came to mind:

“I don’t like the dragging in Launcher up and down. I mean I didn’t realise at first this is what I needed to do. It’s difficult to get to the Bin. It’s not easy to get to the top from the Bin, it is hard to drag things down a long way. I don’t like the dropping down.” [P1]
“My frustrations: I would like to know how to change the settings, I expect a button to change wallpaper clicking on a button right at the top. (…) The menu at the top bugged me.” [P2]
“I didn’t like when I have things minimised. There are many things I can’t do without maximising the screen.” [P3]
“It is hard to delete a file in this way. (…) You don’t find the menu bar and you don’t know what’s open.” [P4]
“I don’t know how to make the Launcher visible [when a window is opened]. I’m struggling a bit. This window [Dash] has a tendency to disappear.” [P5]
“I hated the Files and Folders, I didn’t know what it would do when I click on it, if it will open or just let me select it. I wasn’t able to select a document.” [P8]
[About the wallpaper] “I couldn’t find it. I wouldn’t have thought of it as an application for some reason.” [P10]
“I suppose my main thing is what I expected to have in terms of applications and control panel. I couldn’t find it. If I could have found this at the beginning life would have been a lot simple. I feel like I feel with Apple, I feel a bit stupid because I can’t do the things I normally do with my PC. I like things in words a lot, I like the drop down menu. This is interesting because this is generally shown with an icon.” [P11]
“I’m frustrated that I can’t find something like ‘my computer’. I want to find information about ‘my computer’ and what the hardware is, the driver versions, and I want to know if there are updates on Explorer. Here you need to go into ‘control panel’ to see if there are any updates. I still can’t figure it out.” [P12]

You can also download a PDF of the full report by clicking on this link.

Read more
Charline Poirier


I have just completed sessions of usability testing of Thunderbird.

This time, I had the pleasure of working with Andreas Nilsson, who came to London to observe the sessions. It was very useful to get his feedback and to work collaboratively with him on the analysis and implications of the findings. In addition to these benefits of our work together, there is an added one: since he observed participants struggling with certain aspects of the interface, he will no doubt be a very effective user experience advocate with his team.

Andreas, thanks for your time!

The Test

Twelve participants were recruited from the general public – one turned out to be a no-show. They represented a mix of gender and age. Special consideration was given to heavy email users. Of the 11 participants, 5 were exclusively Windows users, 3 were exclusively Mac users, and 3 used both Windows and Mac.

In preparation for the sessions, we set up 2 test email accounts. A few days prior to the sessions, I sent messages to these accounts and also subscribed them to mailing lists. When participants signed up, they had already received a sizable quantity of emails, allowing us to ask them to manage the messages the way they generally do in their own email boxes, to find specific messages, to create filters, and more.

Thunderbird was tested on Maverick and Unity.

Between sessions, Thunderbird was removed and all hidden files were deleted, so the next participant got to start from scratch.
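
(If you want to reproduce this kind of set-up, the reset between sessions can be scripted. The sketch below is a minimal Python illustration; the package name and profile paths are assumptions based on typical Thunderbird installs on Ubuntu, not a record of what we actually ran.)

    # Sketch: reset the machine between sessions so the next participant
    # starts from scratch. Package name and profile paths are assumptions.
    import shutil
    import subprocess
    from pathlib import Path

    # Remove the application package (installed via the Software Centre / apt).
    subprocess.run(["sudo", "apt-get", "remove", "-y", "thunderbird"], check=True)

    # Delete the hidden per-user profile directories so no settings survive.
    for profile_dir in [Path.home() / ".thunderbird",
                        Path.home() / ".mozilla-thunderbird"]:
        if profile_dir.exists():
            shutil.rmtree(profile_dir)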

The Methodology

Over the 60 minutes of each session, I went through as many features of Thunderbird as possible with each participant. Participants were asked to:

  • Install Thunderbird from the Software Centre
  • Create an account
  • Sign up
  • Create filters
  • Set up alerts
  • Manage emails in folders
  • Create a signature
  • Change the colour of the font
  • Create a contact list
  • Search for a specific email discussing a form (which I had sent prior to the session)
  • Respond to an email that contained an attachment: in particular, open the attachment, modify it and send it back to the original sender

What participants liked

There were many aspects of Thunderbird that participants enjoyed, and many tasks at which they succeeded.

Participants commented positively on the tab system, which makes navigation between messages easy and immediate, and which provides visibility for multitasking. The tagging of messages was also evaluated positively. Many participants commented on the simplicity and usefulness of the contacts. Filtering was perceived as effective although, as we will see below, the majority of participants experienced some challenges here.

Participants found the activities which they carry out most often – opening, reading, responding to and deleting emails – easy and straightforward.

Where the trouble is

Critical issues

Participants encountered a few critical usability issues – by 'critical', I mean issues that would make it difficult or even impossible to use the application on a regular basis. These issues need to be addressed if we are not to lose users to alternative products.

Install

After installation from the Ubuntu Software Centre, participants could not find Thunderbird to start using it. They did not see, in the product description, the breadcrumb indicating the location of the installed application, when one was provided.

Observation: After installing a new application, users are generally excited about using their new software. The user experience would flow much better if, as the installation process ends, the application opened automatically in the main window, allowing users to deal with their settings and messages right away. We need to keep users excited about Thunderbird. As it stands, it is a bit of a let-down not to be able to find the new toy!

Create folders

One of the main challenges for participants was managing their many emails by creating folders.

Most participants did manage to create a folder by right-clicking in the folder area. However, they could not find the folder once they had created it, and so couldn't drop messages into it. This was because they had in fact not created a 'folder' (as promised by the menu label) but a 'sub-folder', which was not visible because it was hidden under a parent folder.

Those participants who did eventually find the sub-folder they had created wanted to make it into a top-level folder, but were not able to do so. Users normally organise their folders in a way that facilitates their use of email, so they tried to drag their sub-folder out of the parent folder and relocate it.

There are 4 main folders they want visible: inbox, sent, junk and drafts. It is worth noting that the 'sent' folder in Thunderbird is a sub-folder of the Gmail account folder; this was confusing to participants. As a case in point, several participants failed to check whether a message they had sent me had really gone out, because they couldn't find the 'sent' folder at all.

Participants also expressed a preference for ordering their folders. In addition to the point mentioned just above, some indicated that they like to create a work folder and a personal folder. They place these folders next to each other. They were not able to do this in Thunderbird.

Observations: Users manage their mailbox by customising folders. The level of customisation they need goes beyond creating and naming sub-folders. They want to create their own hierarchy of folders and sub-folders as well as to order them for convenience and visibility.

One more thing on this topic: participants were not clear about some of the words used to describe folders. For example, they did not know the difference between a ‘folder’ and a ‘local folder’.

Create filters

Most participants failed at creating a filter.

First, they didn’t know where to look to set filters up. Most participants looked under preferences and account settings. After looking generally at the menus they gave up.

Second, participants were unsure of the meaning of the dialogue boxes and of what was expected of them. They found the process of setting a filter unduly complex and they needed more feedback to measure their progress.

After participants managed to create a filter in the filter rules dialogue box, they clicked OK but didn't know if the filter was actually set up or not. Additionally, they couldn't figure out how to run a filter they had created. The issue was that when participants came back to the message filters dialogue box after creating a filter, the filter they had just set up was not selected – thus the 'run now' option was not enabled. At the same time, the 'enabled' check-box was ticked, suggesting to them that the item had been selected.

Observation: After setting up a filter, users would like to run it to confirm that it works. Make the command ‘run now’ the next step in the process without users having to specifically select the filter to run it.

Find, open and modify an attachment

None of our participants was immediately able to find the attachment in a message. They expected the attachment to be visible at the top of the message. While most participants eventually found the attachment, some didn't, and consequently could not open and modify it.

When participants could not find the attachment, they consulted help, but it did not provide the correct information.

After some participants found the attachment, I asked them to edit it. They did not expect the attachment to be in read-only mode and tried to edit it without saving it first. The message warning them that the document was read-only appeared only after many attempts. It would have been friendlier to show the message on the first attempt.

A few participants, after they attached a document, were not clear if the document was in fact attached to the message. They needed a stronger visual cue.

Observation: Sending, finding and reading attachments are fundamental email activities. The user experience would be greatly improved if attachments were located where users expect them, at the top of a message, and/or made more visible by changing the appearance of the link or using a colourful icon. Additionally, users would benefit from immediate feedback on 'read-only' documents, as well as from a confirmation that a document has been successfully attached to an email.

In this case, for Thunderbird to be user-friendly, it would need to anticipate users' needs – mainly the needs for visibility and for feedback at the first occurrence of an error. This anticipation would show Thunderbird's willingness to collaborate with its users and to recognize their goals.

Search

Participants were unclear about the differences between the 2 search boxes at the top left of the screen.

Often they didn’t get results because the global search bar doesn’t suggest anything other than names.

Search doesn’t take into account misspellings – and so, when a word was misspelled, participants got no results.

Every time a participant performed a search, a tab opened automatically even if the search provided no results. As a result, participants opened many tabs that were not useful or wanted. They found that the tabs cluttered the interface and made it difficult to find such things as the inbox.

Observation: Users should know, before searching, what fields a search will actually cover. The area dedicated to filters was interpreted by participants as a search, not a filter. In part, the issue is that the boxes look virtually identical and thus, from the users' point of view, should be interchangeable. A different visual treatment would greatly improve the usability of the different search boxes.

Less-than-critical issues

Participants also highlighted usability issues that were not critical, but that compromised their enjoyment of Thunderbird.

Mail account setup

Participants did not understand the message contained in the mail account setup dialogue box. They had to make a choice between:
IMAP – Access folders and messages from multiple computers
POP – Download all messages onto this computer, folders are local only

Uniformly, they did not understand the implications of this choice and went for the ‘recommended option’ – just because it was recommended. Most participants said that they would not read the message anyway and would just accept and move onto the next screen.
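
(For readers who want the distinction spelled out: with IMAP, folders and messages live on the server and are visible from any computer; with POP, messages are downloaded to one machine and any folder structure is local only. The sketch below contrasts the two using Python's standard imaplib and poplib; the host and credentials are placeholders. It also helps explain the search problem reported below.)

    # Sketch of the IMAP/POP difference; host and credentials are placeholders.
    import imaplib
    import poplib

    # IMAP: folders stay on the server and can be browsed in place.
    imap = imaplib.IMAP4_SSL("mail.example.com")
    imap.login("user@example.com", "password")
    typ, folders = imap.list()   # server-side folders, the same from any machine
    imap.select("INBOX")         # read messages without removing them
    imap.logout()

    # POP: just a flat list of messages to download to this computer.
    pop = poplib.POP3_SSL("mail.example.com")
    pop.user("user@example.com")
    pop.pass_("password")
    count, size = pop.stat()     # no folders, only message count and mailbox size
    pop.quit()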

One participant chose the POP option, which caused her problems with search later.

In addition, the mail setup dialogue has a button that says "create account". This was confusing for some participants, who thought they had already created an account and were now in doubt. Some wanted to go back to the signup page to check; there is, however, no way to get back to it.

Observation: While users are setting up their account, they are most eager to get the process over with. This is partly because they want to see the application, but also because they need to see what they will get, so to speak, before they can understand the pertinence of the various options proposed to them. In this case, it is good practice to make a recommendation – which simplifies the process. However, the choice should be made clear from a user-needs perspective, so that users don't just pick what is recommended because it is simpler, without foreseeing the consequences of their choice.

Set up alerts

Two participants expected to be able to set up alerts in tools, but were not able to. Many participants were not able to find a way to set up alerts at all.

Create a signature

The majority of participants expected to be able to create a signature under 'composition'. When that failed, they looked under tools, add-ons, preferences, insert and write. Most of those who tried did not succeed in creating a signature.

Some minor issues

‘Minor’ usability issues don’t compromise the main usage of an application or the integrity of the user experience. However, they can be annoying and irritating, particularly when the application is used on a regular basis.

Change the colour of the font of their message

Most participants either failed at changing the colour of the font for all their messages or were not sure they had succeeded after selecting a colour from the palette of ‘display’ in ‘preferences’.

In part, participants could not find the option to change colours. Those who did find it saw that their new selection was not reflected in the messages they wrote just afterwards, and were left unsure whether the chosen colour would appear in their messages at all.

Observation: Many users like to personalise their communication. Playing with colours and layouts should be easier for them, with the relevant options more visible. In addition, users need feedback that their change will be implemented immediately. An 'apply' button or a confirmation that the new selection has been registered would reassure them.

Navigational issues

Participants did not know how to get back to their inboxes from either the address book or the 'write' screen. They didn't understand that, in these specific cases, new windows were opened instead of tabs, and that they needed to close them to get back to their inboxes.

And more…

Participants had further suggestions for new features. They wished for:

  • A calendar on the side so they can see messages and their commitments at the same time
  • A way to compress large files directly from the email account
  • Some social networking, at least so that they could see that their friends are online

Read more
Charline Poirier

'Appropriation' – the taking of a product and using it for one's own purposes, in ways unintended by the product creators – is implicitly at the core of the philosophy of opensource, because openness provides for change, adaptation and innovation.

Design specifications

Last year, I conducted several research projects to understand how developers work and how we can add design and a concern for user experience to their already very complex efforts.  I’ve published some of the results already, for example, those coming out of our study of Empathy.

One of my inquiries asked how developers use design specifications.  This research produced very rich results.  We realized that developers' approaches to our design specification documents varied quite a bit, and that often the documents were not as central to developers' work as we had anticipated.  So far, we've been able to characterize four different ways in which developers use our documents.  These illustrate tendencies, not necessarily rigid approaches.  Yet they help us understand developers' frame of mind when they deal with design information.

1) Some developers read specifications meticulously and try to work out what the designer had in mind.  These developers like to work closely and collaboratively with individual designers.

2) Some developers take a more organic approach to understanding specifications.  They use them in combination with current conversations on chat networks about development topics and issues, and with conversations from past strategic meetings at UDS.  They essentially treat the written specifications as a secondary resource, deferring to what their colleagues and managers say about them, and fit the specifications into a broader, dynamic context.

3) Yet other developers look almost exclusively at the screen representations in the specifications.  They try to duplicate the visual guide that accompanies the specifications, or simply compare existing features of an application to the screenshots included in the document, trying to discern similarities and differences.  Many of them use specification documents as a simple reminder of past and current discussions, and to get a general idea of what’s expected of them.

4) Finally, another group uses the “try and see” method.  These developers implement changes as they see fit and rely on their colleagues to provide guidance once the work has been done.  Effectively, they hardly consult the written design specifications at all, preferring to follow their intuition.

Research, of course, doesn’t judge what people do, because it appreciates that people do what they do for a reason.  Nor does it opine on which behaviour is best, because people do things in a way that works for them in their situation.  What research does is build an understanding of the complexity of individual situations, and help designers fit what their products offer seamlessly into people’s contexts and frames of mind.

Based on these, and related, results, we have been rethinking our design specification tools and experimenting with new concepts derived from co-design principles, so that the specs help all developers and enhance their work, rather than acting merely as an external constraint on it.

This is all good.  However, the issue is not restricted to our Ubuntu developers.  We should not forget that, in the wider opensource community, many developers do not have access to the Canonical design team, to any other design team, or to anyone with solid design training.  These are the developers who work in their own free time and produce amazing software.  They have to wing design.  Many wish they could access such skills to help beautify and enhance the user experience of their products.  These contributors deserve our support.

So what?

To us, the solution appears to be ‘design appropriation’.

Our challenge: how can we create design specifications and design-thinking tools that developers can ‘appropriate’, just as mobile phone users started to text because it suited their needs, even though the phones’ original creators did not consider texting to be a very important feature?

How can we design for the unexpected?

Upcoming research this year will be concerned with what developers can teach us about ‘appropriation’ of design.

This represents for us a first step in investigating the potential of ‘appropriation’ across opensource.  Ultimately, appropriation should be possible not only for developers but for all end-users.

Read more
Charline Poirier

In preparation for UDS, we conducted usability testing of Unity with members of the general public.  We are now better informed as to where we should focus further efforts to enhance the experience of users working with Unity.

I summarize my findings below.

Participants Selection

We asked an external recruiting firm to find 15 participants who met the following criteria:
1.  Mix of males and females: roughly equal
2.  Age:

  • 7 people between the ages of 18 and 30
  • 6 people between the ages of 30 and 50
  • 2 people 50 or over

3. Employment: employed full-time or a full-time student
4. Involvement in marketing research: no such involvement in the last 6 months.
5.  Basic technology:  Each participant needed:

  • To have a broadband connection at home
  • To be a daily internet user, staying online for at least 2 hours every day for personal reasons

6.  Online activities:  Participants needed to have done all of the following while online in the last 2 months:

  • Look for information
  • Read comments written by others
  • Read blogs written by others

In addition, they needed to have done at least 4 of the following in the last 2 months:

  • Create or review a social networking profile
  • Post comments or reviews
  • Write a blog and conduct some activities on social networking sites
  • Shop
  • Share photos
  • Play games
  • Download music to a music player

7.  Participants needed to have a strong interest in technology.
8.  Each participant needed to own at least one portable music device, one mobile phone, and one computer/laptop.

Of the 15 participants recruited, 13 were Windows users, 1 was a Mac user, and 1 used both Windows and Mac.  None of the participants was familiar with Ubuntu.

Methodology

During one-on-one sessions, Unity was presented on a netbook.  The goal of the sessions was to get participants to experience most of Unity’s features and functionalities.  I introduced the session with the following instruction:  “Imagine this is your new computer.  What do you want to do with it?”  I had songs and pictures available for them to import.  During the 60-minute session, I asked them to do some of the activities they normally do, which included importing songs and photos, engaging in social networking, and so on.  However, some of the tasks were tailored to fit the specific interests and knowledge of individual participants.  For example, if a participant was a professional musician, I proposed more tasks related to music management; similarly, if a participant was a student, more of the writing and reporting capabilities of Unity were explored.

Findings

What participants liked about Unity

The look and feel:  First impressions of Unity were positive and many comments pertained to its elegance.

One participant said:  “Simple and clear tabs on the side.  I don’t like to be crowded with stuff on the screen.  It looks quite approachable.”

The core concept of Unity:  Many participants intuited the new concept underlying Unity’s look and feel and direction.

One participant said:  “I like the application idea and that it is application-oriented and you can go there and have a quite user-friendly interactivity.  I like the tabs at the top and when I hover it tells me what it is and it is big and clear, easy to see.”

Workspaces:  Many participants were unfamiliar with the concept of workspaces and were intrigued by their overall potential and friendliness.

One participant said:  “What you can do is that you can have different applications in different workspaces.  This is a nice feature.”

Dash:  Participants liked the simplicity of the dash and its look and feel.

Software Centre:  Generally, the software centre was seen as impressive, particularly for the large amount of free software it makes available.

Usability issues

Performance of Unity

Participants were challenged by various functionalities and conceptual design features of Unity.  Notably, one factor stood out as principally responsible for many usability problems: Unity’s slowness and poor responsiveness during testing, which significantly impaired the flow of use and the overall experience.  Testing was done on a mid-range netbook of the kind users were likely to own, so the experience in this regard was similar to what users would have had outside the usability lab.

Multitasking – Multitasking on Unity is disconnected and at times difficult

Task flow is interrupted: While working on a task, participants wanted to have all the documents and websites they were using easily available to them.  For them, the task was the unit of organisation of all resources and tools.  Thus, while working on a task, participants expected that Unity would provide them with a representation or visibility of what was available to them and how to easily access what they needed at any given point.  Unity does not, however, make evident the resources and tools users have at their disposal — whether it be multiple documents, programmes or websites. In a word, while Unity relies on users to keep track of the resources they are currently using, users are habituated to relying on the software to “keep track for them,” by making the resources highly visible, for example, by means of tabs. With Unity, resources are hidden from view.

Overall, participants found navigation between documents cumbersome.  Often, they looked for something, such as a back button or a breadcrumb trail, that would take them through their open applications and documents.  They found neither.  As a consequence, they ended up going through the files and folders section to access their open documents.  Clicking on the OpenOffice icon to access a document that was already open was not immediately discoverable.

One participant noted: “[Navigation] seem[s] awkward.  I’m not getting around as quickly as I should.  The icons are supposed to tell you but I don’t know what they are.  That is a problem.”

Another said: “It seems a bit pedantic to have to go through the application menu to navigate.”

Poor visibility of open applications and windows: Participants experienced difficulties with the following:

Minimising a document: When participants minimised a document, it seemed to disappear; they expected the minimised document to be shown at the bottom of the screen.  In Unity there was no trace of the document on the screen, and participants could not find it again.

“I’m not entirely sure how to get back into my new document [after minimising it].  [It is] not clear that it opened a new page for it.  I don’t know how to get back to it.  I don’t know, it’s weird.  I expect it to be a tab and be able to switch between the two.”

Knowing which documents are currently open:  Participants made it clear that they not only wanted to know whether their documents were open, but also how many documents or windows were open at any given time.  But they did not always see the white triangle that indicates that a programme is running or that documents are open.  Consequently, they were not aware of what was available to them.  In short, the general indicator was not enough for their awareness needs.

Being able to distinguish between similar documents:  In the exposé view, documents are very small and, when two documents are similar, participants could not tell which was the one they wanted without opening each one individually.

Identifying a document:  When participants were in the process of writing a document, they looked for the name of the document they were working on in the top menu bar, but couldn’t find it.  They were also looking for such information in the exposé view.

Being aware of multiple open documents or windows they needed to consult or modify: Almost all participants wanted to organise their work by opening several windows and physically positioning them in a way that maintained an awareness of all of them.  For example, one participant opened Facebook and a text document.  She wanted to move the text document so she could expose the top part of the Facebook window.

“Can you move the window down at all?  I like to have a little bit of space for organisation.  I want to be able to move the window down.  It would be nice to [be able to do that], especially if you want to start working, you want to organise your environment.  You want to feel the sense of relief and organisation before you start working and [the sense] that what will come to you will be manageable.”

Immediate access to documents they are currently working on:  Exposé is not readily discoverable.  The majority of participants could not find the exposé feature.  As it is not a feature they are used to having, they needed a prompt to help them explore it.

Exposé does not allow users to work on documents side-by-side.  Participants expected to use the exposé feature to be able to cut and paste between documents directly without having to bring one document up in full screen and then having to navigate to the next document by means of the workspaces icon.  As it is, exposé is useful for viewing only.  Participants would have liked to have had the option to directly navigate and conduct operations (e.g., cut and paste) between documents.

Documents cannot be resized and placed side-by-side:  No participant could find a way to resize their OpenOffice documents so that the documents could be placed side-by-side while working on both at once.

Overview of what’s going on: Many participants wished they could have a view of what resides in the various parts of their computer, as is provided by Windows’ “My Computer”.

Back button:  Many participants needed to “go back” to a document or an application that was currently open.  But the back button didn’t work coherently and seemed to take participants to arbitrary pages.

“The thing here, backward and forward buttons don’t work.  Back button doesn’t get me back to where I was.  I don’t know where it is taking me, I’m frustrated.  The back button is taking me somewhere [else].”

“If you can’t navigate you feel like a fool and you feel ashamed even if you’re alone in your room.”

“I’m struggling to find short-cuts and a back button.  Right now, to me, it feels very clunky to go from one place to another.”

Document management

Open documents: As already indicated, participants didn’t know how many documents they had open.  The white triangle indicator, which means that the programme is running or that at least one document is open, is too general for users’ needs.

Delete documents:  Participants could not delete existing documents from their files and folders.  They first tried to drag and drop documents in the waste basket; then they right-clicked to see if they had such an option.

Copy and paste:  Copy and paste from one document to another didn’t always work for participants.

“Copy and paste is not going to work.  I’m not impressed, it must work.  I’m one for copying and pasting.”

Lack of feedback and guidance

Unity is often slow, and as a result participants tended to be confused about what was going on. They didn’t know:
1) if the system was slow and would eventually produce the desired effect, or
2) if the system had not registered the participant’s command and as a consequence, they should repeat their action, or even
3) if the system had crashed.

Unity needs to clearly show if it is engaged in a process or not.

“Has it crashed?”

“Let’s make sure I clicked it.  [She opens a document.  She's getting confused navigating between documents and she closes the document without saving it.]  When I close, it should ask me to save it.  This is not good [that it doesn't].  You rely on the computer to save the document or not.”
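The expectation expressed here is the classic save-before-closing prompt.  As a minimal sketch in Python/GTK 3 (PyGObject), and assuming a hypothetical in-memory document model rather than OpenOffice’s real one, the pattern looks roughly like this:

    # Hypothetical close handler: intercept the close request and offer to save.
    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    def on_delete_event(window, event, document):
        if not document["unsaved_changes"]:
            return False  # nothing to save: let the window close normally

        dialog = Gtk.MessageDialog(
            transient_for=window,
            message_type=Gtk.MessageType.QUESTION,
            buttons=Gtk.ButtonsType.NONE,
            text="Save changes to '%s' before closing?" % document["name"],
        )
        dialog.add_buttons(
            "Close without saving", Gtk.ResponseType.NO,
            "Cancel", Gtk.ResponseType.CANCEL,
            "Save", Gtk.ResponseType.YES,
        )
        response = dialog.run()
        dialog.destroy()

        if response == Gtk.ResponseType.YES:
            document["unsaved_changes"] = False  # a real app would write to disk here
            return False  # saved: allow the close
        if response == Gtk.ResponseType.NO:
            return False  # discard changes: allow the close
        return True       # cancel: returning True keeps the window open

    doc = {"name": "Untitled 1", "unsaved_changes": True}
    win = Gtk.Window(title=doc["name"])
    win.connect("delete-event", on_delete_event, doc)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()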

Search

When entering a search term in the search bar, there is no feedback to show that Unity is indeed engaged in a process.  Many participants were confused by this.

“How do you know this is searching?  Is there any sort of symbol to tell you if it is doing something?  I would like to see if there is a symbol that says the computer is working so I am not clicking around.”
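One way to provide that symbol is a spinner that runs for as long as the search is in progress.  Here is a minimal sketch in Python/GTK 3 (PyGObject); the two-second timeout stands in for a real asynchronous search and is purely hypothetical, as is the rest of the wiring.

    # Hypothetical search box: a spinner shows that the system is working.
    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk, GLib

    win = Gtk.Window(title="Search feedback")
    box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=6)
    win.add(box)

    entry = Gtk.SearchEntry()
    spinner = Gtk.Spinner()
    results = Gtk.Label(label="")
    for widget in (entry, spinner, results):
        box.pack_start(widget, False, False, 6)

    def on_search(entry):
        spinner.start()                  # visible "the system is working" cue
        results.set_text("Searching...")

        def deliver_results():
            spinner.stop()               # clear the cue once results arrive
            results.set_text("Results for '%s'" % entry.get_text())
            return False                 # one-shot timeout

        GLib.timeout_add_seconds(2, deliver_results)  # simulated async search

    entry.connect("activate", on_search)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()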

Other issues:  Interactions with the launcher, finding the dash, and the lack of reference points for understanding Ubuntu’s unique features

Interactions with the launcher

Adding an icon to the launcher:  Many participants were not able to add an application short-cut to the launcher.  They tried various strategies, but failed:

  1. They tried to drag and drop icons directly from the applications section to the launcher.
  2. They right-clicked, expecting an option to copy.
  3. A few participants noticed that when an application is open, it appears in the launcher, and when they close it, it disappears.  They were confused by this feature, but at the same time perceived it as a clue.  So they tried to drag and drop the icon from the applications section onto the similar icon visible in the launcher.

Changing the order of icons in the launcher:  Most participants failed to reorganise the order of icons in the launcher.  They selected an icon and tried to drag it upwards within the launcher; they did not realise that they needed to drag the icon out of the launcher before they could relocate it.

Deleting an icon from the launcher:  Participants did not succeed in deleting an icon from the launcher.  Most selected the icon and then dragged it into the waste basket at the bottom of the launcher.  They did not see that they could remove an icon by dragging it horizontally away from the launcher.

Finding the dash:  The majority of participants who found the dash found it by accident.  They were not sure what it was, and didn’t know how they had got there, so they were not able to find it again.  The problem here was that the logo was not recognisable to them.  Of course, they were not familiar with Ubuntu; but, equally important, the logo is less visible in size and colour than the icons in the launcher.  It took participants a while to see it at all.

“The logo should stand out more.  It would be nice if the icon itself [was] coming at you, more of a three-dimensional thing.”

Lack of reference points for understanding Ubuntu’s unique features

Software centre: Almost all participants were surprised when I suggested that they install a game that was not currently installed on their computer.  All of them immediately went to the internet.  When I told them that such games could be found on Ubuntu, they were confused, and many didn’t know where to find them.  They went to the applications section, where they remembered a section called “installed” software.  They expected either a section below this one with a list of uninstalled software, or (at the very least) a link to the software centre.

Moreover, most participants expected to be able to click on an icon in the dash, for example ‘music’, and directly access their albums and songs.  They were a bit surprised to be taken to music applications instead, especially since the other icons in the launcher behave quite differently.

“I expect to see music files, names of songs or albums, anything you’ve downloaded.  Titles.”

Familiar short-cuts: Participants who normally used short-cuts to navigate didn’t find the familiar short-cuts and were frustrated.

“There are things that are nice to keep from one OS to another, like right click and short-cuts.  It’s nice to have two possible solutions.”

Changing the wallpaper

Many participants did not succeed in changing their wallpaper because the appearance screen opened in full screen by default.  Because of this, the immediate wallpaper change that happens as a user selects a wallpaper was not visible to them.  After participants had selected a wallpaper, they expected a step to confirm that they wanted their wallpaper to be changed.  Instead they were shown other options that were not relevant to what they were trying to do, and at which they had already (unbeknownst to them) succeeded.

Search

The search functionality felt limited to participants.  When searching, they didn’t know what field and scope the search engine covered.

Grey icons in launcher

Participants thought that the grey icons in the launcher were inactive, especially when they clicked on them and the system was very slow to respond.

Read more
imlad

I mentioned this in a previous post, but let me call it out explicitly.  The “it” from the previous sentence is a piece of market research conducted by Accenture.  It looks at where Linux adoption in the enterprise (UK and US) stands today.

Some highlights from the 300 businesses that were interviewed for the research:

  • 50% are fully committed to open source in their business
  • 28% say they are experimenting with open source and keeping an open mind to using it
  • 65% have a fully documented strategic approach for using open source in their business
  • 32% are developing a strategic plan for OSS adoption
  • Of the organizations using open source, 88% will increase their investment in the software in 2010 compared to 2009
  • An increase in demand for open source based on quality, reliability and speed, not just cost savings

There is much more in the article, so I strongly recommend clicking through.

Read more
imlad

The good part of LinuxCon was the sessions I attended and the people I spoke with.  The bad part was the sessions I did not manage to attend.

Distance makes the heart grow fonder, as they say, so at this juncture of the narrative I actually have some fond memories of my sojourn at Parris Island, South Carolina, where I learned various life-affirming skills as a US Marine recruit.  Dark humor aside, I remember being instructed by one of the Drill Instructors about what happens when a Drill Instructor speaks.  If you are curious, the answer is “The world stops” (some epithets were added to that, but I will leave their specific nature to your imagination ;-) ).

Not having become a Drill Instructor, the world around me keeps churning.  Business travel is a bit of a mixed bag – being away from the office allows you to focus on the purpose of the travel (though between my cell phone, WiFi access and IRC, one could easily fail to notice the difference between being in the office and being on the road).  Going to a local event (LinuxCon taking place in Boston, and me taking place in Boston) does not get me far enough from the day-to-day to escape it.  So, I got into LinuxCon Tuesday afternoon, was there on Wednesday (with some breaks to do dry runs of a webinar on Ubuntu Enterprise Cloud that we ran today), and today I was back in the office, running the morning and afternoon sessions of the webinar.

So I certainly feel I did not manage to experience the full scope of the event.  One of the themes I did “get” was Linux in the Enterprise – both by way of proliferation and in terms of what needs to happen for Linux and Open Source Software adoption to grow.  An interesting piece of research was published on August 8th by Accenture, showing some notable numbers around OSS adoption, as well as what are perceived to be its benefits (reliability, stability, speed of bug resolution and cost savings, though cost savings are not the number one reason) and its challenges (a perceived lack of developers with OSS skills, and top-management buy-in).

I felt that the keynote address given by Jeffrey Hammond from Forrester Research captured what is going on with Linux and OSS adoption in the enterprise, and what the Linux world (distros, ISVs, developers – all who care about the topic) needs to do to help adoption along.

Bill McQuaide from Black Duck Software gave a talk on distributed, multi-source development with OSS.  Bill showed how, practically speaking, a large organization could manage its software development and maintenance efforts in an environment where OSS is a component of the final product.  I think that this particular vision could go a long way towards putting executives’ minds at ease about OSS – showing not only that its introduction into the enterprise will not cause harm, but how it can actually make an organization’s code better.

Read more