Canonical Voices

Posts tagged with 'testing'

Ara Pulido

I have been asked to write a chapter for a book about the experiences of people involved in Open Source, with the idea of “if I knew then what I know today”. I asked if I could reprint my contribution here. I hope it is interesting for people who care about Open Source testing.

Dogfooding Is Not Enough

I have been involved with Open Source since my early days at university in Granada. There, with some friends, we founded the local Linux User Group and organized several activities to promote Free Software. But from the time I left university until I started working at Canonical, my professional career was in the proprietary software industry, first as a developer and later as a tester.

In a proprietary software project, testing resources are very limited. A small testing team continues the work that developers started with unit testing, using their expertise to find as many bugs as possible so the product can be released in good shape for end users. In the free software world, however, everything changes.

When I was hired at Canonical, apart from fulfilling the dream of having a paid job in a free software project, I was amazed by the possibilities that testing a free software project brought. The development of the product happens in the open, and users can access the software in its early stages, test it and file bugs as they encounter them. For a person passionate about testing, this is a new world with lots of new possibilities. I wanted to make the most of it.

As many people do, I thought that dogfooding, or using the software that you are aiming to release, was the most important testing activity that we could do in open source. But if “given enough eyeballs, all bugs are shallow” (one of the key lessons of Raymond’s “The Cathedral & The Bazaar”), and Ubuntu had millions of users, why were very important bugs still slipping into the release?

The first thing I found when I started working at Canonical was that organized testing activities were few or nonexistent. The only somewhat organized testing activities took the form of emails sent to a particular mailing list calling for testing of a package in the development version of Ubuntu. I don’t believe this can be considered a proper testing activity, but just another form of dogfooding. This kind of testing generates a lot of duplicated bugs, as a really easy-to-spot bug will be filed by hundreds of people. Unfortunately, the really hard-to-spot but potentially critical bug, if someone files it at all, is likely to remain unnoticed among the noise created by the other hundreds of bugs.

Looking better

Is this situation improving? Are we getting better at testing in FLOSS projects? Yes, I really believe so.

During the latest Ubuntu development cycles we have started several organized testing activities. The range of topics is wide, including areas like new desktop features, regression testing, X.org driver testing and laptop hardware testing. The results of these activities are always tracked, and they have proved really useful for developers, who can now know whether the new features are working correctly, instead of assuming that they work because no bugs have been reported.

Regarding tools that help testing, many improvements have been made:

  • Apport has helped increase the level of detail of the bugs reported against Ubuntu: crash reports include all the debugging information, their duplicates are found and marked as such, and people can report bugs based on symptoms.
  • Launchpad, with its upstream connections, gives a full view of each bug: bugs happening in Ubuntu are usually bugs in the upstream projects, and developers can see whether they are being solved there.
  • Firefox, with its Test Pilot extension and program, drives testing without the tester having to leave the browser. This is, I believe, a much better way to reach testers than a mailing list or an IRC channel.
  • The Ubuntu QA team tests the desktop in an automated fashion and reports results every week, giving developers a very quick way to check that there have not been any major regressions during development.

Although testing FLOSS projects is getting better, there is still a lot to be done.

Looking ahead

Testing is a skilled activity that requires lots of expertise, but in the FLOSS community it is still seen as an activity that doesn’t require much effort. One of the reasons could be that the way we do testing is still very old-fashioned and does not reflect the increase in complexity of the free software world over the last decade. How is it possible that, with the amount of innovation we are generating in open source communities, testing is still done the way it was in the 80s? Let’s face it: fixed testcases are boring and easily get outdated. How are we going to grow a testing community that is supposed to find meaningful bugs if its main required activity is updating testcases?

But how do we improve testing? Of course, we cannot completely get rid of testcases, but we need to change the way we create and maintain them. Our testers and users are intelligent, so why create step-by-step scripts? Those could easily be replaced by an automated testing tool. Instead, let’s just have a list of activities you perform with the application and some properties it should have, for example, “Shortcuts in the launcher can be rearranged” or “Starting up LibreOffice is fast”. Testers will figure out how to do it, and will create their testcases as they test. A sketch of what such an activity-based testcase could look like follows.
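As a rough illustration, and nothing more than that (the format, the field names and the report() helper below are invented, not an existing Ubuntu tool), an activity-based testcase could be little more than a list of properties, with the tester recording how they exercised each one:

    # Hypothetical activity-based testcase: no step-by-step script,
    # just the activities to exercise and the properties to check.
    # The format and the report() helper are invented for illustration.
    testcase = {
        'application': 'unity',
        'activities': [
            'Shortcuts in the launcher can be rearranged',
            'Starting up LibreOffice is fast',
        ],
    }

    def report(activity, passed, notes):
        """Record the outcome together with what the tester actually did."""
        print('[%s] %s -- %s'
              % ('PASS' if passed else 'FAIL', activity, notes))

    # The tester decides *how* to exercise each activity; their notes
    # become the living testcase, written as they test.
    report(testcase['activities'][0], True,
           'dragged three icons around; the new order survived a reboot')
    report(testcase['activities'][1], False,
           'cold start took 14 seconds on this netbook')

The point is that the “how” lives with the tester, not with the script.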

But this is not enough; we need better tools to help testers know what to test, when and how. What about having an API to allow developers to send messages to testers about updates or new features that need testing? What about an application that tells us what part of our system needs testing, based on testing activity? In the case of Ubuntu we have the data in Launchpad (we would need testing data as well, but at least we have bug data). If I want to start a testing session against a particular component, I would love to see the areas that haven’t been tested yet and a list of the five bugs with the most duplicates for that particular version, so I avoid filing those again (a sketch of such a query follows below). And I would love to have all this information without leaving the desktop that I am testing. This is something that Firefox has started with Test Pilot, although they are currently mainly gathering browser activity. Google is also doing some research in this area.
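For the “most duplicated bugs” part, something close to this is already possible with launchpadlib, the Python bindings for the Launchpad API. This is only a sketch; the package chosen and attributes such as number_of_duplicates should be double-checked against the API documentation:

    # Sketch: list the five open bugs with the most duplicates for a
    # source package, so a tester can avoid filing them again.
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('testing-dashboard', 'production')
    ubuntu = lp.distributions['ubuntu']
    firefox = ubuntu.getSourcePackage(name='firefox')

    # Open bug tasks for the package; each web service access is lazy,
    # so this is slow for big packages -- fine for a sketch.
    tasks = firefox.searchTasks(status=['New', 'Confirmed', 'Triaged'])
    bugs = sorted((task.bug for task in tasks),
                  key=lambda bug: bug.number_of_duplicates,
                  reverse=True)

    for bug in bugs[:5]:
        print('LP #%d (%d duplicates): %s'
              % (bug.id, bug.number_of_duplicates, bug.title))

Wrapping a query like this in a small desktop tool would give testers the “don’t file these again” list without leaving the machine they are testing.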

Communication between downstream and upstream, in both directions, also needs to get better. During the development of a distribution, many of the upstream versions are also under development, and they already have a list of known bugs. If I am a tester of Firefox through Ubuntu, I would love to have a list of known bugs as soon as the new package gets to the archive. This could be done by having an agreed syntax for release notes that could then be easily parsed, with bugs automatically filed and linked to the upstream bugs. Again, all of this information should be easily available to the tester, without leaving the desktop. A toy sketch of such a syntax follows.
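For example, release notes could carry machine-readable lines that the distribution parses when the package lands in the archive. The “Known-Bug:” syntax, the bug numbers and the parser below are all invented for illustration; nothing like this exists today:

    import re

    # Invented machine-readable syntax for known bugs in release notes:
    #   Known-Bug: #<upstream-bug-id> - <summary>
    KNOWN_BUG = re.compile(r'^Known-Bug:\s*#(\d+)\s*-\s*(.+)$')

    def parse_known_bugs(release_notes):
        """Return (upstream_bug_id, summary) pairs found in the notes."""
        bugs = []
        for line in release_notes.splitlines():
            match = KNOWN_BUG.match(line.strip())
            if match:
                bugs.append((int(match.group(1)), match.group(2)))
        return bugs

    notes = """Firefox release notes (made-up example)
    Known-Bug: #111111 - crash when opening the bookmarks sidebar
    Known-Bug: #222222 - flash plugin hangs on 64-bit systems
    """

    for bug_id, summary in parse_known_bugs(notes):
        # A real tool would file these downstream and link them upstream.
        print('upstream bug #%d: %s' % (bug_id, summary))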

Testing, done this way, would let the tester concentrate on the things that really matter and that make testing a skilled activity: the hidden bugs that haven’t been found yet, the special configurations and environments, new ways to break the software. And on having fun while testing.

Wrapping Up

From what I have seen in the last three years, testing has improved a lot in Ubuntu and the rest of the FLOSS projects that I am somehow involved with, but this is not enough. If we really want to increase the quality of open source software, we need to start investing in testing and innovating in the ways we do it, the same way we invest in development. We cannot test 21st century software with 20th century testing techniques. We need to react. “Open Source is good because it is open source” is not enough anymore. Open Source will be good because it is open source and has the best quality that we can offer.

 

Ara Pulido

If you have ever participated in Ubuntu ISO testing you may know what this title is about. To coordinate testing and to avoid duplicating efforts, every time one of us starts a new testcase, we enter a line like the one in the title in the #ubuntu-testing Freenode IRC channel.

In this example it means that I have started the Full Disk testcase for the Ubuntu Live CD i386 image. Others willing to help will know that I am already working on that one and will be able to concentrate their efforts on other testcases.

This system is far from perfect, as not everybody is on IRC and, even if you are, you miss the messages sent before you logged in.

To improve the system I have added a new “Started” status to the test reports. Now, when you start a testcase, instead of having to announce it in the IRC channel, add a “Started” result to that testcase and others will know that you are working on it (it will show up in the list of results with a clock icon).

Testcase started

Hopefully this will improve the coordination of the ISO testing activities.


Ara Pulido

Mago introduction in GNOME Journal

This is old news, but I have been busy lately and I just haven’t had the time for a quick post about it.

I wrote an introductory article about Mago for the GNOME Journal and it was published in its November issue.

If you want to learn about Mago and how it can help you test your desktop application, the article is a good starting point.

GNOME Journal: GNOME desktop testing automation and how to use Mago


Ara Pulido

As you may already know, the next Ubuntu release, Lucid Lynx (10.04), is an LTS release.

For testers this means one important thing: upgrades should be smooth from either Ubuntu 9.10 (Karmic Koala) or the latest Ubuntu LTS release (8.04, Hardy Heron).

As we all know, computer storage is very cheap nowadays, but bandwidth is not. Later in the cycle we are going to need to test as many upgrades from Hardy and Karmic as possible. So why not plan ahead and start downloading Hardy and Karmic images today? The unstoppable Shane Fagan has started doing so already! You rock!

Later in the cycle you will be able to easily install Hardy or Karmic on a spare machine or in a virtual machine and upgrade from there. Part of the work will already be done, and you can start contributing to your beloved distribution right now :-)

Other releases from Ubuntu derivatives can be found at:


Ara Pulido


This Thursday Karmic reaches the last milestone before the final release. As for every milestone, we need to test all the ISO images we produce, covering every possible installation.

All of these test cases will appear, with instructions to follow, in the ISO tracker. If you don’t know how to use the tracker, this blog post will serve as a starting guide.

One of the complaints from newcomers is that they don’t know which test cases need testing. Coordination is done in #ubuntu-testing on Freenode, and not everybody can access IRC. This time, Dave Morley and I will try something new: as the RC images start appearing and testing begins, we are going to post updates on Twitter, using #ubuntutesting as the tag.

If you want to help us test the RC images, please follow us on Twitter and make sure to search for #ubuntutesting for updates. And if you’re helping with testing, please tweet about it!

Of course, this is just an extra way to stay informed. Coordination will happen, as usual, in the #ubuntu-testing IRC channel.

Ara Pulido


Ubuntu 9.10 (Karmic Koala) Beta was released last Thursday. I am glad to announce that we reached 98.9% coverage of the test cases in the ISO tracker. I would like to thank the community members who helped testing the ISOs, especially those who joined recently. Thanks! I am discussing with the Community team the possibility of recognizing this participation in the Ubuntu Hall of Fame, just as bug triagers and sponsors are recognized.

I will blog about Release Candidate ISO testing as we approach the milestone week ;-)

Also, because we are getting new contributions, I would like to comment on some of the reports we got, so we can improve with every milestone.

Not really a failure

We got this comment in a test case marked as a failure:

I have a tablet fujitsu p1630 and the stylus works in the life cd! great, congratulations!
(missing is the calibration tool which should be loaded. The stylus is not properly calibrated and cannot reach the top line (where the application menus sit!).[...]

In the ISO test tracker we mark as failures those experiences that prevented us from doing what we wanted to achieve in that test case. That is, if we want to install and the partition manager fails, that’s a failure. If we do install (or can access the Live environment, as in this case), the test didn’t fail as such. We would mark it as a success, but link the non-critical bugs that we found.

Usability bugs are bugs

The lack of colour in the default options during installation could cause problems for new users.
The default setting of Mute, for sound could cause problems for new users.

These are great examples of usability bugs. Thanks for noticing them! Usability bugs are bugs, so don’t just leave them as comments in your report; go and file bugs for them in Launchpad as well. They will help new users a lot in understanding how Ubuntu works.

Ara Pulido

Mago is in Karmic!!


Yes, if you have updated your Karmic repositories lately, you can install Mago just by typing “sudo apt-get install mago”. What does that mean? Not much, for the moment.

We have packaged the library and the harness, but not the tests. Once you have installed Mago on your machine, you can start writing tests using the library, run them with the Mago harness, and get nice reports in XML and HTML. You won’t get the already-written tests, though. Tests change a lot, so it does not make sense to keep them in the repositories, which are quite static. A PPA, which I am planning to maintain, is a much better place for tests.

I still have a lot of work to do here: I have to set up the PPA for the tests and update the documentation at the Mago site, for a start. Why did I push Mago into Karmic, then? There are two main reasons: Feature Freeze and holidays. The Ubuntu development schedule has an important date called Feature Freeze, after which no new packages are generally accepted and only bug fixes are supposed to get uploaded. Karmic FF happens August 27th, but I will be on holiday from August 14th, so it was now or never (or Karmic+1).

The main advantage of having Mago in the repositories is that, if a project wants to use it as part of the testing of their daily builds, they can add Mago to the project’s Build-Depends and forget about whether Mago is installed on the build machine. And don’t worry, I will be updating the documentation and setting up the tests PPA after my holidays; in the meantime, Mago is there, ready before Feature Freeze. Happy testing!
