Canonical Voices

Nicholas Skaggs

As promised, here is your reminder that we are indeed fast approaching the final image for trusty. It's release week, which means it's time to put your energy and focus into finding and getting the remaining bugs documented or fixed in time for the release.

We need you!
The images are a culmination of effort from everyone. I know many have already tested and installed trusty and reported any issues encountered. Thank you! If you haven't yet tested, we need to hear from you!

How to help
The final milestone and images are ready; click here to have a look.

Execute the testcases for ubuntu and your favorite flavor images. Install or upgrade your machine and keep on the lookout for any issues you might find, however small.

I need a guide!
Sound scary? It's simpler than you might think. Check out the guide and other links at the top of the tracker for help.

I got stuck!
Help is a simple email away, or for real-time help try #ubuntu-quality on freenode. Here are all the ways of getting in touch with the quality team, who would love to help you.

Community
Plan to help test and verify the images for trusty and take part in making ubuntu! You'll join a community of people who do their best every day to ensure ubuntu is an amazing experience. Here's saying thanks, from me and everyone else in the community, for your efforts. Happy testing!

Read more
Nicholas Skaggs

Time to test trusty!

Say that three times fast. Time to test trusty,
time to test trusty, time to test trusty!
Ahh, it's my favorite time of the cycle. This is the part where we all get serious, go a little bit crazy, and end up super excited to release a new version of ubuntu into the world. This time it's even more special, as the new version is a brand new LTS, which we look forward to supporting for the next 5 years.


The developers and early adopters have been working hard all cycle to put forth the best version of ubuntu to date. For you! For all of us! It's time to fix bugs, do some last-minute polish, and prepare for the release candidate, which will arrive around April 11th.

We need you!
This is where you, dear reader, come in. You see, despite their good looks, wonderful sense of humor, and charm, the release team doesn't like to release final images of ubuntu that haven't been thoroughly tested.

The release team is ready to pounce on untested images
We need testing, and further, we need the results of that testing! We need to hear from you. Passing test results matter just as much as failures. The way to record these results is via the isotracker; sadly, we can't read your mind!

How to help
Mark your calendars now for April 11th - April 16th. Pick a good date for you and plan to download and test the release candidate image. You'll see a new milestone on the tracker, and an announcement here as well when the image is ready. I won't let you forget, promise!

Execute the testcases for ubuntu and your favorite flavor images. Install or upgrade your machine and keep on the lookout for any issues you might find, however small.

I need a guide!
Sound scary? It's simpler than you might think. Check out the guide and other links at the top of the tracker for help.

I got stuck!
Help is a simple email away, or for real-time help try #ubuntu-quality on freenode. Here are all the ways of getting in touch with the quality team, who would love to help.

Community
Plan to help test and verify the images for trusty and take part in making ubuntu! You'll join a community of people who do their best every day to ensure ubuntu is an amazing experience. Here's saying thanks, from me and everyone else in the community, for your efforts. Happy testing!

Read more
Nicholas Skaggs

It may be hard to believe, but the next, and dare I say best, LTS for ubuntu is releasing very soon. We need your help ironing out critical bugs and issues!

How can I help? 
To help test, visit the iso tracker milestone page for 'Trusty Beta 2'. The goal is to verify the images in preparation for the release. Find those bugs! The information at the top of the page will guide you if you need help reporting a bug or understanding how to test.

So what's new? 
Besides the usual slew of updates to the applications, stack and kernel, unity has new goodies like minimize on click, menus in the toolbar, a new lockscreen, and borderless windows!

What if I'm late?
The testing runs through this Thursday, March 27th, when the images for beta 2 will be released. If you miss the deadline, we still love getting results! Test against the daily image milestone instead.

Thanks and happy testing everyone!

Read more
Nicholas Skaggs

Keeping ubuntu healthy: Core Apps

Continuing our discussion of testing within ubuntu, today's post will talk about how you can help the community core apps stay healthy.

As you recall, the core apps go through a series of QA checks before being released to the store. However, bugs in the application, or in the platform itself, can still be exposed. The end result is that the dashboard contains test failures for that application. To release a new stable image, we need a green dashboard, and more importantly, we need to make sure the applications work properly.

Getting plugged in
So to help out, it's important to first plug into the communication stream. After all, we're building these applications and images as a community! First, join the ubuntu phone group on launchpad and sign up for the phone mailing list. The list is active and discussing all issues pertaining to the ubuntu phone. Most importantly, you will see landing team emails that summarize and coordinate issues with the phone images.

From there you can choose a community core app to help improve from a quality perspective. These applications all have development teams and it's helpful to stay in contact with them. Your merge proposal can serve as an introduction!

Finding something to work on
So what needs fixing? A landing team email might point out a failing test. You might notice a test failure on the dashboard yourself. In addition, each application keeps a list of bugs reported against it, including bugs that point out failing tests or testing needs. For example, here's the list of all new autopilot tests that need to be written for all of the core apps. Pick an app, browse the buglist, assign a bug to yourself, and fix it.

For example, here's the list of bugs for the music app. As of this writing you can see several tests that need to be written, as well as a bug for a test improvement.

You can also simply enhance the app's existing testsuite by fixing a flaky test, or improving the test to use best practices, etc. As a bonus for those reading this near its original publication date, we just had a session at vUDS covering the core apps and the testing needs we have. Watch the session / browse the pad and pick something to work on.

Fixing things
Investigate any failures you find and have a look at the tests. Often the tests can use a little improvement (or maybe an additional test), and you can help out here! Sometimes failures won't happen every run -- this is the sign of a weird bug, or more likely a flaky test. Fix the test(s), improve them, or add to them. Then commit your work and submit a merge proposal. Follow the guide on the wiki if you need help with doing this.
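
For those new to the workflow, the hand-off at the end might look roughly like this (a sketch only; the branch name and commit message below are placeholders, not from the post):

```shell
# Hypothetical sketch of submitting a test fix (names are placeholders):
bzr commit -m "Stabilize the flaky alarm test"
bzr push lp:~your-username/ubuntu-clock-app/fix-flaky-alarm-test
# ...then open the branch page on Launchpad and choose 'Propose for merging'.
```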

Remember, you can iteratively run the tests on your device as you work. Read my post on click-buddy for help with this. If you are lacking a device, run the tests on your desktop instead and a reviewer can test against a real device before merging.
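
A minimal desktop loop might look like this (a sketch; it assumes autopilot and the app's dependencies are installed on your pc, and uses the clock app's suite name as an example):

```shell
# Hypothetical sketch: iterate on an app's tests from the desktop.
bzr branch lp:ubuntu-clock-app
cd ubuntu-clock-app
autopilot list ubuntu_clock_app    # confirm the suite is discoverable
autopilot run -v ubuntu_clock_app  # run it with verbose output
```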

Getting Help
For realtime help, check out #ubuntu-quality and #ubuntu-autopilot on freenode. You'll find a group of folks like yourself working on tests, hacking on autopilot and sharing advice. If IRC isn't your thing, feel free to contact us through another method instead. Happy hacking!

Read more
Nicholas Skaggs

Continuing our discussion of testing within ubuntu, today's post will talk about how you can help ubuntu stay healthy by manually testing the images produced. No amount of robots or automated testing in the world can replace you (well, at least not yet, heh), and more specifically your workflow and usage patterns.

As discussed, every day new images are produced for ubuntu for all supported architecture types. I would encourage you to follow along and watch the progression of the OS through these images and your testing. Every data point matters, and testing on a regular basis is helpful. So how to get started?

Settle in with a nice cup of tea while testing!

The Desktop
For the desktop images everything you need is on the image tracker. There is a wonderful video and text tutorial for helping you get started. You report your results on the tracker itself in a simple web form, so you'll need a launchpad account if you don't have one.

The secondary way to help keep the desktop images in good shape is to install and run the development version of ubuntu on your machine. Each day you can check for updates and update your machine to stay in sync. Use your pc as you normally would, and when you find a bug, report it! Bugs found before the release are much easier to fix than after release.
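
The daily check for updates is just the normal package refresh; on a development install it boils down to something like:

```shell
# Keep a development install in sync (run daily):
sudo apt-get update         # refresh the package lists
sudo apt-get dist-upgrade   # pull in the day's new packages
```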

Phablet
Now for the phablet images, you will need a device capable of running the image. Check out the list. Grab your device and go through the installation process as described on the wiki. Make sure to select the '-proposed' channel when you install, so that you receive the latest in-progress images as they are built. From there you can update every day. Use the device, and when you find a bug, report it! Here's a wiki page to help guide your testing and help you understand how and where to report bugs.
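
As a rough illustration (the tooling and channel names changed over time, so treat this as a sketch and follow the wiki for the current procedure), flashing a nexus device onto the proposed channel looked something like:

```shell
# Illustrative only; see the wiki for the current steps.
# With the device connected over USB and developer mode enabled:
phablet-flash ubuntu-system --channel devel-proposed
# Later images then arrive on the device as OTA updates.
```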

Don't forget there's a whole team of people within ubuntu dedicated to testing just like you. And they would love to have you join them!

Read more
Nicholas Skaggs

Since just before the last LTS, quality has been a buzzword within the ubuntu community. We've come a long way since precise, and I wanted to provide some help and perspective on what ubuntu's process for quality looks like this cycle. In simple terms. Or as reddit would say, "explain like I'm 5".

I'll try and define terms as we go. First let me define CI, which is perhaps the buzzword of this cycle, lest I lose all of you! CI stands for continuous integration, and it means we are testing ubuntu. All the time. Non-stop. Every change we make, we test. The goal behind this idea is to find and fix problems before, well, they become problems on your device!

CI Dashboard
The CI dashboard then is a way to visually see the results of this testing. It acts as a representation of the health of ubuntu as a distribution. At least once a day runs are executed, apps and images are tested and benchmarked, and the results are populated on ci.ubuntu.com. This is perhaps the most visible part of the CI (our continuous testing efforts) that is happening within ubuntu. But let's step back a minute and look at how the overall CI process works within ubuntu.

CI Process
App developers hack on a bit of code, fixing bugs or adding new features to the codebase. Once the code is ready, a merge proposal[1] is created by the developer and feedback is sought. If the code passes the peer review and the application's tests, it will then become part of the image after a journey through the CI train.


For the community core apps, the code is merged after peer review, and then undergoes a similar journey to the store where it will become part of the image as well. Provided of course it meets further review criteria by myself and Alan (we'll just call him the gatekeeper).
Though menacing, Alan assures me he doesn't bite

Lest we forget, upstream[2] uploads[3] are done as well. We can hope some form of testing was done on them before we received them. Nevertheless, tests are run on these as well, and if they pass successfully, the new packages will enter the archive[4] and become part of the image.

Generating Images
Now it's time to generate some images. For the desktop, a snapshot of what's in the ubuntu archive is taken each day, built, and then subjected to a series of installation tests. If the tests pass, it is released for general testing, called image (or ISO) testing. An image is tested and declared stable as part of a milestone (or testing event), and can become the next version of ubuntu!
Adopted images are healthy images!

On the ubuntu phone side of things, all the new uploads are gathered and evaluated for risk. If something is a large change, it might be prudent to not land it with other large changes so we can tell what broke should the image not work properly. Once everything is ready, a new image is created and is released for testing.  The OTA updates (over-the-air; system updates) on your ubuntu phone come from this process!

How can you help?
Still with me I hope? As you can see, there are many things happening each day in regards to quality, and lots of places where you can create a positive change for the health of the distro! In my next few posts, I'll cover each of the places you can plug in to help ubuntu be healthy every day!

1. A merge proposal is a means of changing an application's code via peer review.
2. By upstream, I mean the communities and people who make things we use inside of ubuntu, but are not directly a part of it. Something like the browser (firefox) and kernel are good examples.
3. This can happen via a general sync at the beginning of the cycle from debian. This sync copies part of the debian archive into the ubuntu archive, which in effect causes applications to be updated. Applications are also updated whenever a core ubuntu developer or a MOTU uploads a new version to the archive. 
4. In case you are wondering, "the archive" is the big repository where all of your updates and new applications come from!

Read more
Nicholas Skaggs

Recently the ubuntu core app developers and myself have been on an adventure towards adopting cmake for all the core applications. While some of the applications are pure qml, it's still been useful to embark on adopting a single build system for all of the projects. Now that (most of) the pain of transitioning is gone, I'm going to talk about one of the useful features of setting up cmake for your project: click-buddy!

Click-buddy is an evolving tool that helps you build and deploy click packages to your phablet device. In addition, it has the ability to set up the device to run your autopilot test suite. So, rather than writing anything further, let's cover an example. You will need phablet-tools installed for this to work. I'm going to branch the clock app, build the click package, install it on my device, and finally run the tests.

bzr branch lp:ubuntu-clock-app
cd ubuntu-clock-app
click-buddy --dir . --provision
phablet-test-run -v ubuntu_clock_app

Click-buddy is also gaining the ability to build your project, even if it involves a plugin and you are interested in building for your phablet device (armhf). Once landed, you will be able to run something like this for non-qml applications.

sudo click chroot -a armhf create
click-buddy --arch armhf --provision

This will set up a chroot automagically for you and compile and build your application. Give it a try!

Note, as of this writing, emulator support for the ubuntu-ui-toolkit emulator is not yet built in. If your tests fail with a module import error, run this line from your connected pc. It will copy over the ubuntu-ui-toolkit emulator (provided you have it installed on your pc :-) ), and your tests should then run properly.

adb push /usr/lib/python2.7/dist-packages/ubuntuuitoolkit /home/phablet/autopilot/ubuntuuitoolkit

Read more
Nicholas Skaggs

Trusty Cycle Updates

First things first, we have our burndown up for the trusty cycle! If your name appears on the list next to a work item, congratulations! Let's work through the items and make it happen ;-) If your name doesn't appear, don't worry! Feel free to help complete the tasks anyway, or jump in and get started on something you love. The mailing list is always open and we love to hear new ideas. See a blueprint that looks interesting and want to help? Get in touch!

What's happening now?
Now, for an update as to what is going on in quality. This week starts Alpha 1 for the flavors that are participating. In addition, the first work items are starting to see completion.

Ali has been working on getting the wiki pages and workflow ready for our community calls for testing. You can check out the page on the wiki here, and schedule testing events. The page explains the basic workflow -- I'd encourage you to schedule events for your favorite packages and let others know about the event on the mailing list. Be sure to report the results once you've completed your testing.

Alberto has been working on getting the papercuts project running for the trusty cycle and ensuring the buglist is maintained and accessible.

Dan has been working on getting our automated testing in place and we're meeting with the CI team this week to get this implemented as part of the release process. Excellent work Dan!

Dan has also been working on the autopilot gtk emulator, which seeks to do the same thing as we've done with the autopilot ubuntu sdk emulator. Both of these frameworks help make writing tests easier.

Speaking of tests, Carla and others have been busy adding and fixing tests to make sure our dashboard stays green. And look at it! We're over 99% passing tests for the ubuntu touch platform. I want to personally thank everyone who's helped write and maintain these tests. Kudos to you all!

On the desktop side, Dan is helping new folks land tests, and the ubuntu desktop default apps continue to run and provide feedback on the desktop.

We also saw several new members join the ubuntu community via ubuntu quality recently. Congrats to Jean-Baptiste, Dan Chapman and Daniel Kessel (my apologies if I missed you!). 

Finally we have Thibaut and other folks looking at refactoring and improving ubuntu startup disk creator. Checkout the full blueprint for more information.

So, what's next?
Well, for most of us, it's the holiday season. Looking beyond the holidays, there are several big items on my list to work on in January:

  1. Continue to keep the dashboard green all the time
  2. Do wiki, mailing list, launchpad integration with bugsquad
  3. Add wiki documentation and videos for exploratory testing
What are you doing?
Well, you, dear reader, I trust are getting involved in some of the work items and other activities. It starts with installing the development version of ubuntu and then diving in from there. Perhaps you want to write some tests, triage or fix some bugs, or schedule a testing event. If you are interested in making a contribution to ubuntu, we can always use good testers, test writers, developers and triagers!

If you are celebrating holidays this time of year, enjoy your time!  Thanks everyone and Happy Testing!

Read more
Nicholas Skaggs


Since I last posted on the topic of our community's future, we've helped push out a new release of ubuntu. Saucy and fresh, the little salamander is now out in the wild. With the release out, it's time to move forward with these changes.

(c) Carla Sella
The beginning of a new cycle is always a time to breathe for us in quality, and these past few days of relative calm have indeed been a welcome respite from the craziness of testing leading up to a release.

We can rebuild it. We have the technology. Better than it was before. Better, stronger, faster.

So, let's talk about some of the changes coming to quality for this new cycle.

Defined Roles
I want to help our new members become a productive part of ubuntu quality as soon as possible. To this end I've created a list of roles for the quality team. By defining roles within the team it is easy to see how you can contribute as a 'tester', 'test writer', or 'developer'.

Contribute anytime
While release milestones and calls for testing will still be important, contributing to ubuntu quality can be a daily task. There are activities you can perform any day and anytime in the cycle. The roles pages list activities for each role that can be done right now. If you are a tester, check out the activities you can do right now to help!

More exploratory testing
We're iterating faster and faster. Builds of new code are landing each day, and in some cases several times a day. We can't afford to only test every other week with cadence testing. Instead, we've ramped up efforts to automatically test on each of these new builds. But we still need manual testing!

As a tester you can provide testing results for images, packages and hardware at any time! In addition, exploratory testing is highly encouraged. This is where we as manual testers shine! I want to encourage ongoing exploratory testing all throughout the cycle. Run and use the development version of ubuntu on your machine all the time!

Tackling some big projects
One of the things I wanted to push us as a team towards was tackling some projects that have a wider impact than just us. To that end, you can see several big projects on the activities pages for each role.

For testers, we are working on making issue reporting better for users. For the test writers, one of the largest projects is spearheading the effort to make manual image testing less burdensome and more automated by automating ubiquity testing. And for developers, the autopilot emulators are big projects as well and need help.

More involvement with bugs
As a quality community we interact with many bugs. Sometimes we are finding the bug; other times we might confirm it or verify a fix works. However, we haven't always gone the extra step of doing work with SRU verifications and bug triage. The bugsquad and others have traditionally performed these tasks. You'll notice these activities are now encouraged for those in the tester role. Let's dive more into the bug world.

A potential expansion of the team
With the mention of the bugsquad and the encouraged involvement with bugs for our team, I would like to propose a union between the quality and bugsquad teams. I would encourage current bugsquad members to take up tester roles, and consider some of the additional opportunities that are available. For those who have been testing with the quality team in past cycles, let's get more involved with bugs and traditional bugsquad activities as mentioned above.


Making a quality LTS
Trusty Tahr is going to be the next LTS release. Those of you who remember and use Precise will agree that it is going to be a tough act to follow. The bar is set high, but I am confident we can reach it and do better.

We can't do this alone. We need testers, test writers, and developers! If you are interested in helping us achieve our goal, please join us! Now is an excellent time to learn and grow with the rest of us on the team. Thanks for helping make ubuntu better!

Read more
Nicholas Skaggs

(c) http://www.flickr.com/photos/lalalaurie/
Here are 3 easy steps to testing prowess. Do your part to be prepared! Let's help make saucy a great release!

1) Review both the critical bugs and those found during isotesting throughout the cycle. This is handy in case you find something during testing and want to know if someone has already reported it or not.

2) Make sure you know how to use the tracker and how to test an iso. When the milestone is created and the images appear on October 10th, you want to be ready to go!

3) Test and report your results to the tracker ;-) Note, at this point in the cycle we prefer real hardware results. Upgrade your machine! Try a fresh install. If you've been sitting on raring, now is the time to migrate to saucy and let us know of any bugs you find.

Happy Testing!

Read more
Nicholas Skaggs

We're still here and we're still testing!



And you can still join us!

Grab your nexus device, check out the wiki page, and flash away. File bugs, and keep on top of the updates coming out. Use the device as you would any other phone. Oh, and check out the app development contest winners by installing the apps from the click store on the phone (check out the more suggestions section).

The release is only 2 weeks away. That's 14 days or a mere 336 hours. And we'll be spending 112 hours sleeping (perhaps less, get some sleep!). That leaves just 224 hours for testing. Join us!

#UbuntuTestDanceMaster out.

Read more
Nicholas Skaggs

As I alluded to in my previous post, it's time for us as a community to fully embrace the place of automated testing in our workflows. Over the past year we've learned how to write testcases and to then apply that knowledge towards writing autopilot tests. At the same time, ubuntu has been building testing infrastructure and launching a CI team to help run and maintain it.

With these changes and our acquired knowledge we can construct a sustainable vision for testing. So just what is that vision?

Let's make manual testing fun again. As we've ramped up our workloads, we find ourselves repetitively testing the same packages again and again (and running the same tests!). We'd call this regression testing, and it's a perfect candidate for automated testing. Let's automate these tests and keep our focus where we as humans excel.

In addition, we as a community participate in "calls for testing". These calls have become more and more exploratory in nature. In general, we have moved further away from a defined testcase ("perform steps 1, 2, and 3"). Instead, we encourage testers to adopt a try-and-break-it attitude. This is where we as humans can excel, and, you know what, have some fun trying! Remember, QA is the only place we encourage you to break things!

So let's get manual testing back to the exploratory testing we love.

Thank you little testing robot!
Let's expand our automated tests. We can increase testing coverage forever with a one-time investment in writing a testcase. Developers, write tests for your code and feature sets as you develop them. As a quality community, let's do our best to make it easy for developers to do so.

In addition, a well-written bug report can become an excellent testcase to be added to the application's testsuite. Let's look at bug reports for potential testcases. We can write the test and then forget it. The little robots will tell us if it becomes a problem again in the future, and even prevent the bad code from getting shipped where it can break our machines (again). Not bad, little robots!

So let's focus on ensuring tests exist for critical regressions when they get fixed.

Let's tackle some big projects. Who said we can't achieve amazing things in the quality community? This vision would be nothing more than an idea without some actions behind it!

Right now we as a community already have a chance to put these ideas in action. Exploratory testing is happening right now on the phablet devices. Get involved! This is the manual, have fun and try and break it, exploratory testing we all love. Even without a phablet device, regressions can be turned into autopilot tests. This meets our goal to expand our automated tests and to look at bugs as potential sources for those tests.

All test and no play makes for a sad image
Moving forward there are still some big projects to tackle. One of the largest is the amount of manual testing effort required in cadence and milestone testing. We can bring automated testing to the mix to both reduce our ongoing effort and raise the quality bar in our releases.

So let's be the leaders for change in ubuntu.

Are you in?

Read more
Nicholas Skaggs

Spreading the (testing) weight

With all the buzz and excitement around quality over the last few cycles, we've been busy. We've built tools, started processes, written testcases and committed ourselves to testing. When the time came for us to write new applications for the ubuntu touch platform, we took our quality aspirations with us and tested all through development. The "core apps" for the phone are a perfect example of this; community-developed software, including tests. Kudos to everyone who is taking part in these efforts.

However, as more and more tests came online, new problems developed. How can we run these tests? When do we want to run them? How can we develop more? What tools do we use? The quality team met all of these challenges, but the weight started to grow heavier.

Rob Macklem Victoria BC derivative: Plasticspork [CC-BY-SA-3.0]
Something needed to be done. Before the bar breaks or our arms give out, let's split the weight. And thus a new team was formed.

Ivanko Barbell Company

Enter the CI team. Behind the scenes running tests, getting our infrastructure in place, and chasing down regressions, the CI team has their hands full. In a nutshell, they will ensure all the tests are run as needed. If a test fails, they will chase it down and ensure the right folks are notified so fixes can be made. Oh, and yes, that means potential bugs will never even hit your system, dear user.

The QA team's mission then gets to be refocused. They will continue to write tests, develop tools if needed, and continue to push for greater quality in ubuntu and upstream. However they no longer have to balance these tasks with ensuring builds stay green, or writing a dashboard to review results. The result is a better focus and the opportunity to do more.

So how does this affect us as a community? The simple answer is that we too can take advantage of this new testing infrastructure and CI team. Let's focus on what coverage we need and what the best way to achieve it might be. We'll also be able to align our goals and ambitions even more with the QA team. I'll share more of my thoughts on this in the next post.

Read more
Nicholas Skaggs

Autopilot Sandboxing

Autopilot has continued to change with the times, and the pending release of 1.4 brings even more goodies, including some performance fixes. But today I wanted to cover a newly landed feature from the minds of Martin and Jean-Baptiste (thanks guys!).

If you've developed autopilot tests in the past, you will have noticed how cumbersome it can be to run the tests. If you run a test on your desktop, you lose control of your mouse and keyboard for the duration, and you might even accidentally cause a test to fail. This can be especially noticeable when you are iterating over getting your test to run "just right" while wanting to keep your introspection tree open in vis, or reviewing someone else's code while wanting to verify the tests run. Enter sandbox mode.

A new command called autopilot-sandbox-run lets you easily run a testsuite inside your choice of two sandboxes: Xvfb by default, or, if you want to see the output visually, Xephyr. Have a quick look at the command options below, current as of this writing:

Usage: autopilot-sandbox-run [OPTIONS...] TEST [TEST...]
Runs autopilot tests in a 'fake' Xserver with Xvfb or Xephyr. autopilot runs
in Xvfb by default.
   
    TEST: autopilot tests to run

Options:
    -h, --help           This help
    -d, --debug          Enable debug mode
    -a, --autopilot ARG  Pass arguments ARG to 'autopilot run'
    -X, --xephyr         Run in nested mode with Xephyr
    -s, --screen WxHxD   Sets screen width, height, and depth to W, H, and D respectively (default: 1024x768x24)
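
Putting the options together, a couple of sketched invocations (the suite name below is just a placeholder for your own):

```shell
# Run a suite headlessly in the default Xvfb sandbox,
# passing -v through to 'autopilot run':
autopilot-sandbox-run -a "-v" ubuntu_clock_app
# Or watch it run in a nested Xephyr window at a custom size:
autopilot-sandbox-run -X -s 800x600x24 ubuntu_clock_app
```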


The next time you want to get your hands dirty with some autopilot tests, try out the sandbox. I'm sure you'll find a very nice use for it in your workflow; after all, wouldn't it be handy to run multiple testsuites at once?

Read more
Nicholas Skaggs

As of today, we are exactly one month away from the release of Saucy Salamander. As part of that release, ubuntu is committed to delivering an image of ubuntu-touch, ready to install on supported devices.

And while folks have been dogfooding the images since May, many changes have continued to land as the images mature. As such, the qa team is committing to test each of the stable images released, and do exploratory testing against new features and specific packagesets.

If you have a device, I would encourage you to join this effort! Everything you need to know can be found on this wiki page. You'll need a nexus device and a little time to spend with the latest image. If you find a bug, report it! The wiki has links to help. Testing doesn't get any more fun than this; flash your phone and try to break it! Go wild!

And if you don't own a device? You can still help! As bugs are found and fixed, the second part of the process is to create automated tests for them so they don't occur again. Any bug you see on the list is a potential candidate, but we'll be marking those we especially think would be useful to write an autopilot test for with a "touch-needs-autopilot" tag.

Join us in testing, confirming bugs, or writing autopilot tests. We want the ubuntu touch images to be the best they can be in 1 month's time. Happy Testing!

Read more
Nicholas Skaggs

Jamming Quality Style

It's that time of year again! Time to get your jam on (I like mine on a bagel).
  

While you are making plans for Ubuntu Global Jam, don't forget you can contribute to quality as well. There's a separate subpage of the global jam wiki dedicated to it.

We love new test contributions, and there's a collection of wiki tutorials and videos to help you contribute them. You don't have to be technical to write tests -- we also need manual testcases, which are written in plain English :-)

More interested in submitting your results for tests? We've also got you covered. We have tests for the default applications of ubuntu as well as the images of ubuntu. Download an image and run it on your machine. Try running through some default testcases for ubuntu or your favorite flavor. An image and a PC or laptop is all you need to get started. Happy Jamming!

Read more
Nicholas Skaggs

As mentioned in my last post, Mir is one of the biggest changes coming in 13.10. With feature freeze now happening this week, it's time to amp up our testing engines once more to test the final features and help land Mir into the archive.

The Mir team has put together both a ppa and wiki page that contain all the information you need to help with testing. The testing window closes in 2 days on August 28th, just in time for feature freeze. The biggest change for Mir is the inclusion of multi-monitor support, which is thus a focus for this testing. So here are the details you need to know.

What?
Help test Mir using your current system, ubuntu saucy and the Mir team ppa.

When?
Now through August 28th.

How?
The full instructions for installing the ppa, running the tests, and reporting the results can be found on this wiki page. Results are reported on this page or via the package tracker testing page.

Thank you for your contributions! Good luck and Happy Testing Everyone!

Read more
Nicholas Skaggs

Automated Testing in ubuntu

So with all the automated testing buzz occurring in the quality world of ubuntu this cycle, I wanted to speak a little about why we're doing the work, and what the output of the work looks like.

The Why
So, why? Why go through the trouble of writing tests for the code you write? I'll save everyone a novel and avoid re-hashing the debate about whether testing is a proper use of development time. Simply put, for developers, it will prevent bugs from reaching your codebase, alleviate support and maintenance burdens, and give you more confidence and feedback during releases. If you want your application to have that extra polish and completeness, testing needs to be part of the process. For users, the positives are similar and simple. Using well-tested software prevents regressions and critical bugs from affecting you. Every bug found during testing is a bug you don't have to deal with as an end user. Incidentally, if you like finding bugs in software, we'd love to have you on the team :-)

The How
In general, three technologies are being used for automated testing.
Autopilot, Autopkg, and QML tests. Click to learn more about writing tests for each respectively.

Any color is good, as long as it's green
The Results
The QA Dashboard
The dashboard holds most of the test results, and gives you a much nicer view of the results than just looking at a jenkins build screen. Take some time to explore all of the different tabs to see what's available here. I wanted to highlight just a couple areas in particular.
  • Autopkg Tests 
    • These testcases run at build time and are great for low-level libraries and integral parts of ubuntu. Check out the guide for help on contributing a testcase or two. Regressions have been spotted and fixed before even hitting the ubuntu archives.
  • Smoke Tests for ubuntu touch
    • These are some of the newest tests to come online; they display the results of the ubuntu phone image and applications, including the core apps, which are written entirely by community members. Want to know how well an image is running on your device? This is the page to find it.
Ubiquity Installer Testing
Curious about how well the installer is working? Yep, we've got tests for those as well. The tests are managed via the ubiquity project on launchpad.

Ubuntu Desktop Tests
Wondering how well things like nautilus, gedit and your other favorite desktop applications are doing? Indeed, thanks to our wonderful quality community, we've got tests for those as well. The tests can be found here.

The Next Steps
We want to continue to grow and expand all areas of testing. If you've got the skills or the willingness to learn, try your hand at helping improve our automated testcases. There's a wide variety to choose from, and all contributions are most welcome!

Sorry robot, all our tests are belong to us
Don't have those skills? Don't worry, not only are machines not taking over the world, they aren't taking over testing completely either. We need your brainpower to help test other applications and to test deeper than a machine can do. Join us for our cadence weeks and general calls for testing and sample new software while you help ubuntu. In fact, we're testing this week, focusing on Mir.

Finally, remember your contributions (automated or manual) help make ubuntu better for us all! Thank you!

Read more
Nicholas Skaggs

Feature freeze coming? Let's test!

We're already approaching feature freeze at a quickening pace, and thus the next few weeks are rather important to us as a testing community. 13.10 is landing in October, which is now rapidly approaching (where did the summer go?!).

What?
Let's run through some manual tests for ubuntu and flavors. I'd like to ask for a special focus to be given to Mir/xMir. We plan to have a rigorous test of the package again in about a week once all features have landed. In the interim, let's try and catch any additional bugs.

When?
This week Saturday August 17th through Saturday August 24th. It's week 5 for our cadence tests.

Ok, I'm sold, what do I need to do?
Execute some testcases against the latest version of saucy; in particular the xMir test.

Got any instructions?
You bet, have a look at the Cadence Week testing walkthrough on the wiki, or watch it on youtube. If you get stuck, contact us.

Where are the tests?
You can find the Mir test in its own milestone here. Remember to read and follow the installation instructions link at the top of the page!
The rest of the applications and packages can be found here.

I don't want to run/install the unstable version of ubuntu, can I still help?
YES! Boot up the livecd on your hardware and run the live session, or use a virtual machine to test (install ubuntu or use a live session). The video demonstrates using a virtual machine booting into a live session to run through the tests. For the Mir/xMir tests, however, we'd really like results from real hardware.

But, virtual machines are scary; I don't know how to set one up!
There's a tool called testdrive that makes setting up a vm with ubuntu development a point and click operation. You can then use it to test. Seriously, check out the video and the walkthrough for more details.

Thank you for your contributions! Good luck and Happy Testing Everyone! 

Read more
Nicholas Skaggs

Autopilot best practices

I've now had the pleasure of writing autopilot tests for about 9 months, and along the way I've learned or been taught some of the things that are important to remember.

Use Eventually
The eventually matcher provided by autopilot is your best friend. Use it liberally to ensure your code doesn't fail because of a millisecond difference at runtime. Eventually will retry your assert until it's true or it times out. When combined with examining or selecting an object, Eventually will ensure your test failure is a true failure and not a timing issue. Also remember you can use a lambda if you need to wrap a plain function call so Eventually can retry it.
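To see why this matters, here's a minimal, self-contained sketch of the retry-until-timeout behaviour Eventually gives you. This is an illustration in plain Python, not autopilot's actual implementation; the commented lines show what the real test code looks like.

```python
import time

def eventually(predicate, timeout=10.0, interval=0.1):
    """Retry a zero-argument predicate until it returns True or the
    timeout expires -- a rough sketch of what autopilot's Eventually
    matcher does behind the scenes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()

# In a real autopilot test you would write something like:
#   self.assertThat(button.visible, Eventually(Equals(True)))
# and, for a plain function call, wrap it in a lambda:
#   self.assertThat(lambda: app.count_rows(), Eventually(Equals(3)))

# Toy demonstration: a condition that only becomes true after a short delay
# still passes, because the check is retried rather than made once.
ready_at = time.monotonic() + 0.3
assert eventually(lambda: time.monotonic() >= ready_at)
```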

Assert more!
Every test can use more asserts -- even my own! Timing issues can rear their ugly head again when you fail to assert after performing an action.

  • Everytime you grab an object, assert you received the object
    • You can do this by asserting the object NotEquals(None); remember to use Eventually: Eventually(NotEquals(None))!
  • Everytime you interact with the screen, try an assert to confirm your action
    • Click a button, assert
    • Click a field to type, assert you have focus first
      • You can do this by using the .focus property and asserting it to be True
      • Finished typing?, assert your text matches what you typed
        • You can do this by using the .text property and asserting it to be Equal to your input
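The interact-then-assert pattern above looks roughly like this. FakeTextField here is a stand-in for a real introspected widget, purely to show the shape; in an actual autopilot test you would use self.assertThat with Eventually rather than bare asserts.

```python
class FakeTextField:
    """Stand-in for an introspected widget, purely for illustration."""
    def __init__(self):
        self.focus = False
        self.text = ""

    def click(self):
        self.focus = True

    def type_text(self, s):
        self.text += s

field = FakeTextField()
assert field is not None           # grabbed the object? assert it exists
field.click()                      # interact with the screen...
assert field.focus is True         # ...then assert the click took effect
field.type_text("hello")
assert field.text == "hello"       # finished typing? assert text matches
```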
Don't use strings, use objectNames
We all get lazy and just issue selects with English label names. These will break when run in a non-English language. They will also break when we decide to update the string to something more verbose or just different. Don't do it! That includes things like tab names, button names and label names -- all common rulebreakers.

Use object properties
They will help you add more asserts about what's happening. For instance, you can use the .animating property or .moving property (if they exist) to wait out animations before you continue your actions! I already mentioned the .focus property above, and you might find things like .selected, .state, .width, .height, .text, etc to be useful to you while writing your test. Check out your objects and see what might be helpful to you.

Interact with objects, not coordinates
Whenever possible, you should ensure your application interactions specify an object, not coordinates. If the UI changes, the screen size changes, etc, your test will fail if you're using coordinates. If your interaction is emulating something like a swipe, drag, or pinch action, ensure you utilize relative coordinates based upon the current screen size.
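As a sketch of the relative-coordinates idea (swipe_coords is a hypothetical helper, not part of autopilot's API), you can compute gesture endpoints as fractions of the current screen size so the same swipe works on any display:

```python
def swipe_coords(screen_w, screen_h,
                 start_frac=(0.5, 0.9), end_frac=(0.5, 0.1)):
    """Convert fractional screen positions into absolute pixel
    coordinates, so a gesture scales with the screen size."""
    sx, sy = int(screen_w * start_frac[0]), int(screen_h * start_frac[1])
    ex, ey = int(screen_w * end_frac[0]), int(screen_h * end_frac[1])
    return (sx, sy), (ex, ey)

# A bottom-to-top swipe up the middle of a 1024x768 screen:
start, end = swipe_coords(1024, 768)
assert start == (512, 691) and end == (512, 76)
```

The same call with a different resolution yields proportionally placed coordinates, which is exactly why hard-coded pixel values break when the screen size changes.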

Use the ubuntusdk emulator if you are writing a ubuntusdk application
It will save you time, and ensure your testcase gets updated if any bugs or changes happen to the sdk; all without you having to touch your code. Check it out!

Read the documentation best practices
Yes, I know documentation is boring. But at least skim over this page on writing good tests. There are a lot of useful tidbits lurking in there. The gist is that your tests should be self-contained, repeatable and test one thing or one idea.

Looking over this list, many of the best practices involve avoiding bugs related to timing. You know the drill: run your testcase and it passes. Run it again, or run it in a virtual machine, on a slower device, etc., and it fails. It's likely you have already experienced this.

Why does this happen? Well, it's because your test is clicking and interacting without verifying the changes occurring in the application. Many times it doesn't matter, and the built-in delay between your actions will be enough to cover you. However, that is not always the case.

So, adopt these practices and you will find your testcases are more reliable, easier to read and run without a hitch day in and day out. That's the sign of a good automated testcase.

Got more suggestions? Leave a comment!

Read more