Canonical Voices

Nicholas Skaggs

Starting tomorrow, February 9th, 2013 (heh, some of you reading this might already be in tomorrow), the quality community team will start testing for cadence week #6. During this week, we as a team try to help test specific packages, looking for regressions, doing new feature or hardware testing, and making sure our images are in good shape. If you're still confused, there's a nice wiki page that lays out what "cadence" means in a bit more detail.

So what does this mean for you, dear reader? Well, we as a team would like you to be involved in helping us test! Everyone has unique ways of interacting with software, and naturally no two computer setups are exactly the same between us. Now, I know what you're thinking -- how can I help? I'm no tester, and I don't run development versions of ubuntu!

That's ok! You can still help test without needing to compromise your system. If you don't want to install the development version on your machine, you can use a virtual machine installation instead. If you are unable to run virtual machines, or are confused at the idea, you can still help test by simply running a live session and executing tests there. It's not too hard for you! Check out this walk-through for participating using only an image of the development version of ubuntu and your computer.

To help demonstrate how you can participate, I'll be hosting two live events this coming week where I'll be on hand, running through the cadence week tests along with others from the quality team. There will even be a livestream, so if you're a visual person (like me!), you can see for yourself how you might be able to contribute. Here are the dates you need to know:

Monday Feb 11th, 1800-1900 UTC in #ubuntu-quality. I'll also be streaming my participation in executing the tests live.

Thursday Feb 14th, 1400-1500 UTC in #ubuntu-quality. No stream, but we'll be hanging out answering questions, and working on submitting test results.


Please consider attending a session or watching the video of the stream afterwards. If you can download an image and boot your computer, you can help test. You want to be a part of ubuntu; let us help you contribute!

Read more
Nicholas Skaggs

PSA: Ubuntu Quality wants you!


NOTICE: To whom it may concern, the ubuntu quality team is seeking those with a desire to help ubuntu to contribute to the quality and testing efforts. With a little time and a willingness to learn, you too can unlock the tester within you!

Interested? Please inquire below!

If that text didn't get you, I hope the picture did. Seriously though, if you are here reading this page, I want to offer you an opportunity to help out. We as a team have expanded our activities and projects this cycle and we want to extend an offer for you to come along and learn with us. We're exploring automated testing with autopilot and autopkg, manual testing of images, and the virtues of testing in a regular cadence.

But we can't do it alone, nor do we wish to! We'd love to hear from you. Please have a look at our getting involved page (but do excuse the theme dust!) and get in touch. I offered a challenge to this community in the past, and I was blown away by the emails that flooded my inbox. Send me an email, tell me your interests, and ask me how you can help. Let me help get you started. Flood my inbox again[1]. Let's make ubuntu better, together!

1. If anyone is counting, I believe the record is ~100 emails in one 24 hour period :-p

Read more
Nicholas Skaggs

Introspecting with Autopilot

If you remember our post from last time, we had gone through writing our first testcase for firefox, converting a simple manual testcase to utilize autopilot instead. In this post I'd like to talk about how introspection can be used to perform some more complicated automated testcases.

First of all, let's briefly define what we mean by introspection. Specifically, we're talking about introspecting the dbus session for an application on our screen. Trust me, it sounds worse than it is. I'll let you do your own googling if you are curious to learn more. For the rest of us, let's just have a look visually at what we're talking about :-)

If you've got autopilot installed (check out the previous post: add the autopilot PPA, then sudo apt-get install python-autopilot), you should be able to launch the visualization tool.

autopilot vis

A window should launch and allow you to select a connection, which lets you choose the application you wish to introspect. Go ahead and select 'Unity'. If the bareness of the tool scares you, remember it's a development aid, not your browser ;-)

Ok, under the Tree node, you should find a giant list of nodes and properties for Unity. It may come as a surprise that your desktop is providing this much data about what's going on right now. For instance, have a look under PanelController->Indicators. There are entries for each indicator you are running, along with properties about each one. Ok, so what if you're not running Unity? Or, for our purposes, what if you wish to write a test for an application on your desktop?

Never fear, we can use another feature of autopilot to help launch and introspect an application using the same tool. Go ahead and close the visualization window and enter the following.

autopilot launch gedit
autopilot vis

Notice now we have a new connection called 'Root'. Select it and you'll see the node tree for the gedit window you just launched. The number of nodes spawned is a bit overwhelming, but you can now use this data to make assertions about what's going on when you interact with the application.

As a quick example, let's say I wanted to know the size of the current gedit window. If we look under 'GeditWindow->globalRect' we can see the current position, and infer the size of the window as well. We can also see things like the title, is_active, and other 'xwindow-ish' properties.

In addition, I can find out what buttons are present on the gedit toolbar of the current gedit window. If we look under 'GeditWindow->GtkToolbar' we can see several GtkToolButton nodes. Each has a set of properties, including a name and label.
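To make this concrete, here's a tiny sketch (illustrative only, not code from the branch) of how a test might read those same properties, assuming self.app is an introspection proxy for the running gedit and using the select_single lookup we'll meet in a moment:

window = self.app.select_single('GeditWindow')  # grab the top-level window node
print(window.globalRect)   # position and size of the window on screen
print(window.title)        # the window title we saw in the vis tool
print(window.is_active)    # whether the window currently has focus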

So, let's put it all together for a quick example.
 
bzr branch lp:ubuntu-autopilot-tests

Inside the resulting directory you'll notice a geditintrospection folder.

cd ubuntu-autopilot-tests
autopilot run geditintrospection

A gedit window should spawn and disappear -- and hopefully the one testcase should pass. Open up the file geditintrospection/test_geditintrospection.py. Inside you'll notice we're using some of the properties we found while introspecting to show off how we can utilize them to test gedit. Let's cover some of the new functions briefly. You can use the autopilot documentation for more information on what you see.

    def select_single(self, type_name='*', **kwargs):
        """Get a single node from the introspection tree, with type equal to
        *type_name* and (optionally) matching the keyword filters present in
        *kwargs*.
        """

    def select_many(self, type_name='*', **kwargs):
        """Get a list of nodes from the introspection tree, with type equal to
        *type_name* and (optionally) matching the keyword filters present in
        *kwargs*.
        """

  
These two functions allow us to get back the nodes that match our query. For example, you can see the first step of the testcase is to check for a New File button, which is on the gedit toolbar. After asserting it exists, we then click it. Load up gedit for yourself and use the vis tool to confirm. You'll find the node under GeditWindow->GtkBox->GtkToolbar->GtkToolButton. Each button is represented; in this case we pulled the one with label='_New', representing the New File button.
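In sketch form (an illustration of the pattern rather than a verbatim excerpt from the branch), that lookup and click boils down to something like this, with NotEquals coming from testtools.matchers:

# List every toolbar button, then single out the New File button by its label.
buttons = self.app.select_many('GtkToolButton')
new_button = self.app.select_single('GtkToolButton', label='_New')
self.assertThat(new_button, NotEquals(None))  # assert the button exists...
self.mouse.move_to_object(new_button)         # ...move the pointer over it...
self.mouse.click()                            # ...and click it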

Later we actually interact with gedit by turning on overwrite mode. Normally we would be unable to verify whether this actually worked, but you'll notice we once again check for the GtkLabel change from INS to OVR. Again, you can see this under GeditWindow->GtkBox->GeditStatusbar->GtkLabel.

Finally, you'll notice some slight differences from our non-introspection testcase. We now use GtkIntrospectionTestMixin, and launch our application via the launch_test_application function rather than start_app. I've shown GTK-based examples here, but introspection works with Qt too (using QtIntrospectionTestMixin), so don't be afraid to try it out on any application you are interested in.
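Putting those pieces together, the overall shape of an introspection testcase looks roughly like the sketch below. This is based on the API names mentioned above; the authoritative code is in the branch, and details such as the import path and key names may differ slightly.

from autopilot.testcase import AutopilotTestCase
from autopilot.introspection.gtk import GtkIntrospectionTestMixin
from testtools.matchers import NotEquals


class GeditIntrospectionTests(AutopilotTestCase, GtkIntrospectionTestMixin):

    def setUp(self):
        super(GeditIntrospectionTests, self).setUp()
        # launch_test_application starts gedit with introspection enabled
        # and hands back a proxy object for querying the node tree.
        self.app = self.launch_test_application('gedit')

    def test_overwrite_mode(self):
        # Toggle overwrite mode with the Insert key...
        self.keyboard.press_and_release('Insert')
        # ...then verify the statusbar label changed from INS to OVR.
        ovr_label = self.app.select_single('GtkLabel', label='OVR')
        self.assertThat(ovr_label, NotEquals(None))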

You may also have noticed you branched from a project: ubuntu-autopilot-tests. I'm happy to announce this is the master repository for all the autopilot testcases we'll be writing as a community. Interested in helping contribute? Come join us and get involved! We're tracking work items, including proposed tests for you to work on. In addition, the tests themselves will soon be running on jenkins for everyone in the community to benefit.

Now, introspection is still new, and we as a Quality Community team are excited about adopting and utilizing the tool. There might be some bugs and feature requests (wink, wink autopilot team!) to work out, but we are excited to build a repository of automated tests together.

NOTE: Due to an error during build, it appears the autopilot online documentation is absent or missing pieces. If this occurs, please use the local documentation installed on your machine as part of the autopilot package. You'll find a copy you can browse at /usr/share/doc/python-autopilot/html/index.html.

Read more
Nicholas Skaggs

Jamming Thursdays!

Right now, as I type, we have two jams going on! Last week Jono posted about enhancing the ubuntu.com/community page. If you're a part of the community, join in raising the banner for your specific focus area. The fun is happening now on #ubuntu-docs. For the full details, see Jono's post. For us quality folks, the pad is here: http://pad.ubuntu.com/communitywebsite-contribute-quality. Feel free to type and edit away!

In addition, as Daniel Holbach mentioned, there is a hackathon for automated testing. Come hang out with us on #ubuntu-quality, learn, ask and write some tests. Again, the full details can be found on Daniel's post.

Come join us!

Read more
Nicholas Skaggs

Our first Autopilot testcase

So last time we learned some basics for autopilot testcases. We're now going to use the same code branch we pulled to cover writing an actual testcase.

bzr branch lp:~nskaggs/+junk/autopilot-walkthrough

As a practical example, I'm going to convert our (rather simple and sparse) firefox manual testsuite into an automated test using autopilot. Here's a link to the testcase in question.

If you take a look at the included firefox/test_firefox.py file you should recognize its basic layout. We have a setup step that launches firefox before each test, and then there are 3 testcases corresponding to each of the manual tests. The file is commented, so please do have a look through it. We utilize everything we learned last time to emulate the keyboard and mouse to perform the steps mentioned in the manual testcases. Enough code reading for a moment, let's run this thing.

autopilot run firefox

Ok, so hopefully you had firefox launch and run through all the testcases -- and they all, fingers-crossed, passed. So, how did we do it? Let's step through the code and talk about some of the challenges faced in doing this conversion.

Since we want to test firefox in each testcase, our setUp method is simple: launch firefox and set the focus to the application. Each testcase then starts with that assumption. Inside test_browse_planet_ubuntu we simply attempt to load a webpage. Our assertion for this is to check that the application title changes to "Planet Ubuntu" -- in other words, that the page loaded. The other two testcases expand upon this idea by searching wikipedia and checking for search suggestions.
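Stripped to its essentials, that first test looks something like this sketch. The keyboard emulation calls are real autopilot APIs, but get_window_title() is a hypothetical stand-in for however the title is read; the genuine assertion lives in firefox/test_firefox.py.

def test_browse_planet_ubuntu(self):
    # Focus the location bar, type the address, and load the page.
    self.keyboard.press_and_release('Ctrl+l')
    self.keyboard.type('planet.ubuntu.com')
    self.keyboard.press_and_release('Enter')
    # Our one assertion: the window title becomes "Planet Ubuntu",
    # i.e. the page loaded. get_window_title() is illustrative only.
    self.assertThat(self.get_window_title(), Equals('Planet Ubuntu'))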

The test_search_wikipedia method uses the keyboard shortcut to open the search bar, select wikipedia, and then search for linux. Again, our only assertion for success here is that a page with "Linux" and "wikipedia" in the title loaded. We are unable to confirm, for instance, that we properly selected wikipedia as the search engine (although the final assertion would likely fail if this was not the case).

Finally, the test_google_search_suggestions method attempts to test that the "search suggestions" feature of firefox is performing properly. You'll notice that we are missing the assertion for checking for search suggestions while searching. With the knowledge we've gained up until now, we don't have a way of knowing whether the list is generated or not. In actuality, this test cannot be completed, as the primary assertion cannot be verified without some way of "seeing" what's happening on the screen.

In my next post, I'll talk about what we can do to overcome the limitations we faced in doing this conversion by using "introspection". In a nutshell, by using introspection, autopilot will allow us to "see" what's happening on the screen by interacting with the application's data. It's a much more robust way of "seeing" what we see as a user, rather than reading individual screen pixels. With any luck, we'll be able to finish our conversion and look at accomplishing bigger tasks and tackling larger manual testsuites.

I trust you were able to follow along and run the final example. Until the next blog post, might I also recommend having a look through the documentation and try writing and converting some tests of your own -- or simply extend and play around with what you pulled from the example branch. Do let me know about your success or failure. Happy Testing!

Read more
Nicholas Skaggs

Getting started with Autopilot

If you caught the last post, you'll have some background on autopilot and what it can do. Start there if you haven't already read the post.

So, now that we've seen what autopilot can do, let's dig in to making this work for our testing efforts. A fair warning, there is some python code ahead, but I would encourage even the non-programmers among you to have a glance at what is below. It's not exotic programming (after all, I did it!). Before we start, let's make sure you have autopilot itself installed. Note, you'll need to get the version from this ppa in order for things to work properly:

sudo add-apt-repository ppa:autopilot/ppa
sudo apt-get update && sudo apt-get install python-autopilot

Ok, so first things first. Let's create a basic shell that we can use for any testcase that we want to write. To make things a bit easier, there's a lovely bazaar branch you can pull from that has everything you need to follow along.

bzr branch lp:~nskaggs/+junk/autopilot-walkthrough
cd autopilot-walkthrough

You'll find two folders. Let's start with the helloworld folder. We're going to verify autopilot can see the testcases, and then run and look at the 'helloworld' tests first. (Note, in order for autopilot to see the testcases, you need to be in the root directory, not inside the helloworld directory)

$ autopilot list helloworld
Loading tests from: /home/nskaggs/projects/

    helloworld.test_example.ExampleFunctions.test_keyboard
    helloworld.test_example.ExampleFunctions.test_mouse
    helloworld.test_hello.HelloWorld.test_type_hello_world

 3 total tests.


Go ahead and execute the first helloworld test.

autopilot run helloworld.test_hello.HelloWorld.test_type_hello_world
 
A gedit window will spawn, and type hello world to you ;-) Go ahead and close the window afterwards. So, let's take a look at this basic testcase and talk about how it works.

from autopilot.testcase import AutopilotTestCase

class HelloWorld(AutopilotTestCase):

    def setUp(self):
        # Runs before every test: launch gedit ("Text Editor") and keep
        # a reference to the running application.
        super(HelloWorld, self).setUp()
        self.app = self.start_app("Text Editor")

    def test_type_hello_world(self):
        # Emulate the keyboard to type into the focused window.
        self.keyboard.type("Hello World")

If you've used other testing frameworks that follow in the line of xUnit, you will notice the similarities. We implement an AutopilotTestCase subclass (class HelloWorld(AutopilotTestCase)), and define a new method for each test (i.e., test_type_hello_world). You will also notice the setUp method. This is called by the testrunner before each test is run. In this case, we're launching the "Text Editor" application before we run each test (self.start_app("Text Editor")). Finally our test (test_type_hello_world) simply sends keystrokes to type out "Hello World".

From this basic shell we can easily add more testcases to the helloworld testsuite by adding a new method. Let's add some simple ones now to show off some other capabilities autopilot has for controlling the mouse and keyboard. If you branched the bzr branch, there are a few more tests in the test_example.py file. These demonstrate some of the utility methods AutopilotTestCase makes available to us. Try running them now. The comments inside the file also explain briefly what each method does.

autopilot run helloworld.test_example.ExampleFunctions.test_keyboard
autopilot run helloworld.test_example.ExampleFunctions.test_mouse
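To give you a flavor of those utility methods before you open the file, here's a minimal sketch of the sort of thing the two example tests do (illustrative; the branch has the real, commented versions):

def test_keyboard(self):
    # Every AutopilotTestCase gets a keyboard emulator.
    self.keyboard.type("autopilot is typing this")

def test_mouse(self):
    # ...and a mouse emulator: move to screen coordinates, then click.
    self.mouse.move(100, 100)
    self.mouse.click()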

Now there is more that autopilot can do, but armed with this basic knowledge we can put the final piece of the puzzle together. Let's create some assertions, or things that must be true in order for the test to pass. Here's a testcase showing some basic assertions.

autopilot run helloworld.test_example.ExampleFunctions.test_assert
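Autopilot testcases use the testtools matchers for assertions, so the body of an assertion test looks something like this sketch (not the exact contents of test_assert):

from testtools.matchers import Equals, NotEquals

def test_assert(self):
    greeting = "Hello World"
    # assertThat pairs a value with a matcher; the test fails on a mismatch.
    self.assertThat(greeting, Equals("Hello World"))
    self.assertThat(greeting, NotEquals("Goodbye"))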
  
Finally, there are some standards that are important to know when using autopilot. You'll notice a few things about each testsuite.

  • We have a folder named after the testsuite.
  • Inside the folder, we have a file named test_testsuite.py (test_ followed by the testsuite name).
  • Inside the file, we have a TestSuite class containing test_testcase_name methods.
  • Finally, in order for autopilot to see our testsuite, we need to let python know there is a submodule in the directory. Ignoring the geekspeak, we need an __init__.py file (this can be blank if not otherwise needed). The layout sketch below shows how these pieces fit together.
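Putting those conventions together, the helloworld suite from the branch looks like this on disk:

helloworld/
    __init__.py        (can be empty; it marks the folder as a python submodule)
    test_hello.py      (class HelloWorld, with test_type_hello_world)
    test_example.py    (class ExampleFunctions, with test_keyboard, test_mouse, ...)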
Given the knowledge we've just acquired, we can tackle our first testcase conversion! For those of you who like to work ahead, you can already see the conversion inside the "firefox" folder. But the details, my dear Watson, will be revealed in due time. Until the next post, cheerio!

Read more
Nicholas Skaggs

A glance at Autopilot

So, as has been already mentioned, automated testing is going to come into focus this cycle. To that end, I'd like to talk about some of the tools and methods for automated testing that exist and are being utilized inside ubuntu.

I'm sure everyone has used unity at some point, and you will be happy to know that there is an automated testsuite for unity. Perhaps you've even heard the name autopilot. The unity team has built autopilot as a testing tool for unity. However, autopilot has broader applications beyond unity to help us do automated testing on a grander scale. So, to introduce you to the tool, let's check out a quick demo of autopilot in action, shall we? Run the following command to install the packages needed (you'll need quantal or raring in order for this to work):

sudo apt-get install python-autopilot unity-autopilot

Excellent, let's check this out. A word of caution here, running autopilot tests on your default desktop will cause your computer to send mouse and keyboard commands all by itself ;-) So, before we go any further, let's hop over into a 'Guest Session'. You should be able to use the system indicator in the top right to select 'Guest Session'. Once you are there, you'll be in a new desktop session, so head back over to this page. Without further ado, open a terminal and type:

autopilot run unity.tests.test_showdesktop.ShowDesktopTests.test_showdesktop_hides_apps

This is a simple test to check and see if the "Show Desktop" button works. The test will spawn a couple of applications, click the show desktop button, and verify that your applications are hidden. It'll clean up after itself as well, so no worries. Neat eh?

You'll notice there are quite a few unity testcases, and you've now installed them all on your machine. You can list them:

autopilot list unity

As of this writing, I get 461 tests returned. Feel free to try and run them. Pick one from the list and see what happens. For example,

autopilot run unity.tests.test_dash.DashRevealTests.test_alt_f4_close_dash

Just make sure you run them in a guest session -- I don't want anyone's default desktop to get hammered by the tests!

If you are feeling adventurous, you can actually run all the unity testcases like this (this will take a LONG TIME!).

autopilot run unity

As a sidenote, you are likely to find that some of the testcases fail on your machine. The testsuite is run constantly by the unity developers, and the live, commit-by-commit results are actually available on jenkins. Check it out.

So in closing: this cycle we as a community have some goals around easing the burden of testing for ourselves, freeing our resources and minds for the deeper and more thorough testing that automation cannot handle. To help encourage this move of our basic testcases towards automation, the next series of blog posts will be a walkthrough on how to write Autopilot testcases. I hope to learn, explore, and discover along with all of you. Autopilot tests themselves are written in python, but don't let that scare you off! If you are able to understand how to test, writing a testcase that autopilot can run is simply a matter of learning syntax -- non-programmers are welcome here!

Read more
Nicholas Skaggs


Greetings from Copenhagen! I thought I would give a mid-UDS checkup for the quality team community. You may have already heard some of the exciting stuff that has been discussed at UDS. Automated testing is being pursued with full vigor, the release schedule has been changed, and cadence testing is in. In addition, ubuntu is getting into fighting shape by targeting the Nexus 7 as a reference platform for mobile.

I was honored to give a quick plenary where attendees got to see and hear about the various automated testing efforts going on. Does that mean the machines have replaced us? Hardly! The goal with bringing automated testing online is to help us be more proactive with how and why we test. We've done an amazing job of reacting to changes and bugs, but now as a community I would like us to focus on being proactive with our testing. The changes below are all going to help set us firmly in this direction. By proactively testing things, we eliminate bugs and repetitive or duplicated work for ourselves. This frees us to explore more focused, more interesting, and more in-depth testing. So without further ado, here's a quick rundown of the changes discussed here in Copenhagen -- hang on to your testing hats!

Release
The release schedule has dropped all alphas and the first beta, leaving only a single beta and then the final release milestone. In addition, the freezes have been moved back a few weeks. The end result is that the archive will not be frozen until late in the cycle, allowing development and testing to continue unencumbered. This of course is for ubuntu only. Which brings us to flavors!


Flavors
Flavors will now have complete control over their releases. They can choose to test, freeze, and re-spin according to their own schedule and timing. Some will adopt ubuntu's schedule, others may retain the old milestones or even do something completely different.


ISOs
ISOs will now be automatically 'smoke' tested before general release. No more completely broken installers on the published images! In addition, the ISOs will be published daily as usual, but will not have typical milestones as mentioned above. Preference will be given to the daily ISO -- the current one -- throughout the cycle. Testing will occur in a cadence instead of at milestones.

Cadence
Rather than milestones, a bi-weekly cadence of testing will occur with the goal of assuring good quality throughout the release cycle. The cadence weeks will be scheduled and feature testing different pieces of ubuntu in a more focused manner. This includes things like unity, the installer, and new features landing in ubuntu, but will also be the target of feedback from the state of ubuntu quality.

State of ubuntu Quality
A bold effort to generate a high level view of what needs testing and what is working well on a per image basis inside of ubuntu. This is an experimental idea whose implementation will garner feedback early in the cycle and will collect data and influence decisions for testing focus during the cycle. *fingers crossed*

AutoPilot
This tool will integrate xpresser to allow for a complete functional UI testing tool. One of the first focuses for testcases will be automating the installer from a UI perspective, freeing our manual testing resources from basic installer testing! From the community perspective, we can join in both the writing and executing of automated tests, as well as the development of the tool itself.

Hardware Testing Database
This continuing experiment will become more of a reality. The primary focus of the work this cycle will be to bring the tool, HEXR, online and to do basic integration with the qatracker for linking your hardware profiles. In addition, focused hardware testing using the profiles will be explored.

I hope this gives you a nice preview of what's coming. I would encourage you to have a look at the blueprints and pads for the sessions, and ask questions or volunteer to help in places you are interested. I am excited about the opportunities to continue bringing testing to the next level inside of ubuntu. I owe many thanks to the wonderful community that continues to grow around testing. Here's to a wonderful cycle.

Read more
Nicholas Skaggs

Readying for UDS

I trust everyone is readying themselves -- don't blink! Ubuntu UDS-R is already upon us. Those of you who have been watching closely may have heard about some of the planned sessions for QA, but if not, feel free to take a look. Don't worry, I'll wait.

But wait, there's more! In addition, there is going to be an evening event where testing is the focus. It's happening Tuesday evening. The goal is to learn about some of the testing efforts going on inside ubuntu, including automated testing; and more importantly, to write some testcases! Folks will be on hand to help talk you through and discuss writing both automated and manual test cases.

Looking through the sessions, I hope you have the sense that testing is continuing to play a large role in ubuntu. And further, that you can be even more involved! UI testing, automated testing, testcase writing -- all of these are focus points this cycle and have sessions. Get involved -- and if you're at UDS, please do come to a session or two, add your voice, and grab some work items :-) Let's make it happen for next cycle.

Read more
Nicholas Skaggs

Community Charity-a-thon: The Aftermath

I wanted to express my heartfelt thanks to everyone who contributed. To those who gave on behalf of the debian community, thank you as well! I stated that for every five donations I would do a manpage for a package that is missing one :-) I received just under 5 donations marked debian, but not to worry, I'll still create a manpage for one in need. Although I did other work during the marathon, I purposefully held off on creating the manpage until I was a bit more rested -- I have enough trouble speaking English sometimes without adding in sleep deprivation. The man page readers will thank me, and I'm sure those who get my page to review will as well.

To the rest of you, thank you very much. We raised $943.34 for WaterAid. That's amazing! I'm truly touched by your generosity. Here's the complete list of donors, hats off to all of you -- I know several of you donated anonymously, thank you!

Anonymous :-)
Cormac Wilcox
Gema

Anders Jonsson
Arthur Talpaert
Sam Hewitt
Alvaro

Ólavur Gaardlykke
Joey-Elijah Sneddon
steve burdine

Thomas Martin (tenach)
Daniel Marrable
sebsebseb Mageia
Jonas Grønås Drange
Gregor Herrmann
Mark Shuttleworth
phillw
Thijs K
Alvaro
Max Brustkern
Jane Silber
Gema Gomez-Solano
Martin Pitt
Michelle Hall


Now I know no one wants to re-watch that crazy 24 hours of video, but I wanted to bring you a few highlights as well. I spent time doing some of my normal work, but I also promised to do something outside the norm. I was able to scratch an itch, and although my on-air demo failed (an uh-duh moment), I was able to record this video immediately after, demonstrating where we in QA are focusing next cycle. In addition, there were several talks from QA personnel, and I recommend watching this clip if you're interested in hearing Rick's take on where ubuntu is going, and indeed how quality will play a role. You can skip to here if you only want to hear his take on quality. Now is a great time to be involved in QA -- I'm excited to see things unfold for 14.04, and I hope you are too.

For the readers who actually made it this far, I've saved the best for last. There were some gags in those 24 hours; for instance, check out my chicken dance! (*cough* this was supposed to be a group thing *cough*). Ohh, and there's always this lovely screencap. To be fair, this was about 20 hours or so in.


Read more
Nicholas Skaggs

Preparing for the Community Charity-a-thon

In preparation for the 24 hour marathon, I thought I would share with everyone my thoughts on why I chose my charity, and what I plan to do for 24 hours :-) To start off the post, let me get this right out front <insert flashing lights and whirlygigs> DONATE HERE </insert>

First, the charity: WaterAid. I chose WaterAid upon realizing how important water is to me. I love water -- I love to stare out across a vast sea, or to sail along it on the wind, hearing only the splashing of the waves and smelling the life beneath it. And of course, I consume water each day in order to sustain life. I am happy to support an organization whose goal it is to provide sustainable water sources to everyone. Water is important to life, and is a basic need for us all as humans. We need clean water, and even more so, we need sustainable access to it. Water is precious, and it's important for us not to pollute the water we have as well. WaterAid understands this need and works with locals to help create clean, renewable water sources. Consider donating to help those who don't have access to the same resource we take for granted -- available anytime out of our faucet.

I'm also placing a call out to those who are interested in QA in both the Debian and ubuntu communities to participate. I plan to spend my time during the 24 hours doing something to further the work of how you interact with ubuntu and QA. So, to that end, I'd like to ask those of you who are interested in ubuntu to donate and install ubuntu during the marathon. I'll be here to provide technical installation support for you during the install. Let's see how many exotic configurations we can see successful installations on. Got a RAID setup or some exotic hardware? Multiple disks, partitions, and OSes? Get your systems backed up now, and let's try an install. NOTE, I'd encourage using the latest daily ISO for installation, but you are welcome to also use beta2.

In addition, for those of you in the Debian community, I am issuing a challenge for donations. For every 5 donations from the Debian community, I will write a missing man page from the list. I'll be focusing on things I use, but feel free to offer a suggestion during the marathon!

I would also issue a challenge to the greater ubuntu community. Do you have a problem that you are unable to solve within ubuntu? While I can't promise a fix for your issue, I will offer you my personal attention to help solve your problem. I'll help you file a bug, confirm it, or help you debug the problem to the best of my ability. I'll even offer my karma on askubuntu to your question ;-) If I get overwhelmed with donations, I'll pull the highest donors first -- but we do have 24 hours to fill! Note, I plan to do this work on Thursday, on-air, but you can donate in advance. Just leave me a note, or simply send me an email with your request after you donate.

Finally, if none of the above suits you, I am happy to have a personal one-on-one match of highest-level house-of-cards building, or another quick-playing game of your choice on air. And don't worry, I'm ruthless competition, no pushover here!

Read more
Nicholas Skaggs

As you may have read about this week here or here, unity is getting some late additions to its feature set, with much of it landing in the form of new lenses. Here's what's landed in Unity 6.6:

  • Improved window and workspace management
    • When switching between multiple application windows, the windows scale
  • The window decoration properly follows the selected theme
  • All launcher icons can be reordered
  • New shopping lens
  • Video lens enhancements
  • Improved lens previews
  • Improved home dash
In support of this late-landing code, the Unity development team is asking for some extra testing on these specific features. To that end, I've updated the testcases for Unity, and posted the testcases specific to these changes for testing. Pay attention to the cases marked 'mandatory' and 'run-once'. These testcases contain the changes mentioned above, but it doesn't hurt to run through the optional cases if you have time. We don't like regressions either :-) By the time this post hits you, unity 6.6 should be landing in quantal, and you'll simply need to do a dist-upgrade -- instructions here. Here's a link to the testcases themselves:

Unity 6.6 Testing

Before I forget, here's a link to help you navigate the tracker, if you've never used it before: Guide on using the qatracker.

I'll keep the call for testing open through beta2 next week. If you participate in testing the beta2 images, consider also running through these testcases to make sure Unity is in good shape as well.

Thank you in advance for your help, and happy testing everyone!

Read more
Nicholas Skaggs

We've all had this experience: we upgrade to the new version of ubuntu (or perhaps install ubuntu for the first time on some new hardware), and everything works perfectly. We quickly dive in and enjoy the new experience of our favorite OS. Well, it's almost perfect. There's just this one bug that is bothering you.

Perhaps instead you are a tester. Someone who lives and breathes for breakage and the latest and greatest raw code, packaged from the hands of its developers. There's a new upgrade out and everything is perfect; well, except for that one bug.

So what do you do in these situations? Sadly, some people may choose to do nothing but hope and wait. "Next cycle, or next release, I'm hoping my bug will be fixed." Odds are, no one knows about your bug, and it can't be fixed if it's unknown. And if you are using proprietary software, "wait and see" is about the limit of your options. As a part of ubuntu, however, we as a community can do so much more!

With that, let me present my patented whizz-bang approach to a successful resolution of your bug within ubuntu!

  
First, let me clarify what a successful resolution is. If you have been around ubuntu, you may have seen a bug that expired as 'incomplete'. This is clearly not a successful resolution. In my eyes, a successful resolution to a bug sees the status changed to 'won't fix', 'fix committed', 'triaged', etc.

Ok, so here are the steps:
  1. If you don't know how to file a good bug, ask first! It's important to do your best to describe the problem you are experiencing, and if possible, how to repeat the bug. Check out the post on askubuntu, which has a nice summary of resources available to help you.
  2. File a good bug, using your newly formed knowledge from above :-)
  3. Get someone else to confirm it. This is important! If possible, have a friend confirm the bug on their system. Once they've confirmed it, have them mark the bug as affecting them as well on launchpad.
  4. Answer questions promptly when asked by others. Make sure you are getting emails from launchpad, and when someone asks a question on your bug, respond promptly.
  5. Get your bug triaged. If your bug is confirmed and filed correctly, the bug triagers should help triage the bug. If a long time has passed without this occurring, check to make sure your bug is in good order to be triaged. If so, asking politely for a triager to look at your bug on the #ubuntu-bugs channel is a good way to keep your bug moving.
  6. Help debug, test, and confirm any fixes that are made for your issue. If the developer spends time working on your bug, do what you can to help confirm it fixes your issue.
  7. Remember no one will care about your bug as much as you do! When you file a bug, commit to carrying it as far along in the process as you can.
That's it! There's no guarantee every bug you file now will receive your desired outcome, but you should see proper resolution, instead of your bugs expiring. By being a good participant you are ensuring you can feel good about the resolution of the bugs you file. Remember we are all human, and sometimes things get missed. Stick with your bug, and shepherd it through.

So is the process perfect? Not at all. We as a community still need to think more about improving our experience in dealing with problems. Not every "problem" encountered is a bug, and a process to better handle these problems is still worthy of thought. I invite those of you interested in this to look for a UDS session on the topic.

Special thanks to TheLordofTime and hggdh for their discussions surrounding bugs, and of course for our marvelous bugsquad without whom this would not be possible!

Read more
Nicholas Skaggs

Global Jam with QA love

Global Jam is a unique time for folks to get together across the world and celebrate ubuntu. As part of the celebrations, there is an opportunity to download the latest beta (released yesterday!) and check out the next version of ubuntu. You can run it from a livecd or perhaps in a virtual machine. Either way, there are opportunities for you to test the common applications and report your results.

The testcases are available here: Ubuntu Global Jam Testcases. For some of you, this page may seem a bit funny, but fortunately there is a handy walk-through for you to understand how to use the tracker and specifically, how to report results against these tests. Check out the guides below. If you follow the link, you'll even find some video of yours truly giving you a visual demonstration.

https://wiki.ubuntu.com/Testing/QATracker
https://wiki.ubuntu.com/Testing/CallforTesting/Walkthrough

If you get stuck, remember your friends in #ubuntu-testing on freenode are also happy to help. Have fun jamming, and if you do test, Happy Testing! Most of all, celebrate ubuntu!

Read more
Nicholas Skaggs

Ubuntu QA goes social

Since I posted the survey results, some of the members of the qa community noticed the team was unknown by others in the greater ubuntu community and beyond. After posting the article and ending my day, no sooner do I awake than my inbox already contains links to several social media locations where groups have been set up. I was floored and impressed by everyone's excitement to let others share in the fun we have here in QA (great work everyone!).

So, what does that mean for everyone else? We've gone social! Check out the new groups, and if you're already a member of the social community in question, I would encourage you to join up if you're interested in qa. And for anyone who has been involved in the opportunities we undertake in qa, I would encourage you to write about them. Share them with your friends... in a word, communicate :-) You don't need to find a new medium; take qa and ubuntu to where you already interact!

https://wiki.ubuntu.com/QATeam/Contact

Read more
Nicholas Skaggs

It's the testing event I've been waiting for! A new version of unity has arrived, chock full of compiz fixes. The team has removed metacity completely, and also migrated to gsettings from gconf. The result is the removal of unity2d, and the enablement of llvmpipe on unity for those running without hardware acceleration.

Now, of course this work, in particular the settings migration, needs to be tested. This is where all of you come in! The package tracker now contains an entry for unity, complete with testcases for this migration. For those who helped test last cycle, you will also notice all of the checkbox testcases have been ported over as well! Many thanks to the unity developers for their help in migrating these tests. For this call for testing, the 'Unity GSetting Migration' testcases have been marked as mandatory, meaning that testcase is the primary focus. However, if you are able, executing the other testcases also helps the unity team ensure there haven't been any regressions.

Please note the 'Unity GSetting Migration' has steps for you to complete BEFORE you install from the ppa. Please read it first before diving in. Here's a link to the testing on the tracker. And if you are new, check out our wiki for a guide on using the qatracker to help test.

Now, as an added bonus, the unity developers have also tagged several multi-monitor bugs and have asked users to go through the list and confirm any bugs that they can. Read the bug report, and if you have a multi-monitor unity setup, see if it's still affecting you. Leave a comment on the bug with your result. The unity team wants to make sure the bugs can be triaged and get proper attention if they are still valid.

List of unity bugs, tagged multi-monitor

Thank you in advance for your help, and happy testing everyone!

Read more
Nicholas Skaggs

The grand "Cadence" experiment

It all started innocently enough. A simple idea, turned into a simple post on the mailing list. This idea eventually led the ubuntu QA community to perform an experiment for this cycle, which has been dubbed "cadence testing".

Now, before I manage to confuse everyone with this "cadence testing" term, let's define cadence testing. Scratch that, let's just give a simple definition of what was intended by the original idea. If you want the whole story, read the thread. Heh, I'll be waiting (hint: it's LONG!).

Cadence testing was intended to introduce regularity into testing. If the development release could be "stable" every day (which was the grand experiment during the precise cycle), could we not also test to ensure that things were good all throughout the release? If the everyday images and archive were now at the quality of previous releases' milestones, could we just eliminate the milestone idea and go with a calendar schedule for testing? Thus, a proposal was made to test every 2 weeks, whether or not a milestone had been planned, and report the results.

Fast forward 2 months to today. So what happened? Well, I'm happy to report that the QA community, despite the confusion, more or less met the goal of testing the desktop images every 2 weeks (milestone or not). But what did this achieve? And where are the results?

Let's step back a moment and talk about what we learned by doing this. My comments are specific to the non-milestone cadence testing weeks. First, the development process inside ubuntu is still built around milestones. The daily images during cadence testing weeks were sometimes stable, and sometimes flat out broken by a new change landing from a development team. Second, the tools we used are built around milestone testing as well. Neither the qatracker nor any available qa dashboard or report has a good way to track an image's health across the cadence week. This meant it was both difficult to test and difficult to see the results of the testing. Finally, the development teams were not expecting results against the daily images, and couldn't follow up well on any bugs we reported, nor were we able to coordinate well with the release team, as the bugs reported were not available in a summarized or meaningful way.

Now, I'll save the discussion of my ideas on a healthy QA workflow for a later post, but I think we can agree that testing without good result reporting and without developer follow-up has limited impact. So does this mean "cadence testing" was a bad idea? No, it was simply poorly executed. The trouble comes in the assumptions listed above.

The archive has not been "stable" every day, and development teams have continued development, pausing only as required by the current milestones. In addition, changes, even major ones (like the ubiquity changes landing a few weeks ago, or the nvidia change just this past weekend), are not well communicated. Since they land with little or no warning, we as a QA community are left to react to them, instead of planning for them. In this environment, cadence testing makes little sense.

So was the experiment a failure then? In my mind, not at all! In fact, I think the future of ubuntu and QA is to push for complete adoption of this idea, and this experiment confirms the obstacles we will face in getting there. I'll be posting more about what this vision for QA looks like, but I'll leave you with a few thoughts until then.

In my mind, QA should enhance and improve the lives and workflows of developers, testers, and users. Our work is critical to the success of ubuntu. I would like to see a future where users receive regular, timely, scheduled updates that the folks in the QA community have vetted, working with the development and release teams to deliver focused, quality updates. The ideal workflow is more fluid, more agile, and yes, it has a cadence.

Read more
Nicholas Skaggs

Quality Perceptions Survey Results

A couple of Fridays ago I asked for feedback on how quality and the ubuntu QA team were perceived this cycle. That survey has now been completed and I have some results to share with everyone. Before I dive into the numbers, let me take a moment to say thank you to all of you who responded. Thank you! I read all of the comments left as well, and all were helpful feedback. Remember the survey was anonymous, so I cannot respond individually to anything written. Feel free to contact me if you wish to discuss anything further or to receive a response.

The second question on the survey asked rather simply, "What does quality mean to you?".


As it turns out, the largest answers mirrored those of a later question, in which I asked "What's the biggest problem with quality in ubuntu right now?".
Note, I read all of the "other" responses and categorized them into some new categories to display here for general consumption.
So there is some agreement amongst those who were polled, both about what quality means and about where ubuntu's biggest problems lie. The respondents indicated that the largest issue with quality in ubuntu was the very definition of what "quality" is!

Now I asked this question for a specific reason. "Quality" is a subjective term! Perhaps I'll get some disagreement on this, but hear me out. All of the answers for the question are, in my mind, valid with respect to quality. As an example, let's say I asked you to recommend software to balance my checkbook. If I specified I wanted a quality piece of software, would you not recommend a stable (works without crashing), mature (good development/bug workflow), and easy to use (just works) piece of software that has a nice feature set (latest and greatest)? It's easy to see that "quality" can refer to all of this and more.

Still, in my mind, when I speak of wanting a "quality" release of ubuntu, I tend to focus on the stability and ease-of-use aspects. As the graphs indicate, the respondents seemed to echo this idea. In other words, it's really important to the idea of quality that things "just work". In the context of ubuntu this means applications run without crashing, and the operating system runs on your hardware. If things don't "just work", even if all the other indications of quality are true, you aren't likely to describe or perceive the product as having "good quality".

Let's ponder that thought for a moment and look at some more results. The survey captured about a 50/50 split of folks who run the development release, and over 70% run it or intend to run it before the final release.
So among those 50-70% who run or will run the development release, how many have participated in ubuntu qa?
Yikes! Only about a third. Just under half had no idea a ubuntu QA team existed. There's some clear evangelizing work to be done here. Let me pause here just for a moment to say the team does exist, and would love to have you!

Ok, now on to the last multiple-choice question.
I'm happy to see people desire to help! That's wonderful. The responses regarding time, being technically able, or where to start are all very solvable. I would encourage you to watch this space for invitations to help test. QA work comes in all shapes and sizes; sometimes it's as little as 15 minutes and the ability to install/uninstall a package and reboot a machine. If this sounds like something you would be able to do, please start by having a look at our wiki page. Send us an email and introduce yourself. There are no requirements or forced participation, and we welcome everyone. And who knows, you might even learn something about ubuntu :-)

Ok, so I've shared the hard numbers from the survey, but I'd like to leave you with a few takeaway thoughts. First, while quality is subjective, our focus in ubuntu QA should be to have things "just work". That doesn't mean we shouldn't also help improve our development and bug processes, or continue to push for new applications and features, but rather that we ensure that our efforts help forward this cause.

I've said it before, but I want to help deliver good computing experiences. That story I shared when I introduced myself was close to home. My first interaction with the ubuntu community came via the forums, and yes, getting a printer to work. The community undertook work to change what was once a nightmare, not for the faint of heart, into child's play. The execution of this work is what defines the experience. This is where QA fits. We aren't just testing; we're delivering the result of the entirety of the ubuntu community's labor.

Judging from the survey results, many of you share this same vision. So won't you join us? QA cuts across teams and the whole ubuntu community. I would encourage you to get involved and be a part of making it happen. The list of "problems with quality" reaches many areas. Would you be part of the solution?

Read more
Nicholas Skaggs

Quality mid-cycle checkup

About 2.5 months ago I wrote about the plans for the ubuntu QA community for the quantal cycle. We were building off of lots of buzz from the precise release and we planned to undertake lots of new work, while being very careful to avoid burnout. Our focus was to take QA to the next level and help us communicate and grow as a team to take on the opportunities we have.

So, how are we doing? Let's go over each of the points noted in the original post and talk about the progress and plans.

ISO Testing
Our alpha1 testing went very well, but alpha2 and alpha3 saw less participation. In addition, we were asked to test our ISOs every 2 weeks as part of more cadenced testing, and we responded with a plan to do so. Overall, ISO testing continues to be a weak spot for us as a community. It is perhaps the most important piece of testing for us as a greater ubuntu community: the image we produce is literally the first experience many folks have with ubuntu. If it fails to install, well, there went our chance for a positive first impression :-( I would be happy to hear ideas or comments on ISO testing in particular.

Application Testing
This work has been mostly completed. The package tracker now allows us to perform work that was done via checkbox or manual testing last cycle. We can now manage results, tests and reporting all in one tool -- and it's all publicly available. For more information about the qatracker, see this wiki page.

SRU Verification
This work is still on paper, awaiting the 12.04.1 release before further discussions and work begin.

General Testing (e.g., day-to-day running of the development version)
I am still experimenting with understanding how to enable better reporting and more focused testing on this. The current plan is to track specific packages that are critical to the desktop, and allow those running the development version to report how the application is working for each specific upload during the development release. This is done with the qatracker. I'll blog more about this and the results in a later post. As always, contact me if you're interested in helping.

Calls for Testing
This has been a wonderful success. There have been several calls for testing and the response has been wonderful. A big thank you to all of you who have helped test. We've had over 50 people involved in testing, and 41 bugs reported. Myself and the development teams thank you! But we're not done yet; unity testing, among other things, is still coming!

QATracker development
There is still room for more developers on the qatracker project. It's written in drupal, and I would be happy to help you get started. As we grow, there will continue to be a need for people who want to build awesome tools to help us as a community test. If you have ideas for a tool (or know of a tool) that would help us test, please feel free to share it with me.

Hardware Database
Work has been completed to spec out the design, which is now scheduled to land this cycle rather than a future one. Fingers crossed we'll sneak this in before we release quantal :-) I'm very excited to share this new tool with you; as soon as it's complete we'll be able to incorporate it into our workflow on the qatracker.

Testcases
Done, and for the most part our testcases have been migrated over. In addition, there is now a special team of folks who help to manage and maintain our testcases. If you have a passion for this work, contact me and I can help get you involved with the team.

Overall, I am happy to see signs of growth and newcomers to the community. If you're on the fence about getting more involved with ubuntu, I would encourage you to check out QA. We collaborate with almost every area of ubuntu in some way, and no two days are the same :-) Send an email to the ubuntu-qa mailing list and introduce yourself.

So what's your opinion? Feel free to respond here with your thoughts and/or fill out the quality survey to give feedback.

Read more
Nicholas Skaggs

Quality Perceptions Survey

What's your perception of quality this cycle? Are things working well for you? It's been several months now since precise landed, and ubuntu development for the next version has been ongoing. The ubuntu QA team has had a busy summer putting into place the new tools we spoke about at UDS. The qatracker has been revamped to allow us to consolidate our testcases and test reporting across all of our activities. In addition, we've been helping in the release of 3 alpha milestones, and 3 testing campaigns. To all those who have helped in this testing, a very big thank you!

I have my own thoughts about the impact to the ubuntu project this testing has had, and I will continue to share my thoughts to point out the progress we make in this regard. But now, I want your input. I have created a survey to understand the community perspective on how we as a ubuntu project are doing on quality. If you have a few moments, please fill out the survey and let your thoughts and perspective be known. The survey will be anonymous, but I will share an aggregation and summary of the results.

My hope is to help gain an understanding of how we can focus our efforts on what's important to ubuntu as a project in terms of quality, as well as how we can help you (yes, you!) become a more active part of QA if you're interested.

Here's a link to the survey. I'll leave it open until next Friday, August 10th. Thanks in advance for your participation.

Read more