Canonical Voices

Posts tagged with 'ubuntu'

Nicholas Skaggs

QATracker Survey + bonus mockup

Hot on the heels of our first cadence week, I wanted to take the opportunity to collect feedback about the tools we as a community utilize, specifically the QATracker, which we heavily rely on for managing our work, testcases and results. From the wiki, "The QATracker is the master repository for all of our testing within ubuntu QA. It holds our testcases, records our results, and helps coordinate our testing events."

This is a link to a brief survey asking a few simple questions about how you've used the tool. All your responses are anonymous, but I will publish the aggregate question information and share it with the community once completed. The goal is to help ensure the tool is meeting our needs and is being utilized.

I'll leave the survey up until June 24th. My hope is to encourage more folks to help test as well as make it more enjoyable for those already taking part. I want to ensure our tools and processes continue to evolve, strengthen and become more robust for everyone as we continue on our mission. Part of that is making sure the tools we use are enjoyable!

Thanks in advance everyone!

As a bonus, Pasi, aka knome, has put together some mockups of how we might change what the results page looks like. This is perhaps the most utilized page of the site, so without further ado, here's a mockup of some changes proposed to make it more usable:

Old Site
New Site Mockup


What a change, eh? The 'add test results' section has been moved to the sidebar and simplified, the bugs listing has been written out, and the results have been moved to the top. Finally, the links have also been moved to the sidebar and Pasi has updated the icons ;-)

So, what does everyone think about the changes? Many thanks to Pasi for putting this together! Leave a comment, a message on the mailing list, or reflect your thoughts in the survey.

Read more
Nicholas Skaggs

Join the ubuntu quality community team's effort this week! As a community we test different things in ubuntu every ~2 weeks, and share the results to flush out bugs and problem areas.

So what's up for testing this week? The daily images, the default applications in ubuntu and a new version of the sound stack for testing.

Ready to help? Full details are here.

Need some help on how to contribute? Have a look at this page and the walkthroughs listed. Of particular interest is the ISO testing and Cadence Week testing walkthroughs.

Do note that you don't need anything special to participate in cadence week testing! An installed version of the development branch of ubuntu (aka saucy), either in a VM or on a real box, will work, as will a live session of the latest daily image. For more information on how to use a live session to test, check out the Cadence Week testing walkthrough or watch the youtube video of the same.
Happy Testing!

Read more
Nicholas Skaggs

A few months ago the ubuntu touch core apps project was launched. Those of you following along with Michael's regular updates have gotten to see these applications grow up rather quickly.

Autopilot Says: How can I help?
Now it's time to add some more testing around these applications as they have reached a basic functional level of usability. Automated testing via autopilot to the rescue!

To help kickstart this process we've put together a recipe for writing autopilot tests specific to QML applications and added it to developer.ubuntu.com. In addition, we'll be hosting a hackfest next week on June 13th to help add basic autopilot testcases for each of the core apps. Folks will be on-hand ready to field your questions and hack together on the autopilot testcases needed for the applications. Join us and help support the wonderful community of application developers making awesome applications for ubuntu!
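To give you a flavor of what the recipe walks through, here's a minimal sketch of an autopilot test for a QML application. The .qml path and objectName below are made up for illustration, and the import paths and launch approach may differ slightly depending on your autopilot version, so treat the recipe itself as the authoritative reference.

from autopilot.testcase import AutopilotTestCase
from autopilot.introspection.qt import QtIntrospectionTestMixin
from autopilot.matchers import Eventually
from testtools.matchers import Equals

class CoreAppTests(AutopilotTestCase, QtIntrospectionTestMixin):

    def setUp(self):
        super(CoreAppTests, self).setUp()
        # Launch the app through qmlscene so autopilot can introspect it.
        # The .qml path here is hypothetical; point it at your core app.
        self.app = self.launch_test_application("qmlscene", "MyApp.qml")

    def test_main_view_is_visible(self):
        # Fetch a QML element by its objectName and assert on a property.
        # Eventually() retries the assertion, avoiding startup races.
        main_view = self.app.select_single(objectName="mainView")
        self.assertThat(main_view.visible, Eventually(Equals(True)))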

So how can you help? 
  1. First, go read through the recipe on writing autopilot tests for QML applications. It's also a good idea to have a look through the official tutorial for autopilot and bookmark the API reference link so it's handy.
  2. Armed with your new knowledge, start hacking on some autopilot tests for the core apps. Here's a list of core applications along with the status of autopilot tests. Choose something that looks interesting to you and add some tests.
  3. Follow the contributing guide to help you get your work contributed into the ubuntu touch core application project you chose.
  4. Finally come out to the hackfest! It's your chance to share your work, ask questions, get your tests sorted and merged and socialize and meet other members of the community.
  5. Don't forget there is a wonderful quality community you can be a part of and get help from if you get stuck! There's a mailing list for ubuntu-touch, and ubuntu-quality as well as IRC channels #ubuntu-touch, #ubuntu-autopilot and #ubuntu-quality. Use these resources to help you!
See you next week and happy testing!

Read more
Nicholas Skaggs

Consider this text your giant disclaimer. Just a reminder these images are not intended for end-users; please don't go flashing your device thinking you'll have a replacement for android. These images are intended for developers, enthusiasts and testers who want to help. If this describes you, please read on!

I'm happy to announce the ubuntu touch images are now available for testing on the isotracker. And further, the images are now raring based! As such, the ubuntu touch team is asking for folks to try out the new images on their devices and ensure there are no regressions or other issues.




There are 4 product listings representing each of the officially supported devices; grouper (nexus 7), maguro (galaxy nexus), mako (nexus 4), and manta (nexus 10). You can help by installing the new images following the installation instructions, and then reporting your results on the isotracker. If your device has never run a developer preview image for ubuntu touch, you might need to read and follow the steps on the touch wiki first.


There are handy links for download and bug information at the top of the testcases to help you out. If you do find a bug, please use the instructions to report it and add it to your result. Never used the tracker before? Take a look at this handy guide or watch the youtube version.

Once all the kinks and potential issues are worked out (your feedback requested!) the raring based images will become the default, and moving forward, the team will continue to provide daily images and participate in testing milestones as part of the 's' cycle.

As always please contact me if you run into issues, or have a question.
Thank you in advance for your help, and happy testing everyone!

Read more
Nicholas Skaggs

Filling the Gaps

I wanted to post briefly about the work that has been going on at the end of the cycle in the ubuntu quality team. Yes, we're testing the final images! Yes, it's been a wild ride that is nearing the finish! Yes, you can help contribute results! (And as we'll see below, you can help write tools too!)

But more than all of that, several team members have stepped out of their comfort zones and gone to work on one of the testing tools we as a team utilize. The tool is called "Testdrive" and is written in python. Now, one of the great things I love to espouse about QA is the opportunity to work on many different things. There are tasks to fit all interests, and, if you are willing, the chance to learn.

In this instance, there is an opportunity to learn a little python and to work with a new team to help keep a testing tool alive. I'm happy to see that the same tool that was rendered broken in January by updates is now alive and well, with brand new contributors, fresh patches and even a release! Many thanks to smartboyhw, noskcaj, SergioMeneses, phillw, and the others who have reached out to ensure the tool that ships in raring still works. Thanks as well to the testdrive development team for engaging with us, reviewing merge proposals, and helping to ensure testdrive still works.

I look forward to a bright future of new and improved testing tools. Specifically to those who contributed patches: with your new coding abilities, I can't wait to see what will happen next cycle! *wink, wink*

Read more
Nicholas Skaggs

The quality team invites you to a testing event for the final beta iso images. We'll be providing real-time help (IRC, or even one-on-one video hangouts if needed), encouraging you to download the final beta images, install, upgrade and test them out with us. You only need yourself, a machine (virtual or real!) and a bit of willingness to learn. We'll even be broadcasting for part of the event on ubuntuonair. So here are the details you need to know:

Tuesday April 2nd, 2013

  • 1800 UTC - 2200 UTC 
    • Quality team members are dedicated to hanging out in #ubuntu-quality executing testcases and helping answer questions
  •  2000 UTC: 
    • We'll be streaming live on ubuntuonair doing live testing demos and offering help
      • Basic iso test install
      • More exotic examples -- netboot, server, non-english
      • Your requests!


Interested? Great, mark the time and date on your calendar and check out the tutorial here to get a leg up on what you'll be doing during the event.


Can't make the four-hour window? Don't worry! Give testing a whirl anyways, and feel free to ask on #ubuntu-quality or our mailing list for help.

See you on Tuesday!





Read more
Nicholas Skaggs

As discussed and planned, Smart Scopes have landed! Unity 7 too is landing, with many more features around getting 100 scopes installed, privacy, and dash improvements. For details on what Unity 7 is bringing, check out this post.

In support of the Unity changes, the Unity development team is asking for some extra testing on these specific features. So, we've updated and added a new testcase to our unity suite for these smart scopes. Pay attention to the cases marked mandatory and optional. The testcases relating to the smart scopes have all been marked as mandatory, and are the essential tests to run. That said, it doesn't hurt to run through the optional cases if you have time. We don't like regressions either :-)

So, here's what you need to know!

Never done a call for testing before? Read/Watch this first!; Call for testing walkthrough

Install the new unity from a ppa; Installation Instructions
 
Load the testcases and select one; Unity 7 Testing

Read the testcase, perform the actions listed and record your results.

If you run into any issues, please file a bug

Finally, please note the changelogs and build status found on the tracker, as well as any known bugs while testing. New builds will continue to trickle in over the next few days with new changes coming in. I'd encourage you to test and then re-test later in the week to follow-up on bugs you find, or test the new things that land.

As always please contact me if you run into issues, or have a question.
Thank you in advance for your help, and happy testing everyone!

Read more
Nicholas Skaggs

I wanted to write a post about the excitement of the new platform and the wonderful new challenges we face ahead of us. Now, given that this platform is being written right now from the ground up, those with a nose for quality instantly perk up. We love well tested applications, and developing with tests in mind from the start is much easier than attempting to retrofit. Seeing the first fruits of the developer effort is very exciting -- good work everyone!

So with that in mind, I started looking at some of the excellent work the core apps teams are doing with their applications. They've been working with the design community to turn the nice mockups into reality. I took the liberty of checking out and running some of the first versions of these applications. The calculator is one that stood out to me as already closing in on its specification. So armed with some of the design conversation for the calculator, I started a branch to create a set of manual tests for ubuntu touch applications, starting with the calculator. If you are interested in quality, now is the time to be involved! The applications can all be installed and run on your phone or even an ubuntu desktop.

So what can you do?

If you're a tester;

If you're a developer and have questions on writing tests for your application, feel free to contact me. I would love to see not only nice unit test driven code, but also some end user tests via autopilot, and I want to make sure you as a developer have the resources to do so. In addition, we as a quality community are happy to help test with you and write some manual tests to do so for your application.

I'm helping!


Read more
Nicholas Skaggs

After being away and enjoying some lovely downtime, I've returned to the online world to be met with the rush of a virtual UDS, a rolling release announcement, and a new windowing stack announcement. With the discussion advancing and the UDS sessions completed, it's time to weigh in and speak my thoughts as well.

I'd like to stare down the scarecrow -- that is, let us examine the straw man argument of a rolling release.



On a rolling release
I am definitely in favor of streamlining what we ship and support. The inter-LTS releases in general only make sense to run until the next one arrives. From a quality perspective, I really like what we've done with precise. I think it's an excellent solid base and the point releases we've done keep it relevant and offer a really nice way to get the latest stuff and keep a stable and long-term supported system.

As for a rolling release with LTS's sprinkled in, I have run a rolling release distro in the past (alongside ubuntu). I definitely enjoyed having access to the latest stuff, and having everyone on the same archive all the time (community-wise) kept us tighter and more able to relate and help each other. Overall, I think the pros outweigh the cons on moving towards it, but I have several caveats with the current approach.
  • Monthly snapshots
    • As Colin Watson put it, if we're presumably releasing and testing a monthly snapshot, we failed in a rolling release sense -- we don't have daily quality.
    • In general, I don't see a target audience for a monthly snapshot. Why can't we create an installer image (do a full testing milestone on it), then call it gold for a long period (until there are new installer changes we want to bring in, which would require making a new image)? In other words, I would like to see us only generating new images for an actual release (in this case only LTS's), or for a new installer. (Note that I mean supported images (in any sense), not just an image for testing)
  • Quasi-rolling mentality
    • It seems like we want to support the idea of users running a snapshot of our archive on a certain date, and then only update at certain days and times
      • This is insane fragmentation and defeats the purpose of being a rolling release. People should strive to run up-to-date systems, and always be current. For us, we need to ensure the archive is always upgradeable so they can do so
  • LTS point-releases
    • Continue and enhance point-releases for LTS to keep regular flows of new, well supported and stable software
      • This was discussed and well noted in discussions and at UDS. As I mentioned, I really like how precise is going, and we can continue and bolster these efforts even more in a rolling world.

On flavors
I will prefix everything I say here with the fact I have never put together and supported a flavor, but I most certainly have enjoyed utilizing them, and working with members of the community who focus their efforts on them.

I was able to catch the end of the UDS session on flavors and had an excellent discussion with some folks from xubuntu and kubuntu. Thanks to those folks who helped provide some flavor feedback on the proposal.

I would like to challenge the flavors to engage in healthy discussions about how their release process works, how to serve the needs of their users, and how to make the best use of their time and resources. I'm sure this type of introspection happens in each flavor on a regular basis, but I'd like to call special attention to how releases work.

Last cycle, ubuntu adopted a cadence for driving quality into the daily images. This work has been on-going since precise really, and Rick's ideas for a rolling model continue this line of thinking. If you ask the community folks who helped be a part of these cadences, they can tell you it was a challenging change, but we're really hitting our stride now. The constant iterations on how we test and what we do have, I think, been extremely positive towards helping quality make a bigger impact.

With that in mind, our release processes shouldn't be exempted from this. QA (and development :-) ) efforts are seen as linked to the current release process, resulting in chaos if you are radical enough to unlink them. So what options (in my opinion, of course!) exist for a flavor in this new world?
  • LTS only
    • Some flavors have already gone to an LTS only model, and I think it's been extremely helpful for those who've done so, in terms of what they can focus on without worrying about supporting lots of releases.
  • Rolling only
    • You can choose to go full force into a rolling release, and eschew LTS's altogether
  • 2 year "normal" releases
    • You could choose to simply push a new image out every 2 years (like an LTS), but without long term support. Instead, consider supporting until the next (2-years or so) release.
  • Keep things as-is
    • As kubuntu and others have shown this cycle by not adopting a cadence for testing, you can keep the traditional model in place. The buildbots are still there, the testing tools still exist, and the knowledge and experience in releasing on a 6 month cadence is there. Remember, ubuntu itself has synced from upstream debian every 6 months; a flavor could choose to do the same with ubuntu.
Now of all these options, at this point I would personally recommend adopting the LTS only model. Work with and sync your development to your upstream project and land your work in the rolling release. Release an LTS as normal and deliver timely point release updates to the LTS. There is nothing stopping you from even delivering these point releases every 6 months (or a different timetable!), emulating the current process but with a stable ubuntu LTS base and a simplified upgrade process.

On some alternative ideas

6-8? month-stable releases between LTS

Not a bad idea for retaining the flavor of the current system. Indeed, if you really like the idea of monthly snapshots and updating, this is probably the better way to do it. However I don't think it solves any issues for an OEM or for flavors. Namely, the release support timeline is too short for an OEM to base an image on, while for flavors, it would force an even faster cadence and churn upon users. I also don't see a target audience for it. Who would run this, but not run a rolling? Folks who want stability couldn't adopt such a small supported time-frame, and I feel like our efforts to test and release would be wasted as we throw it away as soon as the next stable is out.

Don't change anything!
This idea is just a knee-jerk reaction to change. Unless you feel like our last release was the pinnacle of perfection (I don't), we should be evaluating how we do things, iterating, and trying to do them better.

On quality in a rolling release
I want to talk specifically about quality as that's what is dearest to me. How do you ensure quality in a rolling release world?
 
First, I would like to challenge you on what you mean by quality. Is older software better quality than newer software? Age != quality, even though we often traverse down that slippery slope. I wrote about this before, but simply put, how we define quality is subjective. For the sake of comparison here, let's talk about quality as having a desktop that just works. That is, your hardware works and the applications and software running on it enable you to accomplish your tasks without issue.

So, with that in mind, how does that work in a rolling release? If you've run the development versions of ubuntu in the past, there have been times where a bad package may have rendered your system unbootable. For any user trying to run this as their daily system, it's obvious that level of 'quality' doesn't work. But at the same time, I've found bugs running the development version of ubuntu that cause actual crashes (see this for example), yet have little impact (if any) on my system working properly to enable me to accomplish my tasks. So how can we define quality metrics (and we should!) for each release? Here's a quick summary of my expectations in extremely simple terms:
  • LTS
    • No issue that hinders or prevents utilizing ubuntu
  • Rolling
    • No issue that would cause a crash for expected usecases and workflows
The key difference to me is usability. Being forced to use a workaround for a crash in a minor application or task in a rolling release is probably ok. Note that I say probably, because, well, we haven't defined these metrics yet as a community. Being forced to do so in an LTS is not an acceptable level of quality. And of course, causing a system to not boot is never acceptable.

On the reaction and the future
I'm excited to see these discussions taking place, and I would encourage people to think critically and take part in these discussions. There are definitely some wonderful ideas and conversation taking place.

Just remember we're a team and all part of ubuntu. Healthy debate is a very important part of continuing to better ourselves as a community and project. 


Read more
Nicholas Skaggs

Ubuntu Global Jam is just a few short weeks away. I trust you've seen the posts announcing and asking you to plan your events. Maybe you are confused about what type of session to plan or how the event could go. I will echo my friend Daniel Holbach in saying just do it! Grab a buddy (even an online one!) and plan to jam together. If you're confused about what to jam with, check out the testing page.

It's got everything you need to run a session, and the documentation has all been done for you. Folks can choose what they are interested in testing (packages, images, or hardware), or even do some hacking on testcases. No need to be a programmer; manual tests can be written by anyone! Participants don't need anything besides their laptop and an image of ubuntu on a cd or usb stick (assuming of course they aren't already running ubuntu raring :-) ).

If you're curious about hosting a testing event, check out the testing page on the global jam site. Feel free to get in touch with me as well if you wish to share your stories or ask questions. Let's jam, quality style!

Read more
Nicholas Skaggs

A thank you to some quality rockstars

The quality team has completed a series of classroom sessions held over the last two months. None of these would have been possible without the help from these wonderful instructors:

phillw, gema, noskcaj, smartboyhw, primes2h, letozaf, sergiomeneses

Thank you!

Thank you as well to pleia2, JoseeAntonioR and the other classroom team members who helped us schedule and run the sessions.


You all rock!

Read more
Nicholas Skaggs

Some quality resources

A couple posts ago, I mentioned the ubuntu quality team was looking for people to join the team and help out in the testing efforts we undertake. Thanks to those of you who've already answered the call and are now joining our testing ranks. We love sharing the joys of testing with others!

We're serious about wanting to make sure you are able to contribute and join the community as easily as possible. So for the last couple months, as a team we've been writing tutorials, giving classroom sessions, and hosting testing events. We really do want you as part of the team. Check out some of the resources available to you and consider becoming a part of the team!

Classroom Sessions
Video Tutorials
Written Walkthroughs


Read more
Nicholas Skaggs

Starting tomorrow February 9th, 2013 (heh, some of you reading this might already be in tomorrow), the quality community team will start testing for cadence week #6. During this week, we as a team try and help test specific packages looking for regressions, doing new feature or hardware testing, and making sure our images are in good shape. If you're still confused, there's a nice wiki page that lays out what "cadence" means in a bit more detail.

So what does this mean for you, dear reader? Well, we as a team would like you to be involved in helping us test! Everyone has unique ways of interacting with software, and naturally no two computer setups are exactly the same between us. Now, I know what you're thinking -- how can I help? I'm no tester, and I don't run development versions of ubuntu!

That's ok! You can still help test without needing to compromise your system. If you don't want to install the development version on your machine, you can use a virtual machine installation instead. If you are unable to run virtual machines, or are confused at the idea, you can still help test by simply running a live session and executing tests there. It's not too hard for you! Check out this walk-through for participating using only an image of the development version of ubuntu and your computer.

To help demonstrate how you can participate, I'll be hosting two live events this next week where I'll be on-hand running through the cadence week tests along with others from the quality team. There will even be a livestream, so if you're a visual person (like me!), you can see for yourself how you might be able to contribute. Here are the dates you need to know:

Monday Feb 11th, 1800-1900 UTC in #ubuntu-quality. I'll also be streaming my participation in executing the tests live.

Thursday Feb 14th, 1400-1500 UTC in #ubuntu-quality. No stream, but we'll be hanging out answering questions, and working on submitting test results.


Please consider attending a session or watching the video of the stream afterwards. If you can download an image and boot your computer, you can help test. You want to be a part of ubuntu; let us help you contribute!

Read more
Nicholas Skaggs

PSA: Ubuntu Quality wants you!


NOTICE: To whom it may concern, the ubuntu quality team is seeking those with a desire to help ubuntu to contribute to the quality and testing efforts. With a little time and a willingness to learn, you too can unlock the tester within you!

Interested? Please inquire below!

If that text didn't get you, I hope the picture did. Seriously though, if you are here reading this page, I want to offer you an opportunity to help out. We as a team have expanded our activities and projects this cycle and we want to extend an offer for you to come along and learn with us. We're exploring automated testing with autopilot and autopkg, manual testing of images, and the virtues of testing in a regular cadence.

But we can't do it alone, nor do we wish to! We'd love to hear from you. Please have a look at our getting involved page (but do excuse the theme dust!) and get in touch. I offered a challenge to this community in the past, and I was blown away by the emails that flooded my inbox. Send me an email, tell me your interests, and ask me how you can help. Let me help get you started. Flood my inbox again [1]. Let's make ubuntu better, together!

[1] If anyone is counting, I believe the record is ~100 emails in one 24 hour period :-p

Read more
Nicholas Skaggs

Introspecting with Autopilot

If you remember our post from last time, we had gone through writing our first testcase for firefox, converting a simple manual testcase to utilize autopilot instead. In this post I'd like to talk about how introspection can be used to perform some more complicated automated testcases.

First of all, let's briefly define what we mean by introspection. Specifically, we're talking about introspecting the dbus session for an application on our screen. Trust me, it sounds worse than it is. I'll let you do your own googling if you are curious to learn more. For the rest of us, let's just have a look visually at what we're talking about :-)

If you've got autopilot installed (check out the previous post; install autopilot ppa, sudo apt-get install python-autopilot), you should be able to launch the visualization tool.

autopilot vis

A window should launch, and allow you to select a connection.  This allows you to select which application you wish to introspect. Go ahead and select 'Unity'. If the bareness of the tool scares you, remember it's a development aid, not your browser ;-)

Ok, under the Tree node, you should find a giant list of nodes and properties for Unity. It may come as a surprise that your desktop is providing this much data about what's going on right now. For instance, have a look under PanelController->Indicators. There are entries for each indicator you are running, along with properties about each one. Ok, so what if you're not running Unity? Or, for our purposes, you wish to write a test about an application on our desktop?

Never fear, we can use another feature of autopilot to help launch and introspect an application using the same tool. Go ahead and close the visualization window and enter the following.

autopilot launch gedit
autopilot vis

Notice now we have a new connection called 'Root'. Select it and you'll see the node tree for the gedit window you just launched. The amount of nodes spawned is a bit overwhelming, but you can now use this data to make assertions about what's going on when you interact with the application.

As a quick example, let's say I wanted to know the size of the current gedit window. If we look under 'GeditWindow->globalRect' we can see the current position, and infer the size of the window as well. We can also see things like the title, is_active, and other 'xwindow typish' properties.

In addition, I can find out what buttons are present on the gedit toolbar of the current gedit window. If we look under 'GeditWindow->GtkToolbar' we can see several GtkToolButton nodes. Each has a set of properties, including a name and label.

So, let's put it all together for a quick example.
 
bzr branch lp:ubuntu-autopilot-tests

Inside the resulting directory you'll notice a geditintrospection folder.

cd ubuntu-autopilot-tests
autopilot run geditintrospection

A gedit window should spawn and disappear -- and hopefully the one testcase should pass. Open up the file geditintrospection/test_geditintrospection.py. Inside you'll notice we're using some of the properties we found while introspecting to show off how we can utilize them to test gedit. Let's cover some of the new functions briefly. You can use the autopilot documentation for more information on what you see.

    def select_single(self, type_name='*', **kwargs):
        """Get a single node from the introspection tree, with type equal to
        *type_name* and (optionally) matching the keyword filters present in
        *kwargs*.
        """

    def select_many(self, type_name='*', **kwargs):
        """Get a list of nodes from the introspection tree, with type equal to
        *type_name* and (optionally) matching the keyword filters present in
        *kwargs*.
        """

  
These two functions allow us to get back the nodes that match our query. For example, you can see the first step of the testcase is to check for a New File button, which is on the gedit toolbar. After asserting it exists, we then click it. Load up gedit for yourself and use the vis tool to confirm. You'll find the node under GeditWindow->GtkBox->GtkToolbar->GtkToolButton. Each button is represented; in this case we pulled the one with label=_New, representing the new file button.

Later we actually interact with gedit by turning on overwrite mode. Normally we would be unable to verify if this indeed worked or not, but you'll notice we once again check for the GtkLabel change from INS to OVR. Again, you can see this under GeditWindow->GtkBox->GeditStatusbar->GtkLabel.

Finally, you'll notice some slight differences from our non-introspection testcase. We now use GtkIntrospectionTestMixin, and launch our application via the launch_test_application function, rather than using AutopilotTestCase. I've shown gtk based examples here, but introspection works with Qt too (using QtIntrospectionTestMixin), so don't be afraid to try it out on any application you are interested in. 
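For reference, here's a condensed sketch of the ideas in that test. It is not the literal contents of test_geditintrospection.py; the selectors and mouse/keyboard calls below are my best-guess illustrations, so treat the file in the branch as the authoritative version.

from autopilot.testcase import AutopilotTestCase
from autopilot.introspection.gtk import GtkIntrospectionTestMixin
from testtools.matchers import NotEquals

class GeditIntrospectionTests(AutopilotTestCase, GtkIntrospectionTestMixin):

    def setUp(self):
        super(GeditIntrospectionTests, self).setUp()
        # launch_test_application starts gedit with introspection enabled.
        self.app = self.launch_test_application("gedit")

    def test_new_file_and_overwrite_mode(self):
        # Find the New File button on the toolbar and assert it exists.
        new_button = self.app.select_single("GtkToolButton", label="_New")
        self.assertThat(new_button, NotEquals(None))
        self.mouse.move_to_object(new_button)
        self.mouse.click()
        # Toggle overwrite mode, then verify the statusbar label now
        # reads OVR instead of INS.
        self.keyboard.press_and_release("Insert")
        ovr_label = self.app.select_single("GtkLabel", label="OVR")
        self.assertThat(ovr_label, NotEquals(None))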

You may also have noticed you branched from a project: ubuntu-autopilot-tests. I'm happy to announce this is the master repository for all the autopilot testcases we'll be writing as a community. Interested in helping contribute? Come join us and get involved! We're tracking work items, including proposed tests for you to work on. In addition, the tests themselves will soon be running on jenkins for everyone in the community to benefit.

Now introspection is still new, and we as a Quality Community team are excited about adopting and utilizing the tool.  There might be some bugs and feature requests (wink, wink autopilot team!) to work out, but we are excited to build a repository of automated tests together.

NOTE: Due to an error during build, it appears the autopilot online documentation is absent or missing pieces. If this occurs, please use the local documentation installed on your machine as part of the autopilot package. You'll find a copy you can browse at /usr/share/doc/python-autopilot/html/index.html.

Read more
Nicholas Skaggs

Jamming Thursday's!

Right now as I type we have two jams going on! Last week Jono posted about enhancing the ubuntu.com/community page. If you're part of the community, join in raising the banner for your specific focus area. The fun is happening now on #ubuntu-docs. For the full details, see Jono's post. For us quality folks, the pad is here: http://pad.ubuntu.com/communitywebsite-contribute-quality. Feel free to type and edit away!

In addition, as Daniel Holbach mentioned, there is a hackathon for automated testing. Come hang out with us on #ubuntu-quality, learn, ask and write some tests. Again, the full details can be found on Daniel's post.

Come join us!

Read more
Nicholas Skaggs

Our first Autopilot testcase

So last time we learned some basics for autopilot testcases. We're going to use the same code branch we pulled now to cover writing an actual testcase.

bzr branch lp:~nskaggs/+junk/autopilot-walkthrough

As a practical example, I'm going to convert our (rather simple and sparse) firefox manual testsuite into an automated test using autopilot. Here's a link to the testcase in question.

If you take a look at the included firefox/test_firefox.py file you should recognize its basic layout. We have a setup step that launches firefox before each test, and then there are the 3 testcases corresponding to each of the manual tests. The file is commented, so please do have a look through it. We utilize everything we learned last time to emulate the keyboard and mouse to perform the steps mentioned in the manual testcases. Enough code reading for a moment; let's run this thing.

autopilot run firefox

Ok, so hopefully you had firefox launch and run through all the testcases -- and they all, fingers-crossed, passed. So, how did we do it? Let's step through the code and talk about some of the challenges faced in doing this conversion.

Since we want to test firefox in each testcase, our setUp method is simple. Launch firefox and set the focus to the application. Each testcase then starts with that assumption. Inside test_browse_planet_ubuntu we simply attempt to load a webpage. Our assertion for this is to check that the application title changes to "Planet Ubuntu" -- in other words, that the page loaded. The other two testcases expand upon this idea by searching wikipedia and checking for search suggestions.
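In code, the shape of that first test looks roughly like this. It's a sketch rather than the exact contents of firefox/test_firefox.py in the branch; in particular, the sleep and the window-title lookup are illustrative stand-ins for however the real test waits for and reads the title.

from time import sleep
from autopilot.testcase import AutopilotTestCase

class FirefoxTests(AutopilotTestCase):

    def setUp(self):
        super(FirefoxTests, self).setUp()
        # Launch firefox before every test and give it focus.
        self.app = self.start_app("Firefox Web Browser")

    def test_browse_planet_ubuntu(self):
        # Focus the location bar, type a URL and load the page.
        self.keyboard.press_and_release("Ctrl+l")
        self.keyboard.type("http://planet.ubuntu.com")
        self.keyboard.press_and_release("Enter")
        sleep(10)  # crude wait for the page to load
        # Assert the page loaded by checking the window title changed.
        window_title = self.app.get_windows()[0].title  # illustrative call
        self.assertTrue("Planet Ubuntu" in window_title)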

The test_search_wikipedia method uses the keyboard shortcut to open the searchbar, select wikipedia and then search for linux. Again, our only assertion for success here is that the page with a title of Linux and wikipedia loaded. We are unable to confirm for instance, that we properly selected wikipedia as the search engine (although the final assertion would likely fail if this was not the case).

Finally, the test_google_search_suggestions method is attempting to test that the "search suggestions" feature of firefox is performing properly. You'll notice that we are missing the assertion for checking for search suggestions while searching. With the knowledge we've gained up till now, we don't have a way of knowing if the list is generated or not. In actuality, this test cannot be completed as the primary assertion cannot be verified without some way of "seeing" what's happening on the screen.

In my next post, I'll talk about what we can do to overcome the limitations we faced in doing this conversion by using "introspection". In a nutshell by using introspection, autopilot will allow us to "see" what's happening on the screen by interacting with the applications data. It's a much more robust way of "seeing" what we see as a user, rather than reading individual screen pixels. With any luck, we'll be able to finish our conversion and look at accomplishing bigger tasks and tackling larger manual testsuites.

I trust you were able to follow along and run the final example. Until the next blog post, might I also recommend having a look through the documentation and try writing and converting some tests of your own -- or simply extend and play around with what you pulled from the example branch. Do let me know about your success or failure. Happy Testing!

Read more
Nicholas Skaggs

Getting started with Autopilot

If you caught the last post, you'll have some background on autopilot and what it can do. Start there if you haven't already read the post.

So, now that we've seen what autopilot can do, let's dig in to making this work for our testing efforts. A fair warning, there is some python code ahead, but I would encourage even the non-programmers among you to have a glance at what is below. It's not exotic programming (after all, I did it!). Before we start, let's make sure you have autopilot itself installed. Note, you'll need to get the version from this ppa in order for things to work properly:

sudo add-apt-repository ppa:autopilot/ppa
sudo apt-get update && sudo apt-get install python-autopilot

Ok, so first things first. Let's create a basic shell that we can use for any testcase that we want to write. To make things a bit easier, there's a lovely bazaar branch you can pull from that has everything you need to follow along.

bzr branch lp:~nskaggs/+junk/autopilot-walkthrough
cd autopilot-walkthrough

You'll find two folders. Let's start with the helloworld folder. We're going to verify autopilot can see the testcases, and then run and look at the 'helloworld' tests first. (Note, in order for autopilot to see the testcases, you need to be in the root directory, not inside the helloworld directory)

$ autopilot list helloworld
Loading tests from: /home/nskaggs/projects/

    helloworld.test_example.ExampleFunctions.test_keyboard
    helloworld.test_example.ExampleFunctions.test_mouse
    helloworld.test_hello.HelloWorld.test_type_hello_world

 3 total tests.


Go ahead and execute the first helloworld test.

autopilot run helloworld.test_hello.HelloWorld.test_type_hello_world
 
A gedit window will spawn, and type hello world to you ;-) Go ahead and close the window afterwards. So, let's take a look at this basic testcase and talk about how it works.

from autopilot.testcase import AutopilotTestCase

class HelloWorld(AutopilotTestCase):

    def setUp(self):
        super(HelloWorld, self).setUp()
        self.app = self.start_app("Text Editor")

    def test_type_hello_world(self):
        self.keyboard.type("Hello World")


If you've used other testing frameworks that follow in the line of xUnit, you will notice the similarities. We implement an AutopilotTestCase object (class HelloWorld(AutopilotTestCase)), and define a new method for each test (ie, test_type_hello_world). You will also notice the setUp method. This is called before each test is run by the testrunner. In this case, we're launching the "Text Editor" application before we run each test (self.start_app("Text Editor")). Finally our test (test_type_hello_world) is simply sending keystrokes to type out "Hello World".

From this basic shell we can add more testcases to the helloworld testsuite easily by adding a new method. Let's add some simple ones now to show off some other capabilities of autopilot to control the mouse and keyboard. If you branched the bzr branch, there are a few more tests in the test_example.py file. These demonstrate some of the utility methods AutopilotTestCase makes available to us. Try running them now. The comments inside the file also explain briefly what each method does.

autopilot run helloworld.test_example.ExampleFunctions.test_keyboard
autopilot run helloworld.test_example.ExampleFunctions.test_mouse
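If you'd like a rough idea of what those example tests contain before pulling the branch, something along these lines is the gist. This is a sketch only; the exact calls in test_example.py may differ between autopilot versions.

from autopilot.testcase import AutopilotTestCase

class ExampleFunctions(AutopilotTestCase):

    def setUp(self):
        super(ExampleFunctions, self).setUp()
        self.app = self.start_app("Text Editor")

    def test_keyboard(self):
        # Type some text, then use a keyboard shortcut (select all).
        self.keyboard.type("Hello World")
        self.keyboard.press_and_release("Ctrl+a")

    def test_mouse(self):
        # Move the pointer to an absolute position and left-click.
        self.mouse.move(100, 100)
        self.mouse.click()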

Now there is more that autopilot can do, but armed with this basic knowledge we can put the final piece of the puzzle together. Let's create some assertions, or things that must be true in order for the test to pass. Here's a testcase showing some basic assertions.

autopilot run helloworld.test_example.ExampleFunctions.test_assert
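As a taste of what assertions look like, a minimal test in that style might read as follows. Again, this is a sketch, not the literal test_assert from the branch.

from testtools.matchers import Equals
from autopilot.testcase import AutopilotTestCase

class ExampleAssertions(AutopilotTestCase):

    def setUp(self):
        super(ExampleAssertions, self).setUp()
        self.app = self.start_app("Text Editor")

    def test_assert(self):
        self.keyboard.type("Hello")
        # Plain unittest-style assertions work...
        self.assertTrue(self.app is not None)
        # ...as do testtools matchers via assertThat.
        self.assertThat(2 + 2, Equals(4))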
  
Finally, there are some standards that are important to know when using autopilot. You'll notice a few things about each testsuite.

  • We have a folder named testsuite.
  • Inside the folder, we have a file named test_testsuite.py
  • Inside the file, we have a TestSuite class with test_testcase_name methods
  • Finally, in order for autopilot to see our testsuite we need to let python know there is a submodule in the directory. Ignoring the geekspeak, we need an __init__.py file (this can be blank if not otherwise needed)
Given the knowledge we've just acquired, we can tackle our first testcase conversion! For those of you who like to work ahead, you can already see the conversion inside the "firefox" folder. But the details, my dear Watson, will be revealed in due time. Until the next post, cheerio!

Read more
Nicholas Skaggs

A glance at Autopilot

So, as has been already mentioned, automated testing is going to come into focus this cycle. To that end, I'd like to talk about some of the tools and methods for automated testing that exist and are being utilized inside ubuntu.

I'm sure everyone has used unity at some point, and you will be happy to know that there is an automated testsuite for unity. Perhaps you've even heard the name autopilot. The unity team has built autopilot as a testing tool for unity. However, autopilot has broader applications beyond unity to help us do automated testing on a grander scale. So, to introduce you to the tool, let's check out a quick demo of autopilot in action shall we? Run the following command to install the packages needed (you'll need quantal or raring in order for this to work):

sudo apt-get install python-autopilot unity-autopilot

Excellent, let's check this out. A word of caution here, running autopilot tests on your default desktop will cause your computer to send mouse and keyboard commands all by itself ;-) So, before we go any further, let's hop over into a 'Guest Session'. You should be able to use the system indicator in the top right to select 'Guest Session'. Once you are there, you'll be in a new desktop session, so head back over to this page. Without further ado, open a terminal and type:

autopilot run unity.tests.test_showdesktop.ShowDesktopTests.test_showdesktop_hides_apps

This is a simple test to check and see if the "Show Desktop" button works. The test will spawn a couple of applications, click the show desktop button and verify clicking on it will hide your applications. It'll clean up after itself as well, so no worries. Neat eh?

You'll notice there's quite a few unity testcases, and you've installed them all on your machine now.

autopilot list unity

As of this writing, I get 461 tests returned. Feel free to try and run them. Pick one from the list and see what happens. For example,

autopilot run unity.tests.test_dash.DashRevealTests.test_alt_f4_close_dash

Just make sure you run them in a guest session -- I don't want anyone's default desktop to get hammered by the tests!

If you are feeling adventurous, you can actually run all the unity testcases like this (this will take a LONG TIME!).

autopilot run unity

As a sidenote, you are likely to find some of the testcases fail on your machine. The testsuite is run constantly by the unity developers, and the live results of commit by commit success or failure is actually available on jenkins. Check it out.

So in closing, this cycle we as a community have some goals surrounding easing the burden for ourselves in testing, freeing our resources and minds towards the deeper and more thorough testing that automation cannot handle. To help encourage this move of our basic testcases towards automation, the next series of blog posts will be a walkthrough on how to write Autopilot testcases. I hope to learn, explore and discover along with all of you. Autopilot tests themselves are written in python, but don't let that scare you off! If you are able to understand how to test, writing a testcase that autopilot can run is simply a matter of learning syntax -- non-programmers are welcome here!

Read more
Nicholas Skaggs


Greetings from Copenhagen! I thought I would give a mid-UDS checkup for the quality team community. You may have already heard some of the exciting stuff that has been discussed at UDS. Automated testing is being pursued with full vigor, the release schedule has been changed, and cadence testing is in. In addition, ubuntu is being focused into getting into fighting shape by targeting the Nexus 7 as a reference platform for mobile.

I was honored enough to have a quick plenary where attendees here got to see and hear about the various automated testing efforts going on. Does that mean the machines have replaced us? Hardly! The goal with bringing automated testing online is to help us be more proactive with how and why we test. We've done an amazing job of reacting to changes and bugs, but now as a community I would like us to focus on being proactive with our testing. The changes below are all going to help set us firmly in this direction. By proactively testing things, we eliminate bugs, and repetitive or duplicated work for ourselves. This frees us to explore more focused, more interesting, and more in-depth testing. So without further ado, here's a quick rundown of the changes discussed here in Copenhagen -- hang on to your testing hats!

Release
The Release schedule has dropped all alphas, and the first beta, resulting in a beta and then final release milestone only. In addition, the freezes have been moved back a few weeks. The end result is the archive will not be frozen till late in the cycle, allowing development and testing to continue unencumbered. This of course is for ubuntu only. Which brings us to flavors!


Flavors
Flavors will now have complete control over their releases. They can choose to test, freeze, and re-spin according to their own schedule and timing. Some will adopt ubuntu's schedule, others may retain the old milestones or even do something completely different.


ISOs
ISOs will now be automatically 'smoke' tested before general release. No more completely broken installers on the published images! In addition, the ISOs will be published daily as usual, but will not have typical milestones as mentioned above. Preference will be given to the daily ISO -- the current one -- throughout the cycle. Testing will occur in a cadence instead of a milestone.

Cadence
Rather than milestones, a bi-weekly cadence of testing will occur with the goal of assuring good quality throughout the release cycle. The cadence weeks will be scheduled and feature testing different pieces of ubuntu in a more focused manner. This includes things like unity, the installer, and new features landing in ubuntu, but will also be the target of feedback from the state of ubuntu quality.

State of ubuntu Quality
A bold effort to generate a high level view of what needs testing and what is working well on a per image basis inside of ubuntu. This is an experimental idea whose implementation will garner feedback early in the cycle and will collect data and influence decisions for testing focus during the cycle. *fingers crossed*

AutoPilot
This tool will integrate xpresser to allow for a complete functional UI testing tool. One of the first focuses for testcases will be automating the installer from a UI perspective to free our manual testing resources from basic installer testing! From the community perspective, we can join in both the writing and executing of automated tests, as well as the development of the tool itself.

Hardware Testing Database
This continuing experiment will become more of a reality. The primary focus of the work this cycle will be to bring the tool, HEXR, online and to do basic integration with the qatracker for linking your hardware profiles. In addition, focused hardware testing using the profiles will be explored.

I hope this gives you a nice preview of what's coming. I would encourage you to have a look at the blueprints and pads for the sessions, and ask questions or volunteer to help in places you are interested. I am excited about the opportunities to continue bringing testing to the next level inside of ubuntu. I owe many thanks to the wonderful community that continues to grow around testing. Here's to a wonderful cycle.

Read more