Canonical Voices

Posts tagged with 'testing'

pitti

I was asked to pour some love over autopilot-gtk, a GTK module that provides introspection of widget states to Autopilot. For those who don’t know, Autopilot is a QA tool for writing automated tests of GUI applications without the race conditions and limitations that previous tools suffered from by working only at the ATK level. Please see the documentation and tutorial for more information. There are a lot of community members who do great things with it already, such as automating testing for Ubiquity or writing tests for GNOME applications like evince, gedit, nautilus, or Shotwell. This should now hopefully become easier.

Now that autopilot-gtk has a proper test suite, I triaged all bug reports, wrote reproducers for them, and fixed them all in today’s upload to Saucy. In particular, you can now do the following:

  • Access to the GtkBuilder names: Instead of having to find a particular widget in terms of class, position, label contents, or other (sometimes) non-unique or unstable properties, you can now pick it by its unique and stable GtkBuilder name, which is the ID that most upstream code uses to manipulate widgets: b = self.app.select_single(BuilderName='entry_searchquery')
  • GtkTextBuffer type GObject properties are now translated into plain strings, which allows you to access the textual contents of a GtkTextView widget with my_textview.buffer (both for simple property access as well as for selecting by buffer contents).
  • GEnum and GFlags properties are now accessible. Enums are translated to strings (self.app.select_many('GtkButton', relief='GTK_RELIEF_HALF') or self.assertEqual(btn_greet.resize_mode, 'GTK_RESIZE_PARENT')), and flags are represented as a simple integer (like my_widget.events); in theory we could represent them as strings like FLAG_FOO | FLAG_BAR, but this becomes too unwieldy: for reliable identity matching one would always need to take care to sort them alphabetically, keep consistent spacing, etc.
  • Please let me know if you need access to other types of properties; it is now quite easy to support more (as long as there is a reasonable way of mapping them to a standard D-BUS data type). So please report bugs.
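Put together, a test exercising these new capabilities could look roughly like this; 'myapp' and the asserted widget values are hypothetical placeholders, not a real application:

from autopilot.testcase import AutopilotTestCase


class WidgetPropertyTests(AutopilotTestCase):

    def setUp(self):
        super(WidgetPropertyTests, self).setUp()
        # 'myapp' stands in for a real GTK application binary.
        self.app = self.launch_test_application('myapp', app_type='gtk')

    def test_new_property_access(self):
        # Pick a widget by its stable GtkBuilder ID.
        entry = self.app.select_single(BuilderName='entry_searchquery')
        # GtkTextBuffer-typed properties now read as plain strings.
        view = self.app.select_single('GtkTextView')
        self.assertEqual(view.buffer, '')
        # GEnum properties are matched and compared as strings.
        buttons = self.app.select_many('GtkButton', relief='GTK_RELIEF_HALF')
        self.assertNotEqual(buttons, [])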

Read more
Nicholas Skaggs

I couldn't help but start with one of the core apps I consider essential (to me anyway!) on my phone: a terminal. The terminal app being developed for ubuntu has some wonderful features built with a touch interface in mind. One of the biggest issues with touch is having a terminal-ready keyboard with things like page up and down and arrow keys, not to mention being able to use keyboard shortcuts like ctrl+d, ctrl+z, ctrl+c, etc. This has been handled rather elegantly with a long-tap menu, as you can see below, in addition to a panel that optionally appears at the top of the application.


Dmitry Zagnoyko has already landed a few tests for some of the features present, as you can see below. Excellent work Dmitry! A basic testcase now exists for each of the panels and the circle menu.



Help Dmitry and the terminal app team make sure all the features work properly for you upon release. Get involved and add a test. The initial setup work has already been done, and there are existing testcases already written. Grab the terminal branch, add a testcase from the list of needs, follow the tutorial for help if needed, and propose a merge.


Read more
Nicholas Skaggs

As a quality community team, we've been continuing to make progress this cycle on automating our testcases, especially for the new applications being written for ubuntu touch. These 'core apps' are being written by other community members for the next generation of ubuntu.

We're also making progress on our desktop applications and on automating the ubiquity installer. With that in mind, I'm going to start a little blog series highlighting a package a day for automating. I'll dub it, rather unoriginally, "Testing All The Things". My goal is to showcase the wonderful work going on with testing this cycle in ubuntu, but also to encourage you, dear reader, to get involved in helping us. All areas of ubuntu (flavors too!) can benefit from some robot friends helping test the packages they work on and utilize.

But you don't need to wait to see your favorite app hit the list. Hit up the tutorials below for information to dive in and help us!


Core Apps Test Wiki
Writing an autopilot test for ubuntu sdk applications
QML Autopilot Tutorial with example application

Autopilot Tests Project
Writing an autopilot test for desktop applications 


Read more
Michael

Have you ever wished you could just declare the installed state of your juju charm like this?

deploy_user:
    group.present:
        - gid: 1800
    user.present:
        - uid: 1800
        - gid: 1800
        - createhome: False
        - require:
            - group: deploy_user

exampleapp:
    group.present:
        - gid: 1500
    user.present:
        - uid: 1500
        - gid: 1500
        - createhome: False
        - require:
            - group: exampleapp


/srv/{{ service_name }}:
    file.directory:
        - group: exampleapp
        - user: exampleapp
        - require:
            - user: exampleapp
        - recurse:
            - user
            - group


/srv/{{ service_name }}/{{ instance_type }}-logs:
    file.directory:
        - makedirs: True

While writing charms for Juju a long time ago, one of the things that I struggled with was testing the hook code – specifically the install hook code where the machine state is set up (i.e. packages installed, directories created with correct permissions, config files set up, etc.) Often the test code would be fragile – at best you can patch some attributes of your module (like “code_location = ‘/srv/example.com/code’”) to a tmp dir and test the state correctly, but at worst you end up testing the behaviour of your code (i.e. that os.mkdir was called with the correct user/group etc.). Either way, it wasn’t fun to write and iterate those tests.

But support has improved over the past year with the charmhelpers library. And recently I landed a branch adding support for declaring saltstack states in yaml, like the above example. That means that the install hook of your hooks.py can be reduced to something like:

import sys

import charmhelpers.core.hookenv
import charmhelpers.payload.execd
import charmhelpers.contrib.saltstack


hooks = charmhelpers.core.hookenv.Hooks()


@hooks.hook()
def install():
    """Setup the machine dependencies and installed state."""
    charmhelpers.contrib.saltstack.install_salt_support()
    charmhelpers.contrib.saltstack.update_machine_state(
        'machine_states/dependencies.yaml')
    charmhelpers.contrib.saltstack.update_machine_state(
        'machine_states/installed.yaml')


# Other hooks...

if __name__ == "__main__":
    hooks.execute(sys.argv)

…letting you focus on testing and writing the actual hook functionality (like relation-sets etc.). I’d like to add some test helpers that will automatically check the syntax of the state yaml files and the template rendering output, but haven’t yet.
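Until then, a minimal syntax check is easy to sketch yourself; nothing below is part of charmhelpers, and the machine_states/ path just follows the example above:

import glob
import unittest

import yaml


class MachineStateSyntaxTest(unittest.TestCase):
    """Fail early if a declared machine state is not valid YAML."""

    def test_state_files_parse(self):
        for path in glob.glob('machine_states/*.yaml'):
            with open(path) as f:
                # yaml.safe_load raises yaml.YAMLError on syntax errors.
                state = yaml.safe_load(f)
            self.assertIsInstance(state, dict, path)


if __name__ == '__main__':
    unittest.main()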

Hopefully we can add similar support for puppet and Ansible soon too, so that the charmer can choose the tools they want to use to declare the local machine state.

A few other things that I found valuable while writing my charm:

  • Use a branch for charmhelpers – this way you can make improvements to the library as you develop and not be dependent on your changes landing straight away to deploy (thanks Sidnei – I think I just copied that idea from one of his charms). The easiest way that I found for that was to install the branch into mycharm/lib so that it’s included both in dev and when you deploy (with a small snippet in your hooks.py; see the sketch after this list).
  • Make it easy to deploy your local charm from the branch… the easiest way I found was a link-test-juju-repo make target – I’m not sure what other people do here?
  • In terms of writing actual hook functionality (like relation-set events etc), I found the easiest way to develop the charm was to iterate within a debug-hook session. Something like:
    1. write new test+code then juju upgrade-charm or add-relation
    2. run the hook and if it fails…
    3. fix and test right there within the debug-hook
    4. put the code back into my actual charm branch and update the test
    5. restore the system state in debug hook
    6. then juju upgrade-charm again to ensure it works; if it fails, iterate from step 3.
  • Use the built-in support of template rendering from saltstack for rendering any config files that you need.
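For reference, the small hooks.py snippet mentioned above can be as simple as prepending the bundled library to the import path; this sketch assumes hooks.py lives in mycharm/hooks/ next to mycharm/lib:

# At the very top of hooks/hooks.py, before the charmhelpers imports:
import os
import sys

# Make the charmhelpers branch bundled in <charm>/lib importable,
# both during development and on the deployed unit.
sys.path.insert(
    0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))

import charmhelpers.core.hookenv  # noqa: E402 (import after path setup)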

I don’t think I’d really appreciated the beauty of what juju is doing until, after testing my charm locally together with a gunicorn charm and a solr backend, I then set up a config using juju-deployer to create a full stack with an Apache front-end, a cache load balancer for multiple squid caches, a load balancer in front of potentially multiple instances of my charm's wsgi app, and then a back-end load balancer in between my app and the (multiple) solr backends… and it just works.


Filed under: juju, python, testing

Read more
Nicholas Skaggs

Given all the recent love and excitement for autopilot, I wanted to share the QA community's progress on writing autopilot tests, celebrate our successes, and let everyone know where we still need help.

First let's talk about the ubuntu-autopilot-tests project. As part of the hackfests held at the end of May/early June we were able to complete the transition of the ubuntu desktop autopilot tests to autopilot 1.3. Thanks to all of the contributors and hackers for helping on this! In addition, we now have a production branch, and the canonical platform QA team is working on adding the tests to the official smoke testing each day. Great work everyone! That said, tests are still needed, and in some cases the testcases are still basic and don't cover many of the application features. There is still room for you to be involved! Of note is the ongoing work to automate our image testing via the UI.

Next, let's talk about the core apps. Last Thursday we held a hackfest to help kickstart testcases for all of these projects. So let's take a look at how far we've come in a week. As a reminder, testcase contributions to any of the core apps are very much appreciated -- there is still a need for you to come alongside and help write tests!

Calculator
There are already several testcases merged in with the main branch, but as one of the most feature-complete applications, work and help are still needed in this area. There are currently 6 open bugs for tests needed here. This is a great application to contribute to for someone new to autopilot!

Calendar
There are two pending merge requests and the work is underway towards knocking out the rest of the testcases needed.

Clock/Alarm
The clock team has jumped in headfirst to help with testcases. You can view the status of the remaining tests needed here.

Doc Viewer
I started on a branch for this and the basic infrastructure is in place. Branch the application. Grab a copy of the emulator, pick a bug and write your test. This app needs you!

File Manager
The first merge and test is in review. But there are still more tests to be written. Have a look at the list of needed tests.


RSS Reader
Ready and waiting! Check out the list of bugs and have at it! The basic structure is already in place. Simply grab a copy of the emulator, pick a bug and write your test. This app needs you!

Terminal
The first merge request has just been approved and landed for terminal autopilot tests. But there are more features to be tested in this awesome app. Grab something off the list and go. The setup work is already done.

Music
Ready and waiting! Check out the list of bugs and have at it! The basic structure is already in place. Simply grab a copy of the emulator, pick a bug and write your test. This app needs you!

Weather
Half of the initial testcases have been started and the first merges are being proposed. Rock on Martin!

Remember you can always view the big master list of all the open tests here. We've got a bit of work ahead of us! Be a part of the team. Grab an open bug from the list above or contact me for help and I'll make sure you get involved!

Read more
Nicholas Skaggs

A couple weeks ago we announced the initiative to drive up our autopilot (that is, automated) tests for our ubuntu touch core apps. The core apps are being made with the ubuntu sdk, and thus share the same language (QML) and toolkit (the ubuntu sdk).

With this in mind I wanted to provide an emulator, which, in autopilot speak, is a utility class for writing autopilot tests that use the ubuntu SDK. The goal is to help accelerate the process of getting the testcases written, as well as to standardize best practices for testing common features. At the moment the emulator contains useful functions like tab switching, selecting from popovers, opening and closing the toolbar, and clicking toolbar buttons. Please take a look and utilize the emulator when you are contributing new tests for the ubuntu touch applications. For the moment, the emulator can be found here:

lp:~nskaggs/+junk/ubuntusdk_autopilot_emulator

The future home is hopefully in the SDK itself, but for now consider that branch your source for emulator goodness. Now, a quick FAQ.

Is it ready for use?
Yes, it's ready and tested on several core apps now including clock, calendar, terminal, and file manager. That said if you find an issue, simply contact me or propose an improvement!

How do I use it?
Inside your autopilot test subfolder, add an emulators folder if it's not already present. Next, branch my source above -- it will add ubuntusdk.py to the folder. Simply incorporate it into your __init__.py or testcase itself and call the utility functions with ubuntusdk.*. For an example check out the ubuntu-terminal-app and the merge request from today. It shows adding autopilot tests to an empty branch. In addition, the emulator (albeit an earlier version) was used in the tutorial on the ubuntu app developer portal.
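To make that concrete, a test using the emulator could look roughly like this; the constructor and helper names below are illustrative of the features described above rather than the branch's exact API:

from autopilot.testcase import AutopilotTestCase

# After branching the source above, ubuntusdk.py sits in your tests'
# emulators/ folder; the package path here follows the terminal app.
from ubuntu_terminal_app.emulators import ubuntusdk


class ToolbarTestCase(AutopilotTestCase):

    def setUp(self):
        super(ToolbarTestCase, self).setUp()
        self.app = self.launch_test_application(
            'qmlscene', 'ubuntu-terminal-app.qml', app_type='qt')
        # Hypothetical constructor: wrap the app with the SDK helpers.
        self.ubuntusdk = ubuntusdk.ubuntusdk(self, self.app)

    def test_toolbar_button(self):
        # Illustrative helper names for the toolbar handling features.
        self.ubuntusdk.open_toolbar()
        self.ubuntusdk.click_toolbar_button('Settings')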

Will it be updated?
Yes! Expect refinements and tweaks as we go along. Hopefully a true "stable and complete" version will appear in the not too distant future when the emulator itself has a proper home. In the meantime, use it, and as more complex tests are added, expect to update the emulator in the source branch you are working in.

Go forth and write tests!

Read more
Nicholas Skaggs

QATracker Survey + bonus mockup

Hot on the heels of our first cadence week, I wanted to take the opportunity to collect feedback about the tools we as a community utilize. Specifically the QATracker, which we heavily rely on for managing our work, testcases and results. From the wiki, "The QATracker is the master repository for all of our testing within ubuntu QA. It holds our testcases, records our results, and helps coordinate our testing events."

This is a link to a brief survey asking a few simple questions about how you've used the tool. All your responses are anonymous, but I will publish the aggregate question information and share it with the community once completed. The goal is to help ensure the tool is meeting our needs and is being utilized.

I'll leave the survey up until June 24th. My hope is to encourage more folks to help test as well as make it more enjoyable for those already taking part. I want to ensure our tools and processes continue to evolve, strengthen and become more robust for everyone as we continue on our mission. Part of that is making sure the tools we use are enjoyable!

Thanks in advance everyone!

As a bonus, Pasi, aka knome, has put together some mockups of how we might change what the results page looks like. This is perhaps the most utilized page of the site, so without further ado, here's a mockup of some changes proposed to make it more usable:

Old Site
New Site Mockup


What a change, eh? The add-test-results form has been moved to the sidebar and simplified, the bugs listing has been written out, and the results have been moved to the top. Finally, the links have also been moved to the sidebar and Pasi has updated the icons ;-)

So, what does everyone think about the changes? Many thanks to Pasi for putting this together! Leave a comment, a message on the mailing list, or reflect your thoughts in the survey.

Read more
Nicholas Skaggs

Join the ubuntu quality community team's effort this week! As a community we test different things in ubuntu roughly every two weeks, and share the results to flesh out bugs and problem areas.

So what's up for testing this week? The daily images, the default applications in ubuntu and a new version of the sound stack for testing.

Ready to help? Full details are here.

Need some help on how to contribute? Have a look at this page and the walkthroughs listed. Of particular interest is the ISO testing and Cadence Week testing walkthroughs.

Do note that you don't need anything special to participate in cadence week testing! Either an installed version of the development branch of ubuntu (aka saucy) in a VM or on a real box, or even a live session of the latest daily image, will work. For more information on how to use a live session to test, check out the Cadence Week testing walkthrough or watch the youtube video of the same.

Happy Testing!

Read more
pitti

I released umockdev 0.2.6. Most importantly, this now fully works on ARM platforms, as we want to use it to write tests for/on the Ubuntu phone. I tested it on my Nexus 7, and the tests also succeed on the ARM Ubuntu builders (which are Panda boards). Fixing this revealed some interesting issues in recorded ioctl traces (as they are platform-specific in some cases due to different word lengths) as well as kernel bugs in the Tegra drivers.

This version also fixes compatibility with older automake versions again, so that the daily builds for raring should work again.

I also have a new gvfs test case ready to commit which uses umockdev (if available) to test functionality of the gphoto backend. But that needs the new UMockdevTestbed.clear() API in 0.2.6, so I was holding it back. I will land it in upstream git soon.
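For context, a typical umockdev test creates a sandboxed testbed, adds fake devices, and can now reset everything between test cases with clear(). A rough Python sketch, with made-up device attributes:

import gi
gi.require_version('UMockdev', '1.0')
from gi.repository import UMockdev

# Sandbox /sys, /dev, etc. for this process.
testbed = UMockdev.Testbed.new()

# Add a fake USB device; the vendor/product IDs are made up.
testbed.add_device('usb', 'extkeyboard1', None,
                   ['idVendor', '0815', 'idProduct', 'AFFE'],  # attributes
                   ['ID_INPUT', '1'])                          # udev properties

# ... exercise the code under test against the fake device ...

# New in 0.2.6: wipe all mocked devices between test cases.
testbed.clear()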

Read more
Nicholas Skaggs

A few months ago the ubuntu touch core apps project was launched. Those of you following along with Michael's regular updates have gotten to see these applications grow up rather quickly.

Autopilot Says: How can I help?
Now it's time to add some more testing around these applications as they have reached a basic functional level of usability. Automated testing via autopilot to the rescue!

To help kickstart this process we've put together a recipe for writing autopilot tests specific to QML applications and added it to developer.ubuntu.com. In addition, we'll be hosting a hackfest next week on June 13th to help add basic autopilot testcases for each of the core apps. Folks will be on hand, ready to field your questions and hack together on the autopilot testcases needed for the applications. Join us and help support the wonderful community of application developers making awesome applications for ubuntu!

So how can you help? 
  1. First, go read through the recipe on writing autopilot tests for QML applications. It's also a good idea to have a look through the official tutorial for autopilot and bookmark the API reference link so it's handy. (A bare-bones example test is sketched just after this list.)
  2. Armed with your new knowledge, start hacking on some autopilot tests for the core apps. Here's a list of core applications along with the status of autopilot tests. Choose something that looks interesting to you and add some tests.
  3. Follow the contributing guide to help you get your work contributed into the ubuntu touch core application project you chose.
  4. Finally come out to the hackfest! It's your chance to share your work, ask questions, get your tests sorted and merged and socialize and meet other members of the community.
  5. Don't forget there is a wonderful quality community you can be a part of and get help from if you get stuck! There are mailing lists for ubuntu-touch and ubuntu-quality, as well as the IRC channels #ubuntu-touch, #ubuntu-autopilot and #ubuntu-quality. Use these resources to help you!
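As promised, here is roughly what a bare-bones QML autopilot test looks like; the .qml file name and objectName are placeholders, not from any particular core app:

from autopilot.testcase import AutopilotTestCase
from autopilot.matchers import Eventually
from testtools.matchers import Equals


class CoreAppTestCase(AutopilotTestCase):

    def setUp(self):
        super(CoreAppTestCase, self).setUp()
        # Launch the QML app through qmlscene, as the recipe describes.
        self.app = self.launch_test_application(
            'qmlscene', 'my-core-app.qml', app_type='qt')

    def test_label_shows_greeting(self):
        # Select a QML element by its objectName and wait for its state.
        label = self.app.select_single('Label', objectName='greeting')
        self.assertThat(label.text, Eventually(Equals('Hello world')))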
See you next week and happy testing!

Read more
pitti

I did a 0.2.2 maintenance release for umockdev to fix building with Vala 0.16.1, gcc 4.8 (the changed sizeof behaviour caused segfaults), and current udev releases (umockdev-record stumbled over the new “link priority” fields of udevadm). There are also a couple of bug fixes, but no new features.

Read more
Jussi Pakkanen

Let’s talk about revision control for a while. It’s great. Everyone uses it. People love the power and flexibility it provides.

However, if you read about happenings from over ten years ago or so, we find that the situation was quite different. Seasoned developers were against revision control. They would flat out refuse to use it and instead just put everything on a shared network drive or used something crazier, such as the revision control shingle.

Thankfully we as a society have moved forward. Not using revision control is a firing offense. Most people would flat out refuse a job that does not use revision control for anything short of a few million euros in cash up front. Everyone accepts that revision control is the building block of quality. This is good.

It is unfortunate that this view is severely lacking in other aspects of software development. Let’s take tests as an example. There are actually people, in visible places, who publicly and vocally speak against writing tests. And for some reason we as a whole sort of accept that, rather than immediately flagging it as ridiculous nonsense.

A first example was told to me by a friend working on a quite complex piece of mathematical code. When he discovered that there were no tests at all verifying that it worked, the reply he got was this: “If you are smart enough to be hired to work on this code, you are smart enough not to need tests.” I really wish this were an isolated incident, but in my heart I know that is not the case.

The second example is a posting made a while back by a well known open source developer. It had a blanket statement saying that test driven development is bad and harmful. The main point seemed to be a false dichotomy between good software with no tests and poor software with tests.

Even if testing is done, the implementation may be just a massive bucketful of fail. As an example, here you can read how people thought audio codecs should be tested.

As long as this kind of thinking is tolerated, no matter how esteemed the person saying it, we are in the same place as medicine was during the age of bloodletting and leeches. This is why software is considered to be an unreliable, buggy piece of garbage that costs hundreds of millions. And the only way out of it is a change of collective attitude. Unfortunately those often take quite a long time to happen, but a man can dream, can he not?

Read more

UPDATE: -s $ANDROID_SERIAL is optional.

If you ran jenkins and had a device hooked up, this is the sort of pseudo-code you would run:

phablet-flash -s $ANDROID_SERIAL -u http://cdimages.ubuntu.com/ubuntu-touch-preview/daily-preinstalled/pending
sleep 5
adb -s $ANDROID_SERIAL wait-for-device
sudo phablet-network-setup -s $ANDROID_SERIAL -i -n WAP_conf_file
phablet-test-run -s $ANDROID_SERIAL -i
phablet-test-run -s $ANDROID_SERIAL -n -p 'camera-app-autopilot' camera_app

What does each thing do? Well, here goes:

phablet-flash
We install whatever is on /pending in cdimage using the -u option to specify a url.
phablet-network-setup
After the device is flashed, we are going to need networking to set it up. The -n specifies the configuration file to use on that device that would successfully connect us to the WAP whilst the -i installs some packages such as openssh-server and sets up our public key on the device.
phablet-test-run
There are two calls here. The first just sets up autopilot with the -i, and it could very well be part of the next call. That next call installs the test package and runs the autopilot tests for that device. If the shell interferes with the tests you can stop it with -n. Adding -a and -o grabs the xml results from the test.

So that's it. Some gotchas are that autopilot is in transition right now. This is using the current fork of what we have that works on devices. The next autopilot release 1.3 was supposed to fix and integrate everything, but there is an API breakage that needs to be solved.

Since this phablet autopilot was a quick fork, and at the time there was no way to detect screen resolution, it is hard-coded to maguro's resolution and may cause issues when running on other devices (resolution detection is in the new autopilot as well, and if migration takes too long we might bring it in).

Read more
Nicholas Skaggs

Consider this text your giant disclaimer. Just a reminder these images are not intended for end-users; please don't go flashing your device thinking you'll have a replacement for android. These images are intended for developers, enthusiasts and testers who want to help. If this describes you, please read on!

I'm happy to announce the ubuntu touch images are now available for testing on the isotracker. And further, the images are now raring based! As such, the ubuntu touch team is asking folks to try out the new images on their devices and ensure there are no regressions or other issues.

There are 4 product listings representing each of the officially supported devices: grouper (nexus 7), maguro (galaxy nexus), mako (nexus 4), and manta (nexus 10). You can help by installing the new images following the installation instructions, and then reporting your results on the isotracker. If your device has never run a developer preview image for ubuntu touch, you might need to read and follow the steps on the touch wiki first.


There are handy links for download and bug information at the top of the testcases to help you out. If you do find a bug, please use the instructions to report it and add it to your result. Never used the tracker before? Take a look at this handy guide or watch the youtube version.

Once all the kinks and potential issues are worked out (your feedback requested!) the raring based images will become the default, and moving forward, the team will continue to provide daily images and participate in testing milestones as part of the 's' cycle.

As always please contact me if you run into issues, or have a question.
Thank you in advance for your help, and happy testing everyone!

Read more
Nicholas Skaggs

Filling the Gaps

I wanted to post briefly about the work that has been going on at the end of the cycle in the ubuntu quality team. Yes, we're testing the final images! Yes, it's been a wild ride that is nearing the finish! Yes, you can help contribute results! (And as we'll see below, you can help write tools too!)

But more than all of that, several team members have stepped out of their comfort zones and gone to work on one of the testing tools we as a team utilize. The tool is called "Testdrive" and is written in python. Now, one of the great things I love about QA is the opportunity to work on many different things. There are needs to fit all interests, and if you are willing, the capability to learn.

In this instance, there is an opportunity to learn a little python and to work with a new team to help keep a testing tool alive. I'm happy to see that the same tool that was rendered broken in January by updates is now alive and well, with brand new contributors, fresh patches and even a release! Many thanks to smartboyhw, noskcaj, SergioMeneses, phillw, and the others who have reached out to ensure the tool that ships in raring still works. Thanks as well to the testdrive development team for engaging with us, reviewing merge proposals, and helping to ensure testdrive still works.

I look forward to a bright future of new and improved testing tools. To those who contributed patches: with your new coding abilities, I can't wait to see what will happen next cycle! *wink, wink*

Read more
Nicholas Skaggs

The quality team invites you to a testing event for the final beta iso images. We'll be providing real-time help (IRC, or even one on one video hangouts if needed), encouraging you to download the final beta images, install, upgrade and test them out with us. You only need yourself, a machine (virtual or real!) and a bit of willingness to learn. We'll even be broadcasting for part of the event on ubuntuonair. So here's the details you need to know:

Tuesday April 2nd, 2013

  • 1800 UTC - 2200 UTC 
    • Quality team members are dedicated to hanging out in #ubuntu-quality executing testcases and helping answer questions
  •  2000 UTC: 
    • We'll be streaming live on ubuntuonair doing live testing demos and offering help
      • Basic iso test install
      • More exotic examples -- netboot, server, non-english
      • Your requests!


Interested? Great, mark the time and date on your calendar and check out the tutorial here to get a leg up on what you'll be doing during the event.


Can't make the four-hour window? Don't worry! Give testing a whirl anyways, and feel free to ask in #ubuntu-quality or on our mailing list for help.

See you on Tuesday!

Read more
pitti

I just pushed out a new python-dbusmock release 0.6.

Calling a method on the mock now emits a MethodCalled signal on the org.freedesktop.DBus.Mock interface. In some cases this is easier to track than parsing the mock’s log or using GetMethodCalls. Thanks to Lars Uebernickel for this.
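For instance, a test can now block on the signal instead of polling; a sketch using dbus-python and a GLib main loop, where 'com.example.Mocked' stands in for whatever name your mock was spawned under:

import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

# dbus-python needs a main loop to deliver signals.
DBusGMainLoop(set_as_default=True)


def on_method_called(method, args):
    # 'method' is the name of the mocked method, 'args' its arguments.
    print('mock got call: %s%r' % (method, tuple(args)))
    loop.quit()


bus = dbus.SessionBus()
mock = bus.get_object('com.example.Mocked', '/')
mock.connect_to_signal('MethodCalled', on_method_called,
                       dbus_interface='org.freedesktop.DBus.Mock')

loop = GLib.MainLoop()
loop.run()  # quits after the first mocked method call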

DBusMockObject.AddTemplate() and DBusTestCase.spawn_server_template() can now load local templates from your own project by specifying a path to a *.py file as template name. Thanks to Lucas De Marchi for this feature.

I also wrote a quite comprehensive template for systemd’s logind. It stubs out the power management functionality as well as user/seat/session objects, and is convincing enough for loginctl. Some bits, like AttachDevice, are missing, as they seem unlikely to be required for D-BUS mock tests, but please let me know if you need anything else.
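Spawning the template in a test is roughly this (logind lives on the system bus, and the last check assumes loginctl is installed):

import subprocess

import dbusmock


class TestLogindMock(dbusmock.DBusTestCase):

    @classmethod
    def setUpClass(cls):
        cls.start_system_bus()

    def setUp(self):
        (self.p_mock, self.obj_logind) = self.spawn_server_template(
            'logind', {}, stdout=subprocess.PIPE)

    def tearDown(self):
        self.p_mock.terminate()
        self.p_mock.wait()

    def test_loginctl_talks_to_mock(self):
        # The mock is convincing enough for the real loginctl client.
        subprocess.check_call(['loginctl', '--no-pager', 'list-seats'])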

The mock processes now terminate automatically if their connected D-BUS goes down, as advertised in the documentation.

You can get the new tarball from Launchpad, and I uploaded it to Debian experimental now.

Enjoy!

Read more
Nicholas Skaggs

As discussed and planned, Smart Scopes have landed! Unity 7 too is landing, with many more features around getting 100 scopes installed, privacy, and dash improvements. For details on what Unity 7 is bringing, check out this post.

In support of the Unity changes, the Unity development team is asking for some extra testing of these specific features. So, we've updated our unity suite and added a new testcase for these smart scopes. Pay attention to the cases marked mandatory and optional. The testcases relating to the smart scopes have all been marked as mandatory, and are the essential tests to run. That said, it doesn't hurt to run through the optional cases if you have time. We don't like regressions either :-)

So, here's what you need to know!

  1. Never done a call for testing before? Read/watch this first: Call for testing walkthrough
  2. Install the new unity from a ppa: Installation Instructions
  3. Load the testcases and select one: Unity 7 Testing
  4. Read the testcase, perform the actions listed and record your results.
  5. If you run into any issues, please file a bug.

Finally, please note the changelogs and build status found on the tracker, as well as any known bugs while testing. New builds will continue to trickle in over the next few days as changes land. I'd encourage you to test and then re-test later in the week to follow up on bugs you find, or to test the new things that land.

As always please contact me if you run into issues, or have a question.
Thank you in advance for your help, and happy testing everyone!

Read more
Nicholas Skaggs

I wanted to write a post about the excitement of the new platform and the wonderful new challenges ahead of us. Given that this platform is being written from the ground up right now, those with a nose for quality instantly perk up. We love well-tested applications, and developing with tests in mind from the start is much easier than attempting to retrofit them. Seeing the first fruits of the developer effort is very exciting -- good work everyone!

So with that in mind, I started looking at some of the excellent work the core apps teams are doing with their applications. They've been working with the design community to turn the nice mockups into reality. I took the liberty of checking out and running some of the first versions of these applications. The calculator is one that stood out to me as already closing in on its specification. So armed with some of the design conversation for the calculator, I started a branch to create a set of manual tests for ubuntu touch applications, starting with the calculator. If you are interested in quality, now is the time to be involved! The applications can all be installed and run on your phone or even a ubuntu desktop.

So what can you do?

If you're a tester:

If you're a developer and have questions on writing tests for your application, feel free to contact me. I would love to see not only nice unit test driven code, but also some end user tests via autopilot, and I want to make sure you as a developer have the resources to do so. In addition, we as a quality community are happy to help test with you and write some manual tests to do so for your application.

I'm helping!


Read more