Canonical Voices

Posts tagged with 'testing'

Nicholas Skaggs

Recently we've been on a campaign to help increase the number of automated tests we have for ubuntu. Specifically, the effort is focused on helping out our community developers on the core apps project. The core apps project is building the core applications for ubuntu touch. Excellent stuff, all being done by the community!

The "testing all the things" blog series is currently covering each of these core applications and ends with a call to help the development teams. I've linked to tutorials like this and this on autopilot providing what you need to know. But sometimes seeing is understanding, and a helping hand can go a long way.

With that in mind, I am announcing a series of workshops to help you gather the skills needed to write automated tests. You can help contribute with just your ubuntu pc, writing and running tests without needing phone hardware! We're going to focus on autopilot, and for the moment the ubuntu core apps. I'll try to alternate hosting them at timezone-friendly times for everyone (granted, I do have to sleep at some point too!). Here's the schedule, with links to the event pages on G+.

Tomorrow!, Wednesday July 3rd at 1800 UTC
Friday July 5th at 1300 UTC
Tuesday July 9th at 1800 UTC
Thursday July 11th at 2200 UTC

The workshops will take place in #ubuntu-quality and will all last an hour (but I won't leave you hanging if we need more time!). I'll host g+ hangouts and provide one on one help as needed to anyone writing tests. See you at the workshops!

Read more
Nicholas Skaggs

In honor of the closing of google reader, I thought I would highlight another core application that needs some attention; namely the RSS Reader, which goes by the proper name Shorts. If you're already bored and yawning (RSS is dead, long live RSS), have a look at the designs recently shared by the design team as well as the original post with the user stories. Seems like RSS might not be so dead after all (or look it)!

Yes, I still use RSS feeds, mainly as a news aggregator. In many ways, RSS feeds have long since replaced my idea of bookmarking things. Bookmarks are generally stale old content that never updates, is never refreshed, and is eventually just purged. The ideas shown in the design of Shorts are great, and the development team has a wonderful task ahead of them in implementing them.


With the development team focused on getting the code written, it's our opportunity to help out by adding testcases for their work. For instance, simple things like adding, editing, and removing an RSS feed all need to be tested. The testcases are ready and waiting for you to add a test!

Consider helping the shorts developers get everything in shape. Grab the rss reader branch, add a testcase from the list of needs, follow the tutorial for help if needed, and propose a merge. Thanks for helping to ensure quality for ubuntu touch!

Read more
Nicholas Skaggs

Coming off a lovely weekend, it's time we turned our attention to an app on the lighter side. Anyone up for a game of sudoku?


Sudoku is an example of a simple logic game that can be learned easily enough yet has the staying power to intrigue me to continue to play it. Dinko Osmankovic and the rest of the Sudoku Touch developers have created a version for ubuntu touch to fill those critical mundane moments of the day -- waiting for a train or having your morning coffee. Or perhaps, if you're like me, fighting insomnia (yikes!).

Apparently using the show hints button to play the entire game makes me a cheat.
So, while it seems the game is smart enough to slap me for trying to cheat my way through, it needs some testcases! Looking at the buglist, there are seven tasty bugs with your potential name on them. This is testing at its finest! It's rare that playing a game counts as helping ubuntu -- but in this case, it does!
Themes support!
Consider helping the sudoku touch developers as the game and its features mature. Grab the sudoku branch, add a testcase from the list of needs, follow the tutorial for help if needed, and propose a merge. Thanks for helping to ensure quality for ubuntu touch!

Read more
Nicholas Skaggs

With the announcement of Mir being the default display server for 13.10, many folks rightly wonder if it will be stable and ready to ship by then. Well, as part of ensuring that will be the case, I'd like to announce that XMir will become part of the bi-weekly cadence testing we undertake as a community team. Week 2 begins this Saturday and will include XMir testing.

The testcases we'll be using at the moment are basic smoke tests to check the overall state of XMir. As the cycle wears on, our goal is to perform a full regression test of XMir against the normal Xorg server to make sure everything is super smooth. Someone running ubuntu saucy shouldn't even realize the display server has changed. So, ready to help?

Check out this page to learn how to install Mir. Those same instructions are linked from the testcase itself. Run through the tests listed and report your results. Make sure you're logged in or you won't be able to report anything :-)

New to cadence testing or the tracker? Here's some links you might find useful:

Understanding how to use the QATracker
Understanding how to perform a cadence test

There's video versions too!

QATracker
Cadence Testing
 
Thanks for helping test ubuntu! Your willingness to live on the edge and test helps ensure properly functioning software for everyone in the stable release.

Read more
Nicholas Skaggs

A good calendar is essential to me. I'm liable to forget almost everything about my day except eating :-) Things like day of the week and month are important details I definitely rely on a calendar for (I can usually get the month right!).

Fortunately for me (and you!), there is a core app that provides a handy Calendar. Michael Hall featured this application a few weeks ago on his blog with a development rundown of the application, covering the list of features nicely. In short, there's a lot of neat stuff to test in there.

Looking at the buglist of needs, there are only 2 tests showing as in progress -- plenty of room for someone to help out by testing each of the different views: monthly, daily, and weekly, all accessed via swiping.

" Swiping left and right on the month will take you back or forward a month at a time.  Swiping left or right on the bottom half will take you back and forward a day at a time.

Pull the event area down and let it go, and the month will collapse down into a single week. Now swiping left and right there will move you back and forward a week at a time.  Pull down and let it go again and it will snap back to showing the full month.
Finally, you have an option in the toolbar (swipe up from the bottom edge) to switch from an event list to a timeline view of your events."

Are you dizzy yet?

These seamless transitions could use some cool testcases! At the moment, the app is seeing its first merge requests being made by Carla Sella and Kunal Parmar. The team ran into some issues that uncovered unique requirements for autopilot, which have now been fixed. Excellent work, both of you!
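
To make that concrete, here is a hedged sketch of what a month-swipe test could look like; the 'MonthView' object name and the qmlscene launch arguments are assumptions for illustration, not the calendar app's actual structure.

# A hedged sketch only: object names and launch arguments below are
# illustrative assumptions, not the calendar app's real structure.
from autopilot.input import Pointer, Touch
from autopilot.testcase import AutopilotTestCase


class MonthSwipeTestCase(AutopilotTestCase):

    def setUp(self):
        super(MonthSwipeTestCase, self).setUp()
        self.pointing_device = Pointer(Touch.create())
        # Launch the QML app via qmlscene; the .qml path is a placeholder.
        self.app = self.launch_test_application(
            'qmlscene', 'calendar.qml', app_type='qt')

    def test_swipe_to_next_month(self):
        # 'MonthView' is an assumed object name for the month area.
        month_view = self.app.select_single('MonthView')
        x, y, w, h = month_view.globalRect
        # Drag from the right edge toward the left to advance one month.
        self.pointing_device.drag(x + w - 10, y + h // 2, x + 10, y + h // 2)
        # A real test would assert here that the displayed month changed.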

Consider helping Carla, Kunal, and the ubuntu calendar developers as the application and its features mature. Grab the calendar branch, add a testcase from the list of needs, follow the tutorial for help if needed, and propose a merge. Thanks for helping be a part of ubuntu!


Read more
pitti

I was asked to pour some love over autopilot-gtk, a GTK module to provide introspection of widget states to Autopilot. For those who don’t know, Autopilot is a QA tool to write automatic testing of GUI applications, without the race conditions and limitations that previous tools had with using only the ATK level. Please see the documentation and tutorial for more information. There are a lot of community members who do great things with it already, such as automating testing for Ubiquity or writing tests for GNOME applications like evince, gedit, nautilus, or Shotwell. This should now hopefully become easier.

Now that autopilot-gtk has a proper testsuite, I triaged all bug reports, wrote reproducers for them, and fixed them all in today’s upload to Saucy. In particular, you can now do the following (a short sketch pulling these together follows the list):

  • Access to GtkBuilder names: Instead of having to find a particular widget in terms of class, position, label contents, or other (sometimes) non-unique or unstable properties, you can now pick it by its unique and stable GtkBuilder name, which is the ID that most upstream code uses to manipulate widgets: b = self.app.select_single(BuilderName='entry_searchquery')
  • GtkTextBuffer type GObject properties are now translated into plain strings, which allows you to access the textual contents of a GtkTextView widget with my_textview.buffer (both for simple property access as well as for selecting by buffer contents).
  • GEnum and GFlags properties are now accessible. Enums are translated to strings (self.app.select_many('GtkButton', relief='GTK_RELIEF_HALF') or self.assertEqual(btn_greet.resize_mode, 'GTK_RESIZE_PARENT')), and flags are represented as a simple integer (like my_widget.events); in theory we could represent them as strings like FLAG_FOO | FLAG_BAR, but this becomes too unwieldy; for reliable identity matching one would always need to take care to sort them alphabetically, keep consistent spacing, etc.
  • Please let me know if you need access to other types of properties, it is now quite easy to support more (as long as there is a reasonable way of mapping them to a standard D-BUS data type). So please report bugs.
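
To pull those pieces together, here is a hedged sketch of a test using the new introspection features; 'my-gtk-app' and the widget names are placeholders rather than a real application.

# A hedged sketch only: 'my-gtk-app' and the widget names are placeholders.
from autopilot.testcase import AutopilotTestCase


class IntrospectionExample(AutopilotTestCase):

    def setUp(self):
        super(IntrospectionExample, self).setUp()
        # app_type='gtk' makes autopilot load the GTK introspection module.
        self.app = self.launch_test_application('my-gtk-app', app_type='gtk')

    def test_new_property_access(self):
        # Pick a widget by its stable GtkBuilder ID instead of class/position.
        entry = self.app.select_single(BuilderName='entry_searchquery')
        self.assertIsNotNone(entry)

        # GtkTextBuffer properties now come through as plain strings.
        view = self.app.select_single('GtkTextView')
        self.assertIsNotNone(view.buffer)

        # GEnum properties read back (and match) as strings.
        for btn in self.app.select_many('GtkButton', relief='GTK_RELIEF_HALF'):
            self.assertEqual(btn.relief, 'GTK_RELIEF_HALF')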

Read more
Nicholas Skaggs

I couldn't help but start with one of the core apps I consider essential (to me anyway!) on my phone: a terminal. The terminal app being developed for ubuntu has some wonderful features built with a touch interface in mind. One of the biggest issues with touch is having a terminal-ready keyboard with things like page up and down and arrow keys, not to mention being able to use keyboard shortcuts like ctrl+d, ctrl+z, ctrl+c, etc. This has been handled rather elegantly with a long-tap menu, as you can see below, in addition to a panel that optionally appears at the top of the application.


Dmitry Zagnoyko has already landed a few tests for some of the features present, as you can see below. Excellent work, Dmitry! A basic testcase now exists for each of the panels and the circle menu.



Help Dmitry and the terminal app team make sure all the features work properly for you upon release. Get involved and add a test. The initial setup work has already been done, and there are existing testcases already written. Grab the terminal branch, add a testcase from the list of needs, follow the tutorial for help if needed, and propose a merge.


Read more
Nicholas Skaggs

As a quality community team, we've been continuing to make progress this cycle on automating our testcases, especially the new applications that are being written for ubuntu touch. These 'core apps' are being written by other community members for the next generation of ubuntu.

We're also making progress on our desktop applications and automating the ubiquity installer. With that in mind, I'm going to start a little blog series highlighting a package a day for automating. I'll dub it, rather unoriginally, "Testing All The Things". My goal is to showcase the wonderful work going on with testing this cycle in ubuntu, but also to encourage you, dear reader, to get involved in helping us. All areas of ubuntu (flavors too!) can benefit from some robot friends helping test the packages they work on and utilize.

But you don't need to wait to see your favorite app hit the list. Hit up the tutorials below for information to dive in and help us!


Core Apps Test Wiki
Writing an autopilot test for ubuntu sdk applications
QML Autopilot Tutorial with example application

Autopilot Tests Project
Writing an autopilot test for desktop applications 


Read more
Michael

Have you ever wished you could just declare the installed state of your juju charm like this?

deploy_user:
    group.present:
        - gid: 1800
    user.present:
        - uid: 1800
        - gid: 1800
        - createhome: False
        - require:
            - group: deploy_user

exampleapp:
    group.present:
        - gid: 1500
    user.present:
        - uid: 1500
        - gid: 1500
        - createhome: False
        - require:
            - group: exampleapp


/srv/{{ service_name }}:
    file.directory:
        - group: exampleapp
        - user: exampleapp
        - require:
            - user: exampleapp
        - recurse:
            - user
            - group


/srv/{{ service_name }}/{{ instance_type }}-logs:
    file.directory:
        - makedirs: True

While writing charms for Juju a long time ago, one of the things that I struggled with was testing the hook code – specifically the install hook code where the machine state is set up (ie. packages installed, directories created with correct permissions, config files setup etc.) Often the test code would be fragile – at best you can patch some attributes of your module (like “code_location = ‘/srv/example.com/code’”) to a tmp dir and test the state correctly, but at worst you end up testing the behaviour of your code (ie. os.mkdir was called with the correct user/group etc.). Either way, it wasn’t fun to write and iterate those tests.

But support has improved over the past year with the charmhelpers library. And recently I landed a branch adding support for declaring saltstack states in yaml, like the above example. That means that the install hook of your hooks.py can be reduced to something like:

import sys

import charmhelpers.core.hookenv
import charmhelpers.payload.execd
import charmhelpers.contrib.saltstack


hooks = charmhelpers.core.hookenv.Hooks()


@hooks.hook()
def install():
    """Setup the machine dependencies and installed state."""
    charmhelpers.contrib.saltstack.install_salt_support()
    charmhelpers.contrib.saltstack.update_machine_state(
        'machine_states/dependencies.yaml')
    charmhelpers.contrib.saltstack.update_machine_state(
        'machine_states/installed.yaml')


# Other hooks...

if __name__ == "__main__":
    hooks.execute(sys.argv)

…letting you focus on testing and writing the actual hook functionality (like relation-sets etc.). I’d like to add some test helpers that will automatically check the syntax of the state yaml files and template rendering output, but haven’t yet.
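
For illustration, here is a hedged sketch of the kind of syntax-check helper I have in mind (it is not part of charmhelpers): it just verifies that each machine-state file parses as a YAML mapping, so obvious mistakes surface in unit tests rather than at deploy time. Files that use template markers would need rendering before anything stricter.

# A hedged sketch, not part of charmhelpers: check that every machine-state
# file at least parses as a YAML mapping of ids to salt states.
import glob
import unittest

import yaml


class MachineStateSyntaxTest(unittest.TestCase):

    def test_state_files_parse(self):
        for path in glob.glob('machine_states/*.yaml'):
            with open(path) as state_file:
                state = yaml.safe_load(state_file)
            self.assertIsInstance(
                state, dict, '%s did not parse to a mapping' % path)


if __name__ == '__main__':
    unittest.main()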

Hopefully we can add similar support for puppet and Ansible soon too, so that the charmer can choose the tools they want to use to declare the local machine state.

A few other things that I found valuable while writing my charm:

  • Use a branch for charmhelpers – this way you can make improvements to the library as you develop and not be dependent on your changes landing straight away to deploy (thanks Sidnei – I think I just copied that idea from one of his charms). The easiest way that I found for that was to install the branch into mycharm/lib so that it’s included both in development and when you deploy (with a small snippet in your hooks.py; see the sketch after this list).
  • Make it easy to deploy your local charm from the branch… the easiest way I found was a link-test-juju-repo make target – I’m not sure what other people do here?
  • In terms of writing actual hook functionality (like relation-set events etc), I found the easiest way to develop the charm was to iterate within a debug-hook session. Something like:
    1. write new test+code then juju upgrade-charm or add-relation
    2. run the hook and if it fails…
    3. fix and test right there within the debug-hook
    4. put the code back into my actual charm branch and update the test
    5. restore the system state in debug hook
    6. then juju upgrade-charm again to ensure it works, if it fails, iterate from 3.
  • Use the built-in support of template rendering from saltstack for rendering any config files that you need.
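
As for that hooks.py snippet, here is a hedged sketch of the sort of thing I mean; the relative path is an assumption about the charm layout, so adjust it to wherever the charmhelpers branch actually lives in your charm.

# A hedged sketch: put the bundled charmhelpers branch on sys.path before
# the charmhelpers imports in hooks.py. The '../lib' path assumes hooks/
# sits next to lib/ in the charm.
import os
import sys

sys.path.insert(
    0, os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', 'lib'))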

I don’t think I’d really appreciated the beauty of what juju is doing until, after testing my charm locally together with a gunicorn charm and a solr backend, I then set up a config using juju-deployer to create a full stack with an Apache front-end, a cache load balancer for multiple squid caches, as well as a load balancer in front of potentially multiple instances of my charm's wsgi app, then a back-end load balancer between my app and the (multiple) solr backends… and it just works.


Filed under: juju, python, testing

Read more
Nicholas Skaggs

Given all the recent love and excitement for autopilot, I wanted to share the QA community's progress on writing autopilot tests, celebrate our successes, and let everyone know where we still need help.

First let's talk about the ubuntu-autopilot-tests project. As part of the hackfests held at the end of May/early June, we were able to complete the transition of the ubuntu desktop autopilot tests to autopilot 1.3. Thanks to all of the contributors and hackers for helping on this! In addition, we now have a production branch, and the canonical platform QA team is working on adding the tests to the official smoke testing each day. Great work everyone! That said, tests are still needed, and in some cases the testcases are still basic and don't cover many of the application features. There is still room for you to be involved! Of note is the ongoing work to automate our image testing via the UI.

Next, let's talk about the core apps. Last Thursday we held a hackfest to help kickstart testcases for all of these projects. So let's take a look at how far we've come in a week. As a reminder, testcase contributions to any of the core apps are very much appreciated -- there is still a need for you to come alongside and help write tests!

Calculator
There are already several testcases merged into the main branch, but as one of the most feature-complete applications, work and help are still needed in this area. There are currently 6 open bugs for tests needed here. This is a great application to contribute to for someone new to autopilot!

Calendar
There are two pending merge requests and the work is underway towards knocking out the rest of the testcases needed.

Clock/Alarm
The clock team has jumped in headfirst to help with testcases.  You can view the status of the remaining tests needed here.

Doc Viewer
I started on a branch for this and the basic infrastructure is in place. Branch the application. Grab a copy of the emulator, pick a bug and write your test. This app needs you!

File Manager
The first merge and test is in review. But there are still more tests to be written. Have a look at the list of needed tests.


RSS Reader
Ready and waiting! Check out the list of bugs and have at it! The basic structure is already in place. Simply grab a copy of the emulator, pick a bug and write your test. This app needs you!

Terminal
The first merge request has just been approved and landed for terminal autopilot tests. But there are more features to be tested in this awesome app. Grab something off the list and go. The setup work is already done.

Music
Ready and waiting! Check out the list of bugs and have at it! The basic structure is already in place. Simply grab a copy of the emulator, pick a bug and write your test. This app needs you!

Weather
Half of the initial testcases have been started and the first merges are being proposed. Rock on Martin!

Remember you can always view the big master list of all the open tests here. We've got a bit of work ahead of us! Be a part of the team. Grab an open bug from the list above or contact me for help and I'll make sure you get involved!

Read more
Nicholas Skaggs

A couple weeks ago we announced the initiative to drive up our autopilot (that is, automated) tests for our ubuntu touch core apps. The core apps are being made with the ubuntu sdk, and thus share the same language (QML) and toolkit (ubuntusdk).

With this in mind I wanted to provide an emulator, which, in autopilot speak, is a utility class for writing autopilot tests that use the ubuntu SDK. The goal is to help accelerate the process of getting the testcases written, as well as to standardize best practices for testing common features. At the moment the emulator contains useful functions like tab switching, selecting from popovers, opening and closing the toolbar, and clicking toolbar buttons. Please take a look and utilize the emulator when you are contributing new tests for the ubuntu touch applications. For the moment, the emulator can be found here:

lp:~nskaggs/+junk/ubuntusdk_autopilot_emulator

The future home is hopefully in the SDK itself, but for now consider that branch your source for emulator goodness. Now, a quick FAQ.

Is it ready for use?
Yes, it's ready and tested on several core apps now including clock, calendar, terminal, and file manager. That said if you find an issue, simply contact me or propose an improvement!

How do I use it?
Inside your autopilot test subfolder, add an emulators folder if it's not already present. Next, branch my source above -- it will add ubuntusdk.py to the folder. Simply incorporate it into your __init__.py or testcase itself and call the utility functions with ubuntusdk.*. For an example check out the ubuntu-terminal-app and the merge request from today. It shows adding autopilot tests to an empty branch. In addition, the emulator (albeit an earlier version) was used in the tutorial on the ubuntu app developer portal.
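
To give a feel for the shape of this, here is a hedged sketch of a testcase using the emulator; the import path, constructor signature, and helper name (get_toolbar) are illustrative assumptions, so check ubuntusdk.py in the branch above for the real API.

# A hedged sketch only: the import path, constructor and helper names are
# assumptions -- consult ubuntusdk.py in the branch for the actual API.
from autopilot.testcase import AutopilotTestCase

from ubuntu_terminal_app.emulators import ubuntusdk  # assumed location


class ToolbarTestCase(AutopilotTestCase):

    def setUp(self):
        super(ToolbarTestCase, self).setUp()
        # Launch the QML app via qmlscene; the .qml path is a placeholder.
        self.app = self.launch_test_application(
            'qmlscene', 'ubuntu-terminal-app.qml', app_type='qt')
        # Assumed constructor: the emulator wraps the test case and app proxy.
        self.ubuntusdk = ubuntusdk.ubuntusdk(self, self.app)

    def test_toolbar_opens(self):
        # Use the emulator helper instead of re-implementing the swipe gesture.
        toolbar = self.ubuntusdk.get_toolbar()
        self.assertIsNotNone(toolbar)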

Will it be updated?
Yes! Expect refinements and tweaks as we go along. Hopefully a true "stable and complete" version will appear in the not too distant future when the emulator itself has a proper home. In the meantime, use it, and as more complex tests are added, expect to update the emulator in the source branch you are working in.

Go forth and write tests!

Read more
Nicholas Skaggs

QATracker Survey + bonus mockup

Hot on the heels of our first cadence week, I wanted to take the opportunity to collect feedback about the tools we as a community utilize, specifically the QATracker, which we heavily rely on for managing our work, testcases and results. From the wiki, "The QATracker is the master repository for all of our testing within ubuntu QA. It holds our testcases, records our results, and helps coordinate our testing events."

This is a link to a brief survey asking a few simple questions about how you've used the tool. All your responses are anonymous, but I will publish the aggregate question information and share it with the community once completed. The goal is to help ensure the tool is meeting our needs and is being utilized.

I'll leave the survey up until June 24th. My hope is to encourage more folks to help test as well as make it more enjoyable for those already taking part. I want to ensure our tools and processes continue to evolve, strengthen and become more robust for everyone as we continue on our mission. Part of that is making sure the tools we use are enjoyable!

Thanks in advance everyone!

As a bonus, Pasi, aka knome, has put together some mockups of how we might change what the results page looks like. This is perhaps the most utilized page of the site, so without further ado, here's a mockup of some changes proposed to make it more usable:

Old Site
New Site Mockup


What a change eh? The add test results has been moved to the sidebar and simplified, the bugs listing has been written out, and the results have been moved to the top. Finally the links have also been moved to the sidebar and Pasi has updated the icons ;-)

So, what does everyone think about the changes? Many thanks to Pasi for putting this together! Leave a comment, a message on the mailing list, or reflect your thoughts in the survey.

Read more
Nicholas Skaggs

Join the ubuntu quality community team's effort this week! As a community we test different things in ubuntu roughly every two weeks, and share the results to flush out bugs and problem areas.

So what's up for testing this week? The daily images, the default applications in ubuntu and a new version of the sound stack for testing.

Ready to help? Full details are here.

Need some help on how to contribute? Have a look at this page and the walkthroughs listed. Of particular interest are the ISO testing and Cadence Week testing walkthroughs.

Do note that you don't need anything special to participate in cadence week testing! An installed version of the development branch of ubuntu (aka saucy) in a VM or on a real box, or even a live session of the latest daily image, will work. For more information on how to use a live session to test, check out the Cadence Week testing walkthrough or watch the youtube video of the same.
Happy Testing!

Read more
pitti

I released umockdev 0.2.6. Most importantly, this now fully works on ARM platforms, as we want to use it to write tests for/on the Ubuntu phone. I tested it on my Nexus 7, and the tests also succeed on the ARM Ubuntu builders (which are Panda boards). Fixing this revealed some interesting issues in recorded ioctl traces (as they are platform specific in some cases due to different word lengths) as well as kernel bugs in the Tegra drivers.

This version also fixes compatibility with older automake versions again, so that the daily builds for raring should work again.

I also have a new gvfs test case ready to commit which uses umockdev (if available) to test functionality of the gphoto backend. But that needs the new UMockdevTestbed.clear() API in 0.2.6, so I was holding that back. I will land it soon in upstream git now.

Read more
Nicholas Skaggs

A few months ago the ubuntu touch core apps project was launched. Those of you following along with Michael's regular updates have gotten to see these applications grow up rather quickly.

Autopilot Says: How can I help?
Now it's time to add some more testing around these applications as they have reached a basic functional level of usability. Automated testing via autopilot to the rescue!

To help kickstart this process we've put together a recipe for writing autopilot tests specific to QML applications and added it to developer.ubuntu.com. In addition, we'll be hosting a hackfest next week on June 13th to help add basic autopilot testcases for each of the core apps. Folks will be on-hand ready to field your questions and hack together on the autopilot testcases needed for the applications. Join us and help support the wonderful community of application developers making awesome applications for ubuntu!

So how can you help? 
  1. First, go read through the recipe on writing autopilot tests for QML applications. It's also a good idea to have a look through the official tutorial for autopilot and bookmark the API reference link so it's handy.
  2. Armed with your new knowledge, start hacking on some autopilot tests for the core apps. Here's a list of core applications along with the status of autopilot tests. Choose something that looks interesting to you and add some tests.
  3. Follow the contributing guide to help you get your work contributed into the ubuntu touch core application project you chose.
  4. Finally come out to the hackfest! It's your chance to share your work, ask questions, get your tests sorted and merged and socialize and meet other members of the community.
  5. Don't forget there is a wonderful quality community you can be a part of and get help from if you get stuck! There are mailing lists for ubuntu-touch and ubuntu-quality, as well as the IRC channels #ubuntu-touch, #ubuntu-autopilot and #ubuntu-quality. Use these resources to help you!
See you next week and happy testing!

Read more
pitti

I did a 0.2.2 maintenance release for umockdev to fix building with Vala 0.16.1, gcc 4.8 (the changed sizeof behaviour caused segfaults), and current udev releases (umockdev-record stumbled over the new “link priority” fields of udevadm). There are also a couple of bug fixes, but no new features.

Read more
Jussi Pakkanen

Let’s talk about revision control for a while. It’s great. Everyone uses it. People love the power and flexibility it provides.

However, if you read about happenings from over ten years ago or so, we find that the situation was quite different. Seasoned developers were against revision control. They would flat out refuse to use it and instead just put everything on a shared network drive or used something crazier, such as the revision control shingle.

Thankfully we as a society have gone forwards. Not using revision control is a firing offense. Most people would flat out refuse to accept a job that does not use revision control regardless of anything short of a few million euros in cash up front. Everyone accepts that revision control is the building block of quality. This is good.

It is unfortunate that this view is severely lacking in other aspects of software development. Let's take tests as an example. There are actually people, in visible places, who publicly and vocally speak against writing tests. And for some reason we as a whole sort of accept that, rather than immediately flagging it as ridiculous nonsense.

A first example was told to me by a friend working on a quite complex piece of mathematical code. When he discovered that there were no tests at all verifying that it worked, the reply he got was this: “If you are smart enough to be hired to work on this code, you are smart enough not to need tests.” I really wish this were an isolated incident, but in my heart I know that is not the case.

The second example is a posting made a while back by a well known open source developer. It had a blanket statement saying that test driven development is bad and harmful. The main point seemed to be a false dichotomy between good software with no tests and poor software with tests.

Even if testing is done, the implementation may be just a massive bucketful of fail. As an example, here you can read how people thought audio codecs should be tested.

As long as this kind of thinking is tolerated, no matter how esteemed the person saying it, we are in the same place as medicine was during the age of bloodletting and leeches. This is why software is considered an unreliable, buggy piece of garbage that costs hundreds of millions. And the only way out of it is a change of collective attitude. Unfortunately those often take quite a long time to happen, but a man can dream, can he not?

Read more

UPDATE: -s $ANDROID_SERIAL is optional.

If you ran jenkins and had a device hooked up, this is the sort of pseudo code you would run:

phablet-flash -s $ANDROID_SERIAL -u http://cdimages.ubuntu.com/ubuntu-touch-preview/daily-preinstalled/pending
sleep 5
adb -s $ANDROID_SERIAL wait-for-device
sudo phablet-network-setup -s $ANDROID_SERIAL -i -n WAP_conf_file
phablet-test-run -s $ANDROID_SERIAL -i
phablet-test-run -s $ANDROID_SERIAL -n -p 'camera-app-autopilot' camera_app

What does each thing do? Well here goes

phablet-flash
We install whatever is on /pending in cdimage using the -u option to specify a url.
phablet-network-setup
After the device is flashed, we are going to need networking to set it up. The -n specifies the configuration file to use on that device to successfully connect it to the WAP, whilst the -i installs some packages such as openssh-server and sets up our public key on the device.
phablet-test-run
There are two calls here: one just sets up autopilot with the -i, and it could very well be part of the next call. That next call installs the test package and runs the autopilot tests for that device. If the shell interferes with the tests you can stop it with -n. Adding -a and -o grabs the xml results from the test.

So that's it. Some gotchas are that autopilot is in transition right now. This is using the current fork of what we have that works on devices. The next autopilot release 1.3 was supposed to fix and integrate everything, but there is an API breakage that needs to be solved.

Since this phablet autopilot was a quick fork, and at the time there was no way to detect resolution, it's hard coded to maguro's resolution and may cause issues when running on other devices (resolution detection is in the new autopilot as well, and if migration takes too long we might bring it in).

Read more
Nicholas Skaggs

Consider this text your giant disclaimer. Just a reminder these images are not intended for end-users; please don't go flashing your device thinking you'll have a replacement for android. These images are intended for developers, enthusiasts and testers who want to help. If this describes you, please read on!

I'm happy to announce the ubuntu touch images are now available for testing on the isotracker. And further, the images are now raring-based! As such, the ubuntu touch team is asking folks to try out the new images on their devices and ensure there are no regressions or other issues.




There are 4 product listings representing each of the officially supported devices: grouper (nexus 7), maguro (galaxy nexus), mako (nexus 4), and manta (nexus 10). You can help by installing the new images following the installation instructions, and then reporting your results on the isotracker. If your device has never run a developer preview image for ubuntu touch, you might need to read and follow the steps on the touch wiki first.


There are handy links for download and bug information at the top of the testcases to help you out. If you do find a bug, please use the instructions to report it and add it to your result. Never used the tracker before? Take a look at this handy guide or watch the youtube version.

Once all the kinks and potential issues are worked out (your feedback is requested!), the raring-based images will become the default, and moving forward, the team will continue to provide daily images and participate in testing milestones as part of the 's' cycle.

As always please contact me if you run into issues, or have a question.
Thank you in advance for your help, and happy testing everyone!

Read more