Canonical Voices

Posts tagged with 'qa'

pitti

Our current autopkgtest machinery uses Jenkins (a private and a public one) and lots of “rsync state files between hosts”, both of which have reached a state where they fall over far too often. It’s flaky, hard to maintain, and hard to extend with new test execution slaves (e. g. for new architectures, or using different test runners). So I’m looking into what it would take to replace this with something robust, modern, and more lightweight.

In our new Continuous Integration world the preferred technologies are RabbitMQ for doing the job distribution (which is delightfully simple to install and use from Python), and OpenStack’s swift for distributed data storage. We have a properly configured swift in our data center, but for local development and experimentation I really just want a dead simple throw-away VM or container which gives me the swift API. swift is quite a bit more complex to set up than RabbitMQ, and it took me several hours of working through various tutorials, debugging connection problems, and reading Stack Exchange posts to get it going. But now it’s working, and I condensed the whole setup into a single setup-swift.sh shell script.

You can run this in a standard Ubuntu container or VM as root:

sudo apt-get install lxc
sudo lxc-create -n swift -t ubuntu -- -r trusty
sudo lxc-start -n swift
# log in as ubuntu/ubuntu, and wget or scp setup-swift.sh
sudo ./setup-swift.sh

Then get swift’s IP from sudo lxc-ls --fancy, install the swift client locally, and talk to it:

$ sudo apt-get install python-swiftclient
$ swift -A http://10.0.3.134:8080/auth/v1.0 -U testproj:testuser -K testpwd stat
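
For the record, the same endpoint is just as easy to poke from Python with the python-swiftclient module; a quick sketch (the container and object names are made up, IP and credentials as above):

import swiftclient.client

conn = swiftclient.client.Connection(
    authurl='http://10.0.3.134:8080/auth/v1.0',
    user='testproj:testuser', key='testpwd')

# create a container, upload an object, and read it back
conn.put_container('experiments')
conn.put_object('experiments', 'hello.txt', contents=b'hello swift')
headers, body = conn.get_object('experiments', 'hello.txt')
print(body)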

Caveat: Don’t use this for any production machine! It’s configured to maximum insecurity, with static passwords and everything.

I realize this is just poor man’s juju, but juju-local is currently not working for me (I haven’t analyzed why yet). There is a charm for swift as well, but I haven’t tried that yet. In any case, it’s dead simple now, and maybe useful for someone else.

Read more
pitti

Today’s autopilot release provides a new feature for test case writers. Unless the widget you want to test has a direct object name (GtkBuilder ID/Qt objectName), it is often not that easy to find a widget in a deeply nested hierarchy with autopilot vis.

With the new version, if you have some parent widget (like the containing dialog) w in your test, you can now call w.print_tree() to dump the paths and properties of that widget and all its children to stdout. That’s easy enough to grep, so it provides a “poor man’s full tree search”. You can also specify a different output sink, like a file object or a file name: w.print_tree('/tmp/dump.txt').
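
In a test this looks roughly like the following (just a sketch; self.app is assumed to be the application proxy from launch_test_application(), and the widget class and dump file name are made up):

  # grab some parent widget, e. g. the containing dialog
  dialog = self.app.select_single('GtkDialog')

  # dump the whole widget subtree to stdout, which is easy to grep
  dialog.print_tree()

  # or send it to a file instead
  dialog.print_tree('/tmp/dump.txt')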

This is a first step towards making it easier to find widgets and properties you are interested in. Arguably this is mostly just a crutch, but I found it to be rather effective. Before this feature I often wrote little snippets like in LP#1241312, now this becomes much easier. A better solution for this would certainly be a “full tree search” in vis itself, but that’s not that easy to implement. It is on the roadmap for this cycle, though.

I am also currently working on a real-time property change monitor for autopilot-gtk, which may also help in some cases. Unfortunately we cannot build such a thing for autopilot-qt, as, due to the nature of Qt object properties, changes to them cannot be monitored.

Read more
pitti

I recently created a test for digicam photo import for Shotwell (using autopilot and umockdev), and made that run as an autopkgtest. It occurred to me that this might be interesting for other desktop applications as well.

The community QA team has written some autopkgtests for desktop applications such as evince, nautilus, or Firefox. We run them regularly in Jenkins on real hardware in a full desktop environment, so that they can use the full desktop integration (3D, indicators, D-BUS services, etc). But of course for those the application already needs to be in Ubuntu.

If you only want to test functionality from the application itself and don’t need 3D, a proper window manager, etc., you can also call your autopilot tests from autopkgtest with a wrapper script like this:

#!/bin/sh
set -e

# start X
Xvfb :5 >/dev/null 2>&1 &
XVFB_PID=$!
export DISPLAY=:5

# start local session D-BUS
eval `dbus-launch`
trap "kill $DBUS_SESSION_BUS_PID $XVFB_PID" 0 TERM QUIT INT
export DBUS_SESSION_BUS_ADDRESS
export XAUTHORITY=/dev/null

# change to the directory where your autopilot tests live, and run them
cd `dirname $0`
autopilot run autopilot_tests

This will set up the bare minimum: Xvfb and a session D-BUS, and then run your autopilot tests. Your debian/tests/control should have Depends: yourapp, xvfb, dbus-x11, autopilot-desktop, libautopilot-gtk for this to work. (Note: I didn’t manage to get this running with xvfb-run; any hints on how to simplify this are appreciated, but please test that it actually works.)
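
For reference, a matching debian/tests/control could look roughly like this, assuming the wrapper script above is shipped as debian/tests/autopilot (the Restrictions line is only needed if your tests write to stderr):

Tests: autopilot
Depends: yourapp, xvfb, dbus-x11, autopilot-desktop, libautopilot-gtk
Restrictions: allow-stderr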

Please note that this does not replace the “run in full desktop session” tests I mentioned earlier, but it’s a nice addition to check that your package has correct dependencies and to automatically block new libraries/dependencies which break your package from entering Ubuntu.

Read more
pitti

umockdev 0.3 introduced the notion of an “umockdev script”, i. e. recording the read()s and write()s that happen on a device node such as ttyUSB0. With that one can successfully run ModemManager in an umockdev testbed to pretend that one has e. g. a USB 3G stick.

However, this didn’t yet apply to the Ubuntu phone stack, where ofonod talks to Android’s “rild” (Radio Interface Layer Daemon) through the Unix socket /dev/socket/rild. Thus over the last days I worked on extending umockdev’s script recording and replaying to Unix sockets as well (which behave quite differently from ordinary files and character devices, and are quite a bit more complex). This is released in 0.4; however, you should actually get 0.4.1 if you want to package it.

So you can now record a script of how ofonod makes a phone call (or other telephony action) through rild, and later replay that in an umockdev testbed without having to have a SIM card, or even a phone. This should help with reproducing and testing bugs like “ofonod goes crazy when roaming”: It’s enough to record the communication for a person who is in a situation to reproduce the bug, then a developer can study what’s going wrong independent of hardware and mobile networks.

How does it work? If you have used umockdev before, the pattern should be clear now: Start ofonod under umockdev-record and tell it to record the communication on /dev/socket/rild:

  sudo pkill ofonod; sudo umockdev-record -s /dev/socket/rild=phonecall.script -- ofonod -n -d

Now launch the phone app and make a call, send an SMS, or do anything else you want to replay later. Press Control-C when you are done. After that you can run ofonod in a testbed with the mocked rild:

  sudo pkill ofonod; sudo umockdev-run -u /dev/socket/rild=phonecall.script -- ofonod -n -d

Note the new --unix-stream/-u option which will create /tmp/umockdev.XXXXXX/dev/socket/rild, attach some server threads to accept client connections, and replay the script on each connection.

But wait, that fails with some

   ERROR **: ScriptRunner op_write[/dev/socket/rild]: data mismatch; got block '...', expected block '...'

error! Apparently ofono’s messages are not 100% predictable/reproducible; I guess there are some time stamps or bits of uninitialized memory involved. Normally umockdev requires that the program under test sticks to the previously recorded write() parts of the script, to ensure that the echoed read()s stay in sync and everything works as expected. But for cases like these where some fuzz is expected, umockdev 0.4 introduces setting a “fuzz percentage” in scripts. To allow 5% byte value mismatches, i. e. in a block of n bytes there can be n*0.05 bytes which differ from the script, you’d put a line

  f 5 -

before the ‘w’ block that will get jitter, or just put it at the top of the file to allow it for all messages. Please see the script format documentation for details.

After doing that, ofonod works, and you can do the exact same operations that you recorded, with e. g. the phone app. Doing other operations will fail, of course.

As always, umockdev-run -u is of course just a CLI convenience wrapper around the umockdev API. If you want to do the replay in a C test suite, you can call

   umockdev_testbed_load_socket_script(testbed, "/dev/socket/rild",
                                       SOCK_STREAM, "path/to/phonecall.script", &error);

or the equivalent in Python or Vala, as usual.
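
From Python the equivalent looks roughly like this through the GI binding (a sketch; error handling is omitted, a failure raises GLib.Error):

  import socket
  import gi
  gi.require_version('UMockdev', '1.0')
  from gi.repository import UMockdev

  testbed = UMockdev.Testbed.new()
  # replay the recorded rild communication on a mocked Unix stream socket
  testbed.load_socket_script('/dev/socket/rild', socket.SOCK_STREAM,
                             'path/to/phonecall.script')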

If you are an Ubuntu phone developer and want to use this, please don’t hesitate to talk to me. This is all in saucy now, so on the Ubuntu phone it’s a mere “sudo apt-get install umockdev” away.

Read more
pitti

I’m happy to announce a new release 0.3 of umockdev.

The big new feature is the ability to fake character devices and provide recording and replaying of communications on them. This work is driven by our need to create automatic tests for the Ubuntu phone stack, i. e. pretending that we have a 3G or phone driver and ensuring that the higher level stack behaves as expected without actually having to have a particular modem. I don’t currently have a phone capable of running Ubuntu, so I tested this against the standard ModemManager daemon which we use in the desktop. But the principle is the same, it’s “just” capturing and replaying read() and write() calls from/to a device node.

In principle it ought to work in just the same way for device nodes other than ttys, e. g. input devices or DRI control; but that will require some slight tweaks in how the fake device nodes are set up; please let me know if you are interested in a particular use case (preferably as a bug report).

With just using the command line tools, this is how you would capture ModemManager’s talking to a USB 3G stick which creates /dev/ttyUSB{0,1,2}. The communication gets recorded into a text file, which umockdev calls a “script” (yay my lack of imagination for names!):

# Dump the sysfs device and udev properties
$ umockdev-record /dev/ttyUSB* > huawei.umockdev

# Record the communication
$ umockdev-record -s /dev/ttyUSB0=0.script -s /dev/ttyUSB1=1.script \
     -s /dev/ttyUSB2=2.script -- modem-manager --debug

The --debug option for ModemManager is not necessary, but it’s nice to see what’s going on. Note that you should shut down the running system instance for that, or run this on a private D-BUS.

Now you can disconnect the stick (not necessary, just to clearly prove that the following does not actually talk to the stick), and replay in a test bed:

$ umockdev-run -d huawei.umockdev -s /dev/ttyUSB0=0.script -s /dev/ttyUSB1=1.script \
    -s /dev/ttyUSB2=2.script -- modem-manager --debug

Please note that the CLI options of umockdev-record and umockdev-run changed to be more consistent and fit the new features.

If you use the API, you can do the same with the new umockdev_testbed_load_script() method, which will spawn a thread that replays the script on the faked device node (which is just a PTY underneath).

If you want full control, you can also do all the communication from your test cases manually: umockdev_testbed_get_fd("/dev/mydevice") will give you a (bidirectional) file descriptor of the “master” end, so that whenever your program under test connects to /dev/mydevice you can directly talk to it and pretend that you are an actual device driver. You can look at the t_tty_data() test case to see what this looks like (that’s the test for the Vala binding, but it works in just the same way in C or the GI bindings).

I’m sure that there are lots of open ends here still, but as usual this work is use case driven; so if you want to do something with this, please let me know and we can talk about fine-tuning this.

In other news, with this release you can also cleanly remove mocked devices (umockdev_testbed_remove_device()), a feature requested by the Mir developers. Finally there are a couple of bug fixes; see the release notes for details.

I’ll upload this to Saucy today. If you need it for earlier Ubuntu releases, you can have a look into my daily builds PPA.

Let’s test!

Read more
pitti

I was asked to pour some love over autopilot-gtk, a GTK module to provide introspection of widget states to Autopilot. For those who don’t know, Autopilot is a QA tool for writing automatic tests for GUI applications, without the race conditions and limitations that previous tools had from working only at the ATK level. Please see the documentation and tutorial for more information. There are a lot of community members who do great things with it already, such as automating testing for Ubiquity or writing tests for GNOME applications like evince, gedit, nautilus, or Shotwell. This should now hopefully become easier.

Now that autopilot-gtk has a proper test suite, I triaged all bug reports, wrote reproducers for them, and fixed them all in today’s upload to Saucy. In particular, you can now do the following:

  • Access to the GtkBuilder names: Instead of having to find a particular widget in terms of class, position, label contents, or other (sometimes) non-unique or unstable properties, you can now pick it by its unique and stable GtkBuilder name, which is the ID that most upstream code uses to manipulate widgets: b = self.app.select_single(BuilderName='entry_searchquery')
  • GtkTextBuffer type GObject properties are now translated into plain strings, which allows you to access the textual contents of a GtkTextView widget with my_textview.buffer (both for simple property access as well as for selecting by buffer contents).
  • GEnum and GFlags properties are now accessible. Enums are translated to strings (self.app.select_many('GtkButton', relief='GTK_RELIEF_HALF') or self.assertEqual(btn_greet.resize_mode, 'GTK_RESIZE_PARENT')), and flags are represented as a simple integer (like my_widget.events); in theory we could represent them as strings like FLAG_FOO | FLAG_BAR, but this becomes too unwieldy; for reliable identity matching one would always need to take care to sort them alphabetically, keep a consistent spacing, etc.
  • Please let me know if you need access to other types of properties, it is now quite easy to support more (as long as there is a reasonable way of mapping them to a standard D-BUS data type). So please report bugs.

    Read more
pitti

I released umockdev 0.2.6. Most importantly, this now fully works on ARM platforms, as we want to use it to write tests for/on the Ubuntu phone. I tested it on my Nexus 7, and the tests also succeed on the ARM Ubuntu builders (which are Panda boards). Fixing this revealed some interesting issues in recorded ioctl traces (as they are platform specific in some cases due to different word length) as well as kernel bugs in the Tegra drivers.

This version also fixes compatibility with older automake versions again, so that the daily builds for raring should work again.

I also have a new gvfs test case ready to commit which uses umockdev (if available) to test functionality of the gphoto backend. But that needs the new UMockdevTestbed.clear() API in 0.2.6, so I was holding it back. I will land it in upstream git soon.

Read more
pitti

I did a 0.2.2 maintenance release for umockdev to fix building with Vala 0.16.1, gcc 4.8 (the changed sizeof behaviour caused segfaults), and current udev releases (umockdev-record stumbled over the new “link priority” fields of udevadm). There are also a couple of bug fixes, but no new features.

Read more
pitti

I just released umockdev 0.2.

The big new feature of this release is support for evdev ioctls. I. e. you can now record what e. g. X.org is doing to touchpads, touch screens, etc.:

  $ umockdev-record /dev/input/event15 > /tmp/touchpad.umockdev
  # umockdev-record -i /tmp/touchpad.ioctl /dev/input/event15 -- Xorg -logfile /dev/null

and load that back into a testbed with X.org using the dummy driver:

  cat <<EOF > xorg-dummy.conf
  Section "Device"
        Identifier "test"
        Driver "dummy"
  EndSection
  EOF

  $ umockdev-run -l /tmp/touchpad.umockdev -i /dev/input/event15=/tmp/touchpad.ioctl -- \
       Xorg -config xorg-dummy.conf -logfile /tmp/X.log :5

Then e. g. DISPLAY=:5 xinput will recognize the simulated device. Note that Xvfb won’t work as that does not use udev for device discovery, but only adds the XTest virtual devices and nothing else, so you need to use the real X.org with the dummy driver to run this as a normal user.

This enables easier debugging of new kinds of input devices, as well as writing tests for handling multiple touchscreens/monitors, integration tests of Wacom devices, and so on.

This release now also works with older automakes and Vala 0.16, so that you can use this from Ubuntu 12.04 LTS. The daily PPA now also has packages for that.

Attention: This version does not work any more with recorded ioctl files from version 0.1.

More detailed list of changes:

  • umockdev-run: Fix running of child program to keep stdin.
  • preload: Fix resolution of “/dev” and “/sys”
  • ioctl_tree: Fix endless loop when the first encountered ioctl was unknown
  • preload: Support opening a /dev node multiple times for ioctl emulation (issue #3)
  • Fix parallel build (issue #2)
  • Support xz compressed ioctl files in umockdev_testbed_load_ioctl().
  • Add example umockdev and ioctl files for a gphoto camera and an MTP capable mobile phone.
  • Fix building with automake 1.11.3 and Vala 0.16.
  • Generalize ioctl recording and emulation for ioctls with simple structs, i. e. no pointer fields. This makes it much easier to add more ioctls in the future.
  • Store return values of ioctls in records, as they are not always 0 (like EVIOCGBIT)
  • Add support for ioctl ranges (like EVIOCGABS) and ioctls with variable length (like EVIOCGBIT).
  • Add all reading evdev ioctls, for recording and mocking input devices like touch pads, touch screens, or keyboards. (issue #1)

Read more
pitti

What is this?

umockdev is a set of tools and a library to mock hardware devices for programs that handle Linux hardware devices. It also provides tools to record the properties and behaviour of particular devices, and to run a program or test suite under a test bed with the previously recorded devices loaded.

This allows developers of software like gphoto or libmtp to receive these records in bug reports and recreate the problem on their system without having access to the affected hardware, as well as to write regression tests which do not need any particular privileges and thus can run in a standard make check.

After working on it for several weeks and lots of rumbling on G+, it’s now useful and documented enough for the first release 0.1!

Component overview

umockdev consists of the following parts:

  • The umockdev-record program generates text dumps (conventionally called *.umockdev) of some specified, or all of the system’s devices and their sysfs attributes and udev properties. It can also record ioctls that a particular program sends and receives to/from a device, and store them into a text file (conventionally called *.ioctl).
  • The libumockdev library provides the UMockdevTestbed GObject class which builds sysfs and /dev testbeds, provides API to generate devices, attributes, properties, and uevents on the fly, and can load *.umockdev and *.ioctl records into them. It provides VAPI and GI bindings, so you can use it from C, Vala, and any programming language that supports introspection. This is the API that you should use for writing regression tests. You can find the API documentation in docs/reference in the source directory.
  • The libumockdev-preload library intercepts access to /sys, /dev/, the kernel’s netlink socket (for uevents) and ioctl() and re-routes them into the sandbox built by libumockdev. You don’t interface with this library directly, instead you need to run your test suite or other program that uses libumockdev through the umockdev-wrapper program.
  • The umockdev-run program builds a sandbox using libumockdev, can load *.umockdev and *.ioctl files into it, and run a program in that sandbox. I. e. it is a CLI interface to libumockdev, which is useful in the “debug a failure with a particular device” use case if you get the text dumps from a bug report. This automatically takes care of using the preload library, i. e. you don’t need umockdev-wrapper with this. You cannot use this program if you need to simulate uevents or change attributes/properties on the fly; for those you need to use libumockdev directly.

Example: Record and replay PtP/MTP USB devices

So how do you use umockdev? For the “debug a problem” use case you usually don’t want to write a program that uses libumockdev, but just use the command line tools. Let’s capture some runs from libmtp tools, and replay them in a mock environment:

  • Connect your digital camera, mobile phone, or other device which supports PtP or MTP, and locate it in lsusb. For example
      Bus 001 Device 012: ID 0fce:0166 Sony Ericsson Xperia Mini Pro
  • Dump the sysfs device and udev properties:
      $ umockdev-record /dev/bus/usb/001/012 > mobile.umockdev
  • Now record the dynamic behaviour (i. e. usbfs ioctls) of various operations. You can store multiple different operations in the same file, which will share the common communication between them. For example:
      $ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-detect
      $ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-emptyfolders
  • Now you can disconnect your device, and run the same operations in a mocked testbed. Please note that /dev/bus/usb/001/012 merely echoes what is in mobile.umockdev and it is independent of what is actually in the real /dev directory. You can rename that device in the generated *.umockdev files and on the command line.
      $ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-detect
      $ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-emptyfolders

Example: using the library to fake a battery

If you want to write regression tests, it’s usually more flexible to use the library instead of calling everything through umockdev-run. As a simple example, let’s pretend we want to write tests for upower.

Batteries, and power supplies in general, are simple devices in the sense that userspace programs such as upower only communicate with them through sysfs and uevents. No /dev nor ioctls are necessary. docs/examples/ has two example programs that show how to use libumockdev to create a fake battery device, change it to low charge, send an uevent, and run upower on a local test system D-BUS in the testbed, watching what happens with upower --monitor-detail. battery.c shows how to do that with plain GObject in C, battery.py is the equivalent program in Python that uses the GI binding. You can just run the latter like this:

  umockdev-wrapper python3 docs/examples/battery.py

and you will see that upowerd (which runs on a temporary local system D-BUS in the test bed) will report a single battery with 75% charge, which gets down to 2.5% a second later.

The gist of it is that you create a test bed with

  UMockdevTestbed *testbed = umockdev_testbed_new ();

and add a device with certain sysfs attributes and udev properties with

    gchar *sys_bat = umockdev_testbed_add_device (
            testbed, "power_supply", "fakeBAT0", NULL,
            /* attributes */
            "type", "Battery",
            "present", "1",
            "status", "Discharging",
            "energy_full", "60000000",
            "energy_full_design", "80000000",
            "energy_now", "48000000",
            "voltage_now", "12000000",
            NULL,
            /* properties */
            "POWER_SUPPLY_ONLINE", "1",
            NULL);

You can then e. g. change an attribute and synthesize a “change” uevent with

  umockdev_testbed_set_attribute (testbed, sys_bat, "energy_now", "1500000");
  umockdev_testbed_uevent (testbed, sys_bat, "change");

With Python or other introspected languages, or in Vala, it works the same way, except that it looks a bit leaner due to “proper” object semantics.
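
For comparison, a rough Python sketch of the same steps through the GI binding, along the lines of docs/examples/battery.py (if your version exposes the array-based variant under a different name such as add_devicev, adjust accordingly):

  import gi
  gi.require_version('UMockdev', '1.0')
  from gi.repository import UMockdev

  testbed = UMockdev.Testbed.new()
  sys_bat = testbed.add_device('power_supply', 'fakeBAT0', None,
      ['type', 'Battery',
       'present', '1',
       'status', 'Discharging',
       'energy_full', '60000000',
       'energy_full_design', '80000000',
       'energy_now', '48000000',
       'voltage_now', '12000000'],
      ['POWER_SUPPLY_ONLINE', '1'])

  # drain the battery and notify clients with a "change" uevent
  testbed.set_attribute(sys_bat, 'energy_now', '1500000')
  testbed.uevent(sys_bat, 'change')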

Packages

I have a packaging branch for Ubuntu and a recipe to do daily builds with the latest upstream code into my daily builds PPA (for 12.10 and raring). I will soon upload it to Raring proper, too.

What’s next?

The current set of features should already get you quite far for a range of devices. I’d love to get feedback from you if you use this for anything useful, in particular how to improve the API, the command line tools, or the text dump format. I’m not really happy with the split between umockdev (sys/dev) and ioctl files and the relatively complicated CLI syntax of umockdev-record, so any suggestion is welcome.

One use case that I have for myself is to extend the coverage of ioctls for input devices such as touch screens and wacom tablets, so that we can write some tests for gnome-settings-daemon plugins.

I also want to find a way to pass ioctls back to the test suite/calling program instead of having to handle them all in the preload library, which would make it a lot more flexible. However, due to the nature of the ioctl ABI this is not easy.

Where to go to

The code is hosted on github in the umockdev project; this started out as a systemd branch to add this functionality to libudev, but after a discussion with Kay we decided to keep it separate. But I kept it in git anyway, given how popular it is today. For the bzr lovers, Launchpad has an import at lp:umockdev.

Release tarballs will be on Launchpad as well. Please file bugs and enhancement requests in the GitHub tracker.

Finally, if you have questions or want to discuss something, you can always find me on IRC (pitti on Freenode or GNOME).

Thanks for your attention and happy testing!

Read more
Daniel Holbach

We all want more quality. We have all wasted too many hours trying to fix broken software, and we all know that new users struggle the most when facing crashes or other unexpected results. We probably all also agree that testing is a good idea, and if it’s automated, then that’s even better.

Automatically exercising large parts of some software’s functionality helps a lot in guaranteeing that things still work, even if the code or some underlying foundations change. The idea is to write the test-case once and have it do its work whenever bits change and let us know if things break unexpectedly – especially before users run into bugs.

Tomorrow, 1st February 2013, we are going to hang out in #ubuntu-quality on irc.freenode.net to have a Hackfest about Automated Testing.

So what’s going to happen there?

  • We are going to have seasoned Ubuntu developers who will introduce you to autopilot (for UI testing) and autopkgtest (for integrating tests with the package in a more general sense).
  • We have a list of tests we want to work on together (but you can work on your own tests if you like as well).
  • We are going to have lots of fun and make Ubuntu a better place.

If you are interested, that’s great, because this is one of the coolest contributions to Ubuntu you can make. For autopkgtest it might be good to have at least a bit of experience with scripting or programming; for autopilot less so. Be curious, be there, make Ubuntu better!

Check out our docs here and see you tomorrow!


Read more
pitti

When writing system integration tests it often happens that I want to mount some tmpfses over directories like /etc/postgresql/ or /home, and run the whole script with an unshared mount namespace so that (1) it does not interfere with the real system, and (2) is guaranteed to clean up after itself (unmounting etc.) after it ends in any possible way (including SIGKILL, which breaks usual cleanup methods like “trap”, “finally”, “def tearDown()”, “atexit()” and so on).

In gvfs’ and postgresql-common’s tests, which both have been around for a while, I prepare a set of shell commands in a variable and pipe that into unshare -m sh, but that has some major problems: It doesn’t scale well to large programs, looks rather ugly, breaks syntax highlighting in editors, and it destroys the real stdin, so you cannot e. g. call a “bash -i” in your test for interactively debugging a failed test.

I just changed postgresql-common’s test runner to use unshare/tmpfses as well, and needed a better approach. What I eventually figured out preserves stdin, $0, and $@, and still looks like a normal script (i. e. not just a single big string). It still looks a bit hackish, but I can live with that:

#!/bin/sh
set -e
# call ourselves through unshare in a way that keeps normal stdin, $0, and CLI args
unshare -uim sh -- -c "`tail -n +7 $0`" "$0" "$@"
exit $?

# unshared program starts here
set -e
echo "args: $@"
echo "mounting tmpfs"
mount -n -t tmpfs tmpfs /etc
grep /etc /proc/mounts
echo "done"

As Unix/Linux’ shebang parsing is rather limited, I didn’t find a way to do something like

#!/usr/bin/env unshare -m sh

If anyone knows a trick which avoids the “tail -n +7” hack and having to pay attention to passing around “$@”, I’d appreciate a comment on how to simplify this.

Read more
pitti

I just released Apport 2.7.

The main new feature is supporting foreign architectures in apport-retrace. If apport-retrace runs in sandbox mode and processes a crash that was not produced on the same architecture as the one it is running on, it will now build a sandbox for the report’s architecture and invoke gdb with the necessary magic options to produce a proper stack trace (and the other gdb information). Right now this works for i386, x86_64, and ARMv7, but if someone is interested in making this work for other architectures, please ping me.

This is rolled out to the Launchpad retracers, see for example Bug #1088428. So from now on you can report your armhf crashes to Launchpad and they ought to be processed. Note that I did a mass-cleanup of old armhf crash bugs this morning, as the existing ones were way too old to be retraced.

For those who are running their own retracers for their project: You need to add an armhf specific apt sources.list to your per-release configuration directory, e. g. Ubuntu 12.04/armhf/sources.list, as armhf is on ports.ubuntu.com instead of archive.ubuntu.com. Also, you need to add an armhf crash database to your crashdb.conf and add a cron job for the new architecture. You can see what all this looks like in the configuration files for the Launchpad retracers.
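
Such a sources.list just points at the ports archive; for 12.04 it would look something like this (adjust the pockets and components to whatever your retracer needs):

  deb http://ports.ubuntu.com/ubuntu-ports/ precise main restricted universe multiverse
  deb http://ports.ubuntu.com/ubuntu-ports/ precise-updates main restricted universe multiverse
  deb http://ports.ubuntu.com/ubuntu-ports/ precise-security main restricted universe multiverse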

The other improvement concerns package hooks. So far, when a package hook crashed the exception was only printed to stderr, where most people would never see it when using the GTK or KDE frontend. With 2.7 these exceptions are also added to the report itself (HookError_filename), so that they appear in the bug reports.

The release also fixes a couple of bugs, see the release notes for details.

Read more
pitti

With python-dbusmock you can provide mocks for arbitrary D-BUS services for your test suites or if you want to reproduce a bug.

However, when writing actual tests for gnome-settings-daemon etc. I noticed that it is rather cumbersome to always have to set up the “skeleton” of common services such as UPower. python-dbusmock 0.2 now introduces the concept of “templates” which provide those skeletons for common standard services so that your code only needs to set up the particular properties and specific D-BUS objects that you need. These templates can be parameterized for common customizations, and they can provide additional convenience methods on the org.freedesktop.DBus.Mock interface to provide more abstract functionality like “add a battery”.

So if you want to pretend you have one AC and a half-charged battery, you can now simply do

  def setUp(self):
     (self.p_mock, self.obj_upower) = self.spawn_server_template('upower', {})

  def test_ac_bat(self):
     self.obj_upower.AddAC('mock_AC', 'Mock AC')
     self.obj_upower.AddChargingBattery('mock_BAT', 'Mock Battery', 50.0, 1200)

Or, if your code is not in Python, use the CLI/D-BUS interface, like in shell:

  # start a fake system bus
  eval `dbus-launch`
  export DBUS_SYSTEM_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS

  # start mock upower on the fake bus
  python3 -m dbusmock --template upower &

  # add devices
  gdbus call --system -d org.freedesktop.UPower -o /org/freedesktop/UPower \
      -m org.freedesktop.DBus.Mock.AddAC mock_ac 'Mock AC'
  gdbus call --system -d org.freedesktop.UPower -o /org/freedesktop/UPower \
      -m org.freedesktop.DBus.Mock.AddChargingBattery mock_bat 'Mock Bat' 50.0 1200

In both cases upower --dump or gnome-power-statistics will show you the expected devices (of course you need to run that within the environment of the fake $DBUS_SYSTEM_BUS_ADDRESS, or run the mock on the real system bus as root).
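
In a Python test case this check can be turned into an assertion, too, provided the fake $DBUS_SYSTEM_BUS_ADDRESS is in the environment of the spawned process (a sketch, using the device names added above):

  def test_ac_bat_visible(self):
      # needs "import subprocess" at the top of the test module;
      # upower talks to the mock through the fake system bus
      out = subprocess.check_output(['upower', '--dump'],
                                    universal_newlines=True)
      self.assertIn('mock_AC', out)
      self.assertIn('mock_BAT', out)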

Iftikhar Ahmad contributed a template for NetworkManager, which allows you to easily set up ethernet and wifi devices and wifi access points. See pydoc3 dbusmock.templates.networkmanager for details, and the test cases to see how this looks in practice.

I just released python-dbusmock 0.2.1 and uploaded the new version to Debian experimental. I will sync it into Ubuntu Raring in a few hours.

Read more
pitti

For writing tests for GVFS (current tests, proposed improvements) I want to run Samba as normal user, so that we can test gvfs’ smb backend without root privileges and thus can run them safely and conveniently in a “make check” environment for developers and in JHBuild for continuous integration testing. Before these tests could only run under gvfs-testbed, which needs root.

Unlike other servers such as ssh or ftp, this turned out surprisingly non-obvious and hard, so I want to document it in this blog post for posterity’s benefit.

Running the server

Running smbd itself is mainly an exercise of figuring out all the options that you need to set; Alex Larsson and I had some fun figuring out all the quirks and hiccups that happen between Ubuntu’s and Fedora’s packaging and 3.6 vs. 4.0, but finally arrived at something working.

First, you need to create an empty directory where smbd can put all its databases and state files in. For tests you would use mkdtemp(), but for easier reading I just assume mkdir /tmp/samba here.

The main knowledge is in the Samba configuration file, let’s call it /tmp/smb.conf:

[global]
workgroup = TESTGROUP
interfaces = lo 127.0.0.0/8
smb ports = 1445
log level = 2
map to guest = Bad User
passdb backend = smbpasswd
smb passwd file = /tmp/smbpasswd
lock directory = /tmp/samba
state directory = /tmp/samba
cache directory = /tmp/samba
pid directory = /tmp/samba
private dir = /tmp/samba
ncalrpc dir = /tmp/samba

[public]
path = /tmp/public
guest ok = yes

[private]
path = /tmp/private
read only = no

For running this as a normal user you need to set a port > 1024, so this uses 1445 to resemble the original (privileged) port 445. The map to guest line makes anonymous logins work on Fedora/Samba 4.0 (I’m not sure whether it’s a distribution or a version issue). Don’t ask about “dir” vs. “directory”, that’s an inconsistency in Samba; with above names it works on both 3.6 and 4.0.

We use the old “smbpasswd” backend as shipping large tdb files is usually too inconvenient and brittle for test suites. I created an smbpasswd file by running smbpasswd on a “real” Samba installation, and then using pdbedit to convert it to a smbpasswd file:

sudo smbpasswd -a martin
sudo pdbedit -i tdbsam:/var/lib/samba/passdb.tdb -e smbpasswd:/tmp/smbpasswd

The result for password “foo” is

myuser:0:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:AC8E657F83DF82BEEA5D43BDAF7800CC:[U          ]:LCT-507C14C7:

which you are welcome to copy&paste (you can replace “myuser” with any valid user name, of course).

This also defines two shares, one public, one authenticated. You need to create the directories and populate them a bit:

mkdir /tmp/public /tmp/private
echo hello > /tmp/public/hello.txt
echo secret > /tmp/private/myfile.txt

Now you can run the server with

smbd -iFS -s /tmp/smb.conf

The main problem with this approach is that smbd exits (“Server exit (failed to receive smb request)”) after a client terminates, so you need to write your tests in a way to only run one connection/request per test, or to start smbd in a loop.

Running the client

If you merely use the smbclient command line tool, this is rather simple: It has a -p option for specifying the port:

$ smbclient -p 1445 //localhost/private
Enter martin's password: [enter "foo" here]
Domain=[TESTGROUP] OS=[Unix] Server=[Samba 3.6.6]
smb: \> dir
  .                                   D        0  Wed Oct 17 08:28:23 2012
  ..                                  D        0  Wed Oct 17 08:31:24 2012
  myfile.txt                                   7  Wed Oct 17 08:28:23 2012

In the case of gvfs it wasn’t so simple, however. Surprisingly, libsmbclient does not have an API to set the port, it always assumes 445. smbclient itself uses some internal “libcli” API which does have a way to change the port, but it’s not exposed through libsmbclient. However, Alex and I found some mailing list posts ([1], [2]) that mention $LIBSMB_PROG, and it’s also mentioned in smbclient’s manpage. It doesn’t quite work as advertised in the second ML post (you can’t set it to smbd, smbd apparently doesn’t speak the socket protocol over stdin/stdout), and it’s not being used anywhere in the current Samba sources, but what does work is to use good old netcat:

export LIBSMB_PROG="nc localhost 1445"

with that, you can use smbclient or any program using libsmbclient to talk to our test smb server running as user.
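
Putting the pieces together, a test suite fixture could set all of this up per test along these lines (just a sketch of the approach, not gvfs’ actual test code; it only covers the anonymous public share, and a real fixture would poll until smbd accepts connections instead of sleeping):

import os
import shutil
import subprocess
import tempfile
import textwrap
import time
import unittest

class SmbTestCase(unittest.TestCase):
    def setUp(self):
        # throw-away directory for smbd's state files and the share
        self.workdir = tempfile.mkdtemp()
        os.mkdir(os.path.join(self.workdir, 'public'))
        with open(os.path.join(self.workdir, 'public', 'hello.txt'), 'w') as f:
            f.write('hello\n')

        # abridged version of the smb.conf from above, with the state
        # directories and share path pointing into the temporary directory
        self.smb_conf = os.path.join(self.workdir, 'smb.conf')
        with open(self.smb_conf, 'w') as f:
            f.write(textwrap.dedent('''\
                [global]
                workgroup = TESTGROUP
                interfaces = lo 127.0.0.0/8
                smb ports = 1445
                map to guest = Bad User
                lock directory = %(d)s
                state directory = %(d)s
                cache directory = %(d)s
                pid directory = %(d)s
                private dir = %(d)s
                ncalrpc dir = %(d)s

                [public]
                path = %(d)s/public
                guest ok = yes
                ''') % {'d': self.workdir})

        # smbd exits after a client disconnects, so start a fresh one per test
        self.smbd = subprocess.Popen(['smbd', '-iFS', '-s', self.smb_conf])
        time.sleep(1)

        # point libsmbclient users (such as gvfs) at our unprivileged port
        os.environ['LIBSMB_PROG'] = 'nc localhost 1445'

    def tearDown(self):
        self.smbd.terminate()
        self.smbd.wait()
        shutil.rmtree(self.workdir)

    def test_public_share(self):
        out = subprocess.check_output(
            ['smbclient', '-p', '1445', '-N', '//localhost/public', '-c', 'dir'],
            universal_newlines=True)
        self.assertIn('hello.txt', out)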

Read more
pitti

I found it surprisingly hard to determine in tearDown() whether the test that just ran succeeded or not. I am writing some tests for gnome-settings-daemon and want to show the log output of the daemon if a test failed.

I now cobbled together the following hack, but I wonder if there’s a more elegant way? The interwebs don’t seem to have a good solution for this either.

    def tearDown(self):
        [...]
        # collect log, run() shows it on failures
        with open(self.daemon_log.name) as f:
            self.log_output = f.read()

    def run(self, result=None):
        '''Show log output on failed tests'''

        if result:
            orig_err_fail = result.errors + result.failures
        super().run(result)
        if result and result.errors + result.failures > orig_err_fail:
            print('\n----- daemon log -----\n%s\n------\n' % self.log_output)

Read more
pitti

I was working on writing tests for gnome-settings-daemon a week or so ago, and finally got blocked on being unable to set up upower/ConsoleKit/etc. the way I need them. Also, doing so needs root privileges, I don’t want my test suite to actually suspend my machine, and using the real service is generally not suitable for test suites that are supposed to run during “make check”, in jhbuild, and the like — these do not have the polkit privileges to do all that, and may not even have a system D-Bus running in the first place.

So I wrote a little test_upower.py helper, then realized that I need another one for systemd/ConsoleKit (for the “system idle” property), also looked at the mock polkit in udisks and finally sat down for two days to generalize this and do this properly.

The result is python-dbusmock, I just released the first tarball. With this you can easily create mock objects on D-Bus from any programming language with a D-Bus binding, or even from the shell.

The mock objects look like the real API (or at least the parts that you actually need), but they do not actually do anything (or only some action that you specify yourself). You can configure their state, behaviour and responses as you like in your test, without making any assumptions about the real system status.

When using a local system/session bus, you can do unit or integration testing without needing root privileges or disturbing a running system. The Python API offers some convenience functions like “start_session_bus()” and “start_system_bus()” for this, in a “DBusTestCase” class (subclass of the standard “unittest.TestCase”).

Surprisingly I found very little precedent here. There is a Perl module, but that’s not particularly helpful for test suites in C/Vala/Python. And there is Phil’s excellent Bendy Bus, but this has a different goal: If you want to thoroughly test a particular D-BUS service, such as ensuring that it does the right thing, doesn’t crash on bad input, etc., then Bendy Bus is for you (and python-dbusmock isn’t). However, it is too much overhead and rather inconvenient if you want to test a client-side program and just need a few system services around it which you want to set up in different states for each test.

You can use python-dbusmock with any programming language, as you can run the mocker as a normal program. The actual setup of the mock (adding objects, methods, properties, etc.) all happens via D-Bus methods on the “org.freedesktop.DBus.Mock” interface. You just don’t have the convenience D-Bus launch API.

The simplest possible example is to create a mock upower with a single Suspend() method, which you can set up like this from Python:

import dbus
import dbusmock

class TestMyProgram(dbusmock.DBusTestCase):
[...]
    def setUp(self):
        self.p_mock = self.spawn_server('org.freedesktop.UPower',
                                        '/org/freedesktop/UPower',
                                        'org.freedesktop.UPower',
                                        system_bus=True,
                                        stdout=subprocess.PIPE)

        # Get a proxy for the UPower object's Mock interface
        self.dbus_upower_mock = dbus.Interface(self.dbus_con.get_object(
            'org.freedesktop.UPower', '/org/freedesktop/UPower'),
            'org.freedesktop.DBus.Mock')

        self.dbus_upower_mock.AddMethod('', 'Suspend', '', '', '')

[...]

    def test_suspend_on_idle(self):
        # run your program in a way that should trigger one suspend call

        # now check the log that we got one Suspend() call
        self.assertRegex(self.p_mock.stdout.readline(), b'^[0-9.]+ Suspend$')

This doesn’t depend on Python, you can just as well run the mocker like this:

python3 -m dbusmock org.freedesktop.UPower /org/freedesktop/UPower org.freedesktop.UPower

and then set up the mocks through D-Bus like

gdbus call --system -d org.freedesktop.UPower -o /org/freedesktop/UPower \
      -m org.freedesktop.DBus.Mock.AddMethod '' Suspend '' '' ''

If you use it with Python, you get access to the dbusmock.DBusTestCase class which provides some convenience functions to set up and tear down local private session and system buses. If you use it from another language, you have to call dbus-launch yourself.

Please see the README for some more details, pointers to documentation and examples.

Update: You can now install this via pip from PyPI or from the daily builds PPA.

Update 2: Adjusted blog entry for the version 0.0.3 API, to avoid spreading now-outdated information too far.

Read more
pitti

I just released Apport 2.5 with a bunch of new features and some bug fixes.

By default you cannot report bugs and crashes against packages from PPAs, as they are not Ubuntu packages. Some packages like Unity or UbuntuOne define their own crash database which reports bugs against the project instead. This has been a bit cumbersome in the past, as these packages needed to ship a /etc/apport/crashdb.conf.d/ snippet. This has become much easier: package hooks can now define a new crash database directly (#551330):

def add_info(report, ui):
   if determine_whether_to_report_to_upstream:
       report['CrashDB'] = '{ "impl": "launchpad", "project": "picsaw" }'

(Documented in package-hooks.txt)

Apport now also looks for package hooks in /opt (#1020503) if the executable path or a file in the package is somewhere below /opt (it tries all intermediate directories).

With these two, we should have much better support for filing bugs against ARB packages.

This version also finally drops the usage of gksu and moves to PolicyKit. Now we only have one package left in the default install (update-notifier) which uses it. Almost there!

Read more
pitti

A few weeks ago I wrote about my new role as an upstream QA engineer. I have now officially been in that role since June. Quite expectedly I had (and still have) some backlog from my previous Desktop engineer role, but I have had plenty of time to work on automatic tests and some test technology now. If you are interested in the daily details, you can look at the ramblings on my G+ page; in a nutshell I worked on integration tests for udisks2 (mostly upstream now), a mock polkit API, and a small enhancement of the scsi_debug kernel module. On the distro QA side I got the integration tests of udisks2, upower, PostgreSQL, Apport, and ubuntu-drivers-common working and added to our Jenkins autopkgtest runner, where they are executed whenever the particular package or any of its dependencies get updated. This already uncovered a surprising number of actual bugs, so I’m happy that this system starts being useful after the initial hump of getting the tests to run properly in that environment.

In that previous blog post I mentioned that Canonical will hire a second person for an upstream QA engineer role. I am pleased that the job posting is now online, so if you are familiar with how the Linux plumbing and desktop stacks work, are frantic about testing, like to work with the Linux, plumbing, GNOME, and other FOSS communities, know your way around jhbuild, Jenkins, and similar technologies, and would like to explore new possibilities like applying static code checking or creating APIs to fake hardware, please have a look at the role description! Please feel free to contact me on IRC (pitti on Freenode) or by email if you have further questions about the role.

Read more
pitti

As I wrote two weeks ago, I consider the QA related changes in Ubuntu 12.04 a great success. But while we will continue and even extend our efforts there, this is not where the ball stops: it’s great to have the feedback cycle between “break it” and “notice the bug” reduced from potentially a few months to one day in many cases, but wouldn’t it be cool to reduce that to a few minutes, and also put the machinery right at where stuff really happens — into the upstream trunks? If for every commit to PyGObject, GTK, NetworkManager, udisks, D-BUS, telepathy, gvfs, etc. we’d immediately build and test all reverse dependencies and the committer would be told about regressions?

I have had the desire to work on automated tests in Linux Plumbing and GNOME for quite a while now. Also, after 8 years of doing distribution work of packaging and processes (tech lead, release engineering/management, stable release updates, etc.) I wanted to shift my focus towards technology development. So I’ve been looking for a new role for some time now.

It seems that time is finally there: At the recent UDS, Mark announced that we will extend our QA efforts to upstream. I am very happy that in two weeks I can now move into a role to make this happen: Developing technology to make testing easier, work with our key upstreams to set up test suites and reporting, and I also can do some general development in areas that are near and dear to my heart, like udev/systemd, udisks, pygobject, etc. This work will be following the upstream conventions for infrastructure and development policies. In particular, it is not bound by Canonical copyright license agreements.

I have a bunch of random ideas what to work on, such as:

  • Making it possible/easier to write tests with fake hardware. E. g. in the upower integration tests that I wrote a while ago there is some code to create a fake sysfs tree which should really go into udev itself, available from C and introspection and be greatly extended. Also, it’s currently not possible to simulate a uevent that way, that’s something I’d like to add. Right now you can only set up /sys, start your daemon, and check the state after the coldplugging phase.
  • Interview some GNOME developers about what kind of bugs/regressions/code they have most trouble with and what/how they would like to test. Then write a test suite with a few working and one non-working case (bugzilla should help with finding these), discuss the structure with the maintainer again, find some ways to make the tests radically simpler by adding/enhancing the API available from gudev/glib/gtk, etc. E. g. in the tests for apport-gtk I noticed that while it’s possible to do automatic testing of GUI applications it is still way harder than it should be. I guess that’s the main reason why there are hardly any GUI tests in GNOME?
  • I’ve heard from several people that it would be nice to be able to generate some mock wifi/ethernet/modem adapters to be able to automatically test NetworkManager and the like. As network devices are handled specially in Linux, not in the usual /dev and sysfs manner, they are not easy to fake. It probably needs a kernel module similar to scsi_debug, which fakes enough of the properties and behaviour of particular network card devices to be useful for testing. One could certainly provide a pipe or a regular bridge device at the other end to actually talk to the application through the fake device. (NB this is just an idea, I haven’t looked into details at all yet).
  • For some GUI tests it would be much easier if there was a very simple way of providing mocks/stubs for D-BUS services like udisks or NetworkManager than having to set up the actual daemons, coerce them into some corner-case behaviour, and needing root privileges for the test due to that. There seems to be some prior art in Ruby, but I’d really like to see this in D-BUS itself (perhaps a new library there?), and/or having this in GDBus where it would even be useful for Python or JavaScript tests through gobject-introspection.
  • There are nice tools like the Clang static code analyzer out there. I’d like to play with those and see how we can use it without generating a lot of false positives.
  • Robustify jhbuild to be able to keep up with building everything largely unattended. Right now you need to blow away and rebuild your tree way too often, due to brittle autotools or undeclared dependencies, and if we want to run this automatically in Jenkins it needs to be able to recover by itself. It should be able to keep up with the daily development, automatically starting build/test jobs for all reverse dependencies for a module that just has changed (and for basic libraries like GLib or GTK that’s going to be a lot), and perhaps send out mail notifications when a commit breaks something else. This also needs some discussion first, about how/where to do the notifications, etc.

Other ideas will emerge, and I hope lots of you have their own ideas what we can do. So please talk to me! We’ll also look for a second person to work in that area, so that we have some capacity and also the possibility to bounce ideas and code reviews between each other.

Read more