Canonical Voices

Posts tagged with 'testing'

Nicholas Skaggs

The final images of what will become utopic are here! Yes, in just one short week utopic unicorn will be released into the world. Celebrate this exciting release and be among the first to run utopic by helping us test!

We need your help and test results, both positive and negative. Please head over to the milestone on the isotracker, select your favorite flavor, and perform the needed tests against the images.

If you've never submitted test results on the iso tracker, check out the handy links at the top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the qa community or myself for help.

Thank you for helping to make ubuntu better! Happy Testing!

Read more
pitti

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.
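
For illustration, preparing such a controller might look roughly like this (the OpenStack credential values are placeholders for whatever your cloud provides):

  $ sudo apt-get install autopkgtest python-novaclient
  $ export OS_AUTH_URL=https://keystone.example.com:5000/v2.0
  $ export OS_TENANT_NAME=myproject
  $ export OS_USERNAME=myuser
  $ export OS_PASSWORD=secret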

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored the entire knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: Essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt specific “daily” image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this, the only parameter that it gets is the name of the pristine image to base on. This was tested on Canonical’s Bootstack cloud, so it might need some adjustments on other clouds.
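
A rough sketch of the steps such a script performs (the instance name, flavor, key pair, ssh user, and resulting image name are illustrative):

  $ nova boot --image "$PRISTINE_IMAGE" --flavor m1.small --key-name adt adt-image-build
  $ # wait for the instance to become ACTIVE, then look up its IP with "nova show"
  $ ssh ubuntu@$INSTANCE_IP sudo /usr/share/autopkgtest/adt-setup-vm
  $ nova image-create adt-image-build adt-utopic-amd64
  $ nova delete adt-image-build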

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the “most demanding” tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist
  apport
  apt
  aptdaemon
  apache2
  autopilot-gtk
  autopkgtest
  binutils
  chromium-browser
  cups
  dbus
  gem2deb
  glib-networking
  glib2.0
  gvfs
  kcalc
  keystone
  libnih
  libreoffice
  lintian
  lxc
  mysql-5.5
  network-manager
  nut
  ofono-phonesim
  php5
  postgresql-9.4
  python3.4
  sbuild
  shotwell
  systemd-shim
  ubiquity
  ubuntu-drivers-common
  udisks2
  upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an “always need this constant to select a non-default network” option that is specific to Canonical Bootstack.

Finally, let’s run the packages from above, using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left, both due to flaky tests; the infrastructure itself now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Read more
pitti

Last week’s autopkgtest 3.5 release (in Debian sid and Ubuntu Utopic) brings several new features which I’d like to announce.

Tests that reboot

For testing low-level packages like init or the kernel it is sometimes desirable to reboot the testbed in the middle of a test. For example, I added a new boot_and_services systemd autopkgtest which configures grub to boot with systemd as pid 1, reboots, and then checks that the most important services like lightdm, D-BUS, NetworkManager, and cron come up as expected. (This test will be expanded a lot in the future to cover other areas like the journal, logind, etc.)

In a testbed which supports rebooting (currently only QEMU) your test will now find an “autopkgtest-reboot” command which the test calls with an arbitrary “marker” string. autopkgtest will then reboot the testbed, save/restore any files it needs to (like the test’s file tree or previously created artifacts), and then re-run the test with ADT_REBOOT_MARK=mymarker.
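
For illustration, a test using this mechanism might be structured roughly like this (the marker name and the checks are made up):

  #!/bin/sh
  set -e
  if [ -z "$ADT_REBOOT_MARK" ]; then
      # first run: prepare whatever needs the reboot, then request one
      autopkgtest-reboot mymarker
  elif [ "$ADT_REBOOT_MARK" = mymarker ]; then
      # second run, after the testbed came back up: check the results
      systemctl is-active cron lightdm
  fi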

The new “Reboot during a test” section in README.package-tests explains this in detail with an example.

Implicit test metadata for similar packages

The Debian pkg-perl team recently discussed how to add package tests to the ~ 3,000 Perl packages. For most of these the test metadata looks pretty much the same, so they created a new pkg-perl-autopkgtest package which centralizes the logic. autopkgtest 3.5 now supports an implicit debian/tests/control control file to avoid having to modify several thousand packages with exactly the same file.

An initial run already looked quite promising: 65% of the packages pass their tests. There will now be a few iterations to identify common failures and fix those in pkg-perl-autopkgtest and autopkgtest itself.

There is still some discussion about how implicit test control files go together with the DEP-8 specification, as other runners like sadt do not support them yet. Most probably we’ll declare those packages XS-Testsuite: autopkgtest-pkg-perl instead of the usual autopkgtest.
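
For illustration, such a package would then just carry a line like this in its debian/control source stanza instead of shipping its own debian/tests/control (the package name is hypothetical):

  Source: libfoo-bar-perl
  XS-Testsuite: autopkgtest-pkg-perl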

In the same vein, Debian’s Ruby maintainer (Antonio Terceiro) added implicit test control support for Ruby packages. We haven’t done a mass test run with those yet, but their structure will probably look very similar.

Read more
Nicholas Skaggs

Autopilot Test Runners

In my next post, I will discuss notable autopilot features and talk about how autopilot has matured since it became an independent project.

In the meantime I would be remiss if I didn't also talk about the different test runners commonly used with autopilot tests. In addition to the autopilot binary which can be executed to run the tests, different tools have cropped up to make running tests easier.

autopilot-sandbox-run
This tool ships with autopilot itself and was developed as a way to run autopilot test suites on your desktop in a sane manner. Run the autopilot3-sandbox-run command with --help to see all the options available. By default, the tests will run in an Xvfb server, all completely behind the scenes, with the results being reported to you upon completion. This is a great way to run tests with no interference on your desktop. If you are a visual person like me, you may instead wish to pass -X to enable the test runs to occur in a Xephyr window, allowing you to see what's happening while still retaining control of your mouse and keyboard.

I need this tool!
sudo apt-get install python3-autopilot

I want to run tests on my desktop without losing control of my mouse!
autopilot3-sandbox-run my_testsuite_name

I want to run tests on my desktop without losing control of my mouse, but I still want to see what's happening!
autopilot3-sandbox-run -X my_testsuite_name

Autopkgtest
Autopkgtest was developed as a means to automatically test Debian packages, "as-installed". Recently support was added to also test click packages and to run on phablet devices. Autopkgtest will take care of dependencies, setting up autopilot, and unlocking the device. You can literally plug in a device and wait for the results. You should really check out the README pages, including those on running tests. That said, here's a quick primer on running tests using autopkgtest.

I need this tool!
sudo apt-get install autopkgtest
If you are on trusty, grab and install the utopic deb from here.

I want to run tests for a click package installed on my device!
Awesome. This one is simple. Connect the device and then run:
adt-run --click my.click.name --- ssh -s adb

For example,
adt-run --click com.ubuntu.music --- ssh -s adb

will run the tests for the installed version of the music app on your device. You don't need to do anything else. For the curious, this works by reading the manifest file all click packages have. Read more here.

I want to run the tests I wrote/modified against an installed click package!
For this you need to also pass your local folder containing the tests. You will also want to make sure you installed the new version of the click package if needed.

adt-run my-folder/ --click my.click.name --- ssh -s adb

Autopkgtest can also run in a lxc container, QEMU, a chroot, and other fun targets. In the examples above, I passed --- ssh -s adb as the target, instructing autopkgtest to use ssh and adb and thus run the tests on a connected phablet device. If you want to run autopilot tests on a phablet device, I recommend using autopkgtest as it handles everything for you.

phablet-test-run
This tool is part of the greater phablet-tools package. It was originally developed as an easy way to execute tests on your phablet device. Note however that copying the tests and any dependencies to the phablet device is left to you. The phablet-tools package provides some other useful utilities to help you with this (check out phablet-click-test-setup, for example).

I need this tool!
sudo apt-get install phablet-tools

I want to run the tests I wrote/modified against an installed click package!
First copy the tests to the device. You can use the ubuntu sdk or click-buddy for this, or even do it manually via adb. Then run phablet-test-run. It takes the same arguments as autopilot itself.

phablet-test-run -v my_testsuite

Note the tool looks for the testsuite and any dependencies of the testsuite inside the /home/phablet/autopilot folder. It's up to you to make sure everything that is needed to run your tests is located there, or else the run will fail.
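
For example, a manual copy with adb might look roughly like this (the testsuite name and local path are illustrative):

adb push tests/autopilot/my_testsuite /home/phablet/autopilot/my_testsuite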

other ways
There are of course other possible test runners that wrap around autopilot to make executing tests easier. Perhaps you've written a script yourself. Just remember at the end of the day the autopilot binary will be running the tests. It simply needs to be able to find the testsuite and all of its dependencies in order to run. For this reason, don't be afraid to execute autopilot3 and run the tests yourself. Happy test runs!

Read more
pitti

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) now went away. Now there is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).

The --help output now is a lot easier to read, both due to above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid

$ adt-run libpng @adt_sid

Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]

This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests; this release now also brings this for deb packages.

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI etc. has been documented in the manpages and the various READMEs. Enjoy!

Read more
Nicholas Skaggs

A new test runner approaches

The problem
How acceptance tests are packaged and run has morphed over time. When autopilot was originally conceived the largest user was the unity project and debian packaging was the norm. Now that autopilot has moved well beyond that simple view to support many types of applications running across different form factors, it was time to address the issue of how to run and package these high-level tests.

While helping develop testsuites for the core apps targeting ubuntu touch, it became increasingly difficult for developers to run their application's testsuites. This gave rise to further integration points inside qtcreator, enhancements to click and its manifest files, and tools like the phablet-tools suite and click-buddy. All of these tools operate well within the confines for which they are intended, but none truly meets the needs for test provisioning and execution.

A solution?
With these thoughts in mind I opened the floor for discussion a couple months ago detailing the need for a proper tool that could meet all of my needs, as well as those of the application developer, test author and CI folks. In a nutshell, a workflow to setup a device as well as properly manage dependencies and resolve them was needed.

Autopkg tests all the things
I'm happy to report that as of a couple weeks ago such a tool now exists in autopkgtest. If the name sounds familiar, that's because it is. Autopkgtest already runs all of our automated testing at the archive level. New package uploads are tested utilizing its toolset.

So what does this mean? Utilizing the format laid out by autopkgtest, you can now run your autopilot testsuite on a phablet device in a sane manner. If you have test dependencies, they can be defined and added to the click manifest as specified. If you don't have any test dependencies, then you can run your testsuite today without any modifications to the click manifest.

Yes, but what does this really mean?
This means you can now run a testsuite with adt-run in a similar manner to how debian packages are tested. The runner will setup the device, copy the tests, resolve any dependencies, run them, and report the results back to you.

Some disclaimers
Support for running tests this way is still new. If you do find a bug, please file it!

To use the tool first install autopkgtest. If you are running trusty, the version in the archive is old. For now download the utopic deb file and install it manually. A proper backport still needs to be done.
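
For example, installing the downloaded deb manually might look something like this (the filename will vary):

sudo dpkg -i autopkgtest_*_all.deb
sudo apt-get -f install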

Also as of this writing, I must caution you that you may run into this bug. If the application fails to download dependencies (you see 404 errors during setup), update your device to the latest image and try again. Note, the latest devel image might be too old if a new image hasn't been promoted in a long time.

I want to see it!
Go ahead, give it a whirl with the calendar application (or your favorite core app). Plug in a device, then run the following on your pc.

bzr branch lp:ubuntu-calendar-app
adt-run ubuntu-calendar-app --click=com.ubuntu.calendar --- ssh -s /usr/share/autopkgtest/ssh-setup/adb

Autopkgtest will give you some output along the way about what is happening. The tests will be copied, and since --click= was specified, the runner will use the click from the device, install the click in our temporary environment, and read the click manifest file for dependencies and install those too. Finally, the tests will be executed with the results returned to you.

Feedback please!
Please try running your autopilot testsuites this way and give feedback! Feel free to contact myself, the upstream authors (thanks Martin Pitt for adding support for this!), or simply file a bug. If you run into trouble, utilize the -d and the --shell switches to get more insight into what is happening while running.
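
For example, a debugging run might look roughly like the calendar invocation above with the extra switches added:

adt-run -d ubuntu-calendar-app --click=com.ubuntu.calendar --shell --- ssh -s /usr/share/autopkgtest/ssh-setup/adb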

Read more
Nicholas Skaggs

We're having our first hackfest of the utopic cycle this week on Tuesday, July 15th. You can catch us live in a hangout on ubuntuonair.com starting at 1900 UTC. Everything you need to know can be found on the wiki page for the event.

During the hangout, we'll be demonstrating writing a new manual testcase, as well as reviewing writing automated testcases. We'll be answering any questions you have as well about contributing a testcase.

We need your help to write some new testcases! We're targeting both manual and automated testcases, so everyone is welcome to pitch in.

We are looking at writing and finishing some testcases for ubuntu studio and some other flavors. All you need is some basic tester knowledge and the ability to write in English.

If you know python, we are also going to be hacking on the toolkit helper for autopilot for the ubuntu sdk. That's a mouthful! Specifically it's the helpers that we use for writing autopilot tests against ubuntu-sdk applications. All app developers make use of these helpers, and we need more of them to ensure we have good coverage for all components developers use. 

Don't worry about getting stuck, we'll be around to help, and there's guides to well, guide you!

Hope to see everyone there!

Read more
Nicholas Skaggs

The first testing day of the utopic cycle is coming this week on Thursday, July 10th. You can catch us live in a hangout on ubuntuonair.com starting at 1900 UTC. We'll be demonstrating running and testing the development release of ubuntu, reporting test results, reporting bugs, and doing triage work. We'll also be available to answer your questions and help you get started testing.

Please join us in testing utopic and helping the next release of ubuntu become the best it can be. Hope to see everyone there!

P.S. We have a team calendar that can help you keep track of the release schedule along with this and other events. Check it out!

Read more
pitti

We currently use completely different methods and tools of building test beds and running tests for Debian vs. Click packages, for normal uploads vs. CI airline landings vs. upstream project merge proposal testing, and keep lots of knowledge about Click package test metadata external and not easily accessible/discoverable.

Today I released autopkgtest 3.0 (and 3.0.1 with a few minor updates) which is a major milestone in unifying how we run package tests both locally and in production CI. The goals of this are:

  • Keep all test metadata, such as test dependencies, commands to run the test etc., in the project/package source itself instead of external. We have had that for a long time for Debian packages with DEP-8 and debian/tests/control, but not yet for Ubuntu’s Click packages.
  • Use the same tools for Debian and Click packages to simplify what developers have to know about and to reduce the amount of test infrastructure code to maintain.
  • Use the exact same testbeds and test runners in production CI as developers use locally, so that you can reproduce and investigate failures.
  • Re-use the existing autopkgtest capabilities for using various kinds of testbeds, and conversely, making all new testbed types immediately available to all package formats.
  • Stop putting tests into the Ubuntu archive as packages (such as mediaplayer-app-autopilot). This just adds packaging and archive space overhead, and also makes updating tests a lot harder and slower than it should be.

So, let’s dive into the new features!

New runner: adt-virt-ssh

We want to run tests on real hardware such as a laptop of a particular brand with a particular graphics card, or an Ubuntu phone. We also want to restructure our current CI machinery to run tests on a real OpenStack cloud and gradually get rid of our hand-maintained QA lab with its test machines. While these use cases seem rather different, they both have in common that there is an already existing machine which is pretty much only accessible with ssh. Once you have an ssh connection, they look pretty much the same, you just need different initial setup (like fiddling with adb, calling nova boot, etc.) to prepare them.

So the new adt-virt-ssh runner factorizes all the common bits such as communicating with adt-run, auto-detecting sudo availability, doing SSH connection sharing etc., and delegates the target specific bits to a “setup script”. E. g. we could specify --setup-script ssh-setup-nova or --setup-script ssh-setup-adb which would then get called with open at the appropriate time by adt-run; it calls the nova commands to create a VM, or run a few adb commands to install/start ssh and install the public key. Then autopkgtest does its thing, and eventually calls the script with cleanup again. The actual protocol is a bit more involved (see manpage), but that’s the general idea.

autopkgtest now ships readymade scripts for these two use cases. So you could e. g. run the libpng tests in a temporary cloud VM:

# if you don't have one, create it with "nova keypair-create"
$ nova keypair-list
[...]
| pitti | 9f:31:cf:78:50:4f:42:04:7a:87:d7:2a:75:5e:46:56 |

# find a suitable image
$ nova image-list 
[...]
| ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 | Ubuntu Server 14.04 LTS (amd64 20140607.1) - Partner Image                         | ACTIVE |  

$ nova flavor-list 
[...]
| 100 | standard.xsmall  | 1024      | 10   | 10        |      | 1     | 1.0         | N/A       |

# now run the tests: please be patient, this takes a few mins!
$ adt-run libpng --setup-commands="apt-get update" --- ssh -s /usr/share/autopkgtest/ssh-setup/nova -- \
   -f standard.xsmall -i ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 -k pitti
[...]
adt-run [16:23:16]: test build:  - - - - - - - - - - results - - - - - - - - - -
build                PASS
adt-run: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ tests done.

Please see man adt-virt-ssh for details how to use it and how to write setup scripts. There is also a commented /usr/share/autopkgtest/ssh-setup/SKELETON template for writing your own for your use cases. You can also not use any setup script and just specify user and host name as options, but please remember that the ssh runner cannot clean up after itself, so never use this on important machines which you can’t reset/reinstall!

Test dependency installation without apt/root

Ubuntu phones with system images have a read-only file system where you can’t install test dependencies with apt. A similar case is using the “null” runner without root. When apt-get install is not available, autopkgtest now has a reduced fallback mode: it downloads the required test dependencies, unpacks them into a temporary directory, and runs the tests with $PATH, $PYTHONPATH, $GI_TYPELIB_PATH, etc. pointing to the unpacked temp dir. Of course this only works for packages which are relocatable in that way, i. e. libraries, Python modules, or command line tools; it will totally fail for things which look for config files, plugins etc. in hardcoded directory paths. But it’s good enough for the purposes of Click package testing such as installing autopilot, libautopilot-qt etc.
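
Conceptually, the fallback boils down to something like this (paths are illustrative; the real implementation handles all of this internally):

  $ apt-get download python3-autopilot
  $ dpkg -x python3-autopilot_*.deb /tmp/adt-deps
  $ env PATH=/tmp/adt-deps/usr/bin:$PATH \
        PYTHONPATH=/tmp/adt-deps/usr/lib/python3/dist-packages \
        GI_TYPELIB_PATH=/tmp/adt-deps/usr/lib/x86_64-linux-gnu/girepository-1.0 \
        ./run-my-test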

Click package support

autopkgtest now recognizes click source directories and *.click package arguments, and introduces a new test metadata specification syntax in a click package manifest. This is similar in spirit and capabilities to DEP-8 debian/tests/control, except that it’s using JSON:

    "x-test": {
        "unit": "tests/unittests",
        "smoke": {
            "path": "tests/smoketest",
            "depends": ["shunit2", "moreutils"],
            "restrictions": ["allow-stderr"]
        },
        "another": {
            "command": "echo hello > /tmp/world.txt"
        }
    }

For convenience, there is also some magic to make running autopilot tests particularly simple. E. g. our existing click packages usually specify something like

    "x-test": {
        "autopilot": "ubuntu_calculator_app"
    }

which is enough to “do what I mean”, i. e. implicitly add the autopilot test depends and run autopilot with the specified test module name. You can specify your own dependencies and/or commands, and restrictions etc., of course.

So with this, and the previous support for non-apt test dependencies and the ssh runner, we can put all this together to run the tests for e. g. the Ubuntu calculator app on the phone:

$ bzr branch lp:ubuntu-calculator-app
# built straight from that branch; TODO: where is the official download URL?
$ wget http://people.canonical.com/~pitti/tmp/com.ubuntu.calculator_1.3.283_all.click
$ adt-run ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- \
      ssh -s /usr/share/autopkgtest/ssh-setup/adb
[..]
Traceback (most recent call last):
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 93, in test_divide_with_infinity_length_result_number
    self._assert_result("0.33333333")
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 63, in _assert_result
    self.main_view.get_result, Eventually(Equals(expected_result)))
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 406, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: After 10.0 seconds test failed: '0.33333333' != '0.3'

Ran 33 tests in 295.586s
FAILED (failures=1)

Note that the current adb ssh setup script deals with some things like applying the autopilot click AppArmor hooks and disabling screen dimming, but it does not do the first-time setup (connecting to network, doing the gesture intro) and unlocking the screen. These are still on the TODO list, but I need to find out how to do these properly. Help appreciated!

Click app tests in schroot/containers

But, that’s not the only thing you can do! autopkgtest has all these other runners, so why not try and run them in a schroot or container? To emulate the environment of an Ubuntu Touch session I wrote a --setup-commands script:

adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
    ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- schroot utopic

This will actually work in the sense of running (and succeeding) the autopilot tests, but it will fail due to a lot of libust[11345/11358]: Error: Error opening shm /lttng-ust-wait... warnings on stderr. I don’t know what these mean, just that I also see them on the phone itself occasionally.

I also wrote another setup-commands script which emulates “read-only apt”, so that you can test the “unpack only” fallback. So you could prepare a container with click and the App framework preinstalled (so that it doesn’t always take ages to install them), starting from a standard adt-build-lxc container:

$ sudo lxc-clone -o adt-utopic -n click
$ sudo lxc-start -n click
  # run "sudo apt-get install click ubuntu-sdk-libs ubuntu-app-launch-tools" there
  # then "sudo powerdown"

# current apparmor profile doesn't allow remounting something read-only
$ echo "lxc.aa_profile = unconfined" | sudo tee -a /var/lib/lxc/click/config

Now that container has enough stuff preinstalled to be reasonably fast to set up, and the remaining test dependencies (mostly autopilot) work fine with the unpack/$*_PATH fallback:

$ adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
          --setup-commands /usr/share/autopkgtest/setup-commands/ro-apt \
          ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click \
          --- lxc -es click

This will successfully run all the tests, and provided you have apt-cacher-ng installed, it only takes a few seconds to set up. This might be a nice thing to do on merge proposals, if you don’t have an actual phone at hand, or don’t want to clutter it up.

autopkgtest 3.0.1 will be available in Utopic tomorrow (through autosyncs). If you can’t wait to try it out, download it from my people.c.c page ☺.

Feedback appreciated!

Read more
Nicholas Skaggs

Building click packages should be easy. And to a reasonable extent, qtcreator and click-buddy do make it easy. Things however can get a bit more complicated when you need to build a package that needs to run on an armhf device (you know like your phone!). Since your pc is almost certainly based on x86, you need to use, create or fake an armhf environment for building the package.

So then what options exist for getting a proper build of a project that will install properly on your device?

A phone can be more than a phone
It can also be a development environment!? Although it's not my recommendation, you can always use the device itself to compile the package. The downsides of this are namely speed and storage space. Nevertheless, it will build a click.

  1. shell into your device (adb shell / ssh mydevice)
  2. checkout the code (bzr branch lp:my-project)
  3. install the needed dependencies and sdk (apt-get install ubuntu-sdk)
  4. build with click-buddy (click-buddy --dir .)
Chroot to the rescue
The click tools contain a handy way to build a chroot expressly suited for use with click-buddy to build things. Basically, we can create a nice fake environment and pretend it's armhf, even though we're not running that architecture.

sudo click chroot -a armhf -f ubuntu-sdk-14.04 create
click-buddy --dir . --arch armhf

Most likely your package will require extra dependencies, which for now will need to be specified and passed in with the --extra-deps argument. These arguments are package names, just like you would pass to apt-get. Like so:

click-buddy --dir . --arch armhf --extra-deps "libboost-dev:armhf libssl-dev:armhf"

Notice we specified the arch as well, armhf. If we also add a --maint-mode, our extra installed packages will persist. This is handy if you will only ever be building a single project and don't want to constantly update the base chroot with your build dependencies.

Qtcreator build it for me!
Cmake makes all things possible. Qt Creator can not only build the click for you, it can also hold your hand through creating a chroot1. To create a chroot in qtcreator, do the following:
  1. Open Qt Creator
  2. Navigate to Tools > Options > Ubuntu > Click
  3. Click on Create Click Target
  4. After the click target is finished, add the dependencies needed for building. You can do this by clicking the maintain button.  
  5. Apt-get add what you need or otherwise setup the environment. Once ready, exit the chroot.
Now you can use this chroot for your project
  1. Open qt creator and open the project
  2. Select armhf when prompted
    1. You can also manually add the chroot to the project via Projects > Add kit and then select the UbuntuSDK armhf kit.
  3. Navigate to Projects tab and ensure the UbuntuSDK for armhf kit is selected.
  4. Build!
Rolling your own chroot
So, click can setup a chroot for you, and qt creator can build and manage one too. And these are great options for building one project. However if you find yourself building a plethora of packages or you simply want more control, I recommend setting up and using your own chroot to build. For my own use, I've picked pbuilder, but you can setup the chroot using other tools (like schroot which Qt Creator uses).

sudo apt-get install qemu-user-static ubuntu-dev-tools
pbuilder-dist trusty armhf create
pbuilder-dist trusty armhf login --save-after-login


Then, from inside the chroot shell, install a couple things you will always want available; namely the build tools and bzr/git/etc for grabbing the source you need. Be careful here and don't install too much. We want to maintain an otherwise pristine environment for building our packages. By default changes you make inside the chroot will be wiped. That means those package specific dependencies we'll install each time to build something won't persist.

apt-get install ubuntu-sdk bzr git phablet-tools
exit

By exiting, you'll notice pbuilder will update the base tarball with our changes. Now, when you want to build something, simply do the following:

pbuilder-dist trusty armhf login
bzr branch lp:my-project
apt-get install build-dependencies-you-need

Now, you can build as usual using click tools, so something like

click-buddy --dir .

works as expected. You can even add the --provision to send the resulting click to your device. If you want to grab the resulting click, you'll need to copy it before exiting the chroot, which is mounted on your filesystem under /var/cache/pbuilder/build/. Look for the last line after you issue your login command (pbuilder-dist trusty armhf login). You should see something like, 

File extracted to: /var/cache/pbuilder/build//26213

If you cd to this directory on your local machine, you'll see the environment chroot filesystem. Navigate to your source directory and grab a copy of the resulting click. Copy it to a safe place (somewhere outside of the chroot) before exiting the chroot or you will lose your build! 
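
For example, from a second terminal on the host (using the build number from above, and assuming you branched the project into root's home directory inside the chroot):

cp /var/cache/pbuilder/build/26213/root/my-project/*.click ~/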

But wait, there's more!
Since you have access to the chroot while it's open (and you can login several times if you wish to create several sessions from the base tarball), you can iteratively build packages as needed, hack on code, etc. The chroot is your playground.

Remember, click is your friend. Happy hacking!

1. Thanks to David Planella for this info

Read more
pitti

Hot on the heels of my previous announcement of my systemd PPA for trusty, I’m now happy to announce that the latest systemd 204-10ubuntu1 just landed in Utopic, after sorting out enough of the current uninstallability in -proposed. The other fixes (bluez, resolvconf, lightdm, etc.) already landed a few days ago. Compared to the PPA these have a lot of other fixes and cleanups, due to the excellent hackfest that we held last weekend.

So, upgrade today and let us know about problems in bugs tagged “systemd-boot”.

I think systemd in current utopic works well enough to not break a developer’s day to day workflow, so we can now start parallelizing the work of identifying packages which only have upstart jobs and providing corresponding systemd units (or SysV scripts). Also, this hasn’t yet been tested on the phone at all; I’m sure that it’ll require quite some work (e. g. lxc-android-config has a lot of upstart jobs). To clarify, there is no fixed date/plan/deadline when this will be done, in particular it might well last more than one release cycle. So we’ll “release” (i. e. switch to it as a default) when it’s ready :-)

Read more
pitti

On the last UDS we talked about migrating from upstart to systemd to boot Ubuntu, after Mark announced that Ubuntu will follow Debian in that regard. There’s a lot of work to do, but it parallelizes well once developers can run systemd on their workstations or in VMs easily and the system boots up enough to still be able to work with it.

So today I merged our systemd package with Debian again, dropped the systemd-services split (which wasn’t accepted by Debian and will be unnecessary now), and put it into my systemd PPA. Quite surprisingly, this booted a fresh 14.04 VM pretty much right away (of course there’s no Plymouth prettiness). The main two things which were missing were NetworkManager and lightdm, as these don’t have an init.d script at all (NM) or it isn’t enabled (lightdm). Thus the PPA also contains updated packages for these two which provide a proper systemd unit. With that, the desktop is pretty much fully working, except for some details like cron not running. I didn’t go through /etc/init/*.conf with a small comb yet to check which upstart jobs need to be ported, that’s now part of the TODO list.

So, if you want to help with that, or just test and tell us what’s wrong, take the plunge. In a 14.04 VM (or real machine if you feel adventurous), do

  sudo add-apt-repository ppa:pitti/systemd
  sudo apt-get update
  sudo apt-get dist-upgrade

This will replace systemd-services with systemd, update network-manager and lightdm, and a few libraries. Up to now, when you reboot you’ll still get good old upstart. To actually boot with systemd, press Shift during boot to get the grub menu, edit the Ubuntu stanza, and append this to the linux line: init=/lib/systemd/systemd.

For the record, if pressing shift doesn’t work for you (too fast, VM, or similar), enable the grub menu with

  sudo sed -i '/GRUB_HIDDEN_TIMEOUT/ s/^/#/' /etc/default/grub
  sudo update-grub

Once you are satisfied that your system boots well enough, you can make this permanent by adding the init= option to /etc/default/grub (and possibly removing the comment sign from the GRUB_HIDDEN_TIMEOUT lines) and running sudo update-grub again. To go back to upstart, just edit the file again, remove the init= option, and run sudo update-grub again.
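
For example, the relevant line in /etc/default/grub might end up looking roughly like this (then run sudo update-grub):

  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash init=/lib/systemd/systemd"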

I’ll be on the Debian systemd/GNOME sprint next weekend, so I feel reasonably well prepared now. :-)

Update: As the comments pointed out, this bricked /etc/resolv.conf. I now uploaded a resolvconf package to the PPA which provides the missing unit (counterpart to the /etc/init/resolvconf.conf upstart job) and this now works fine. If you are in that situation, please boot with upstart, and do the following to clean up:

  sudo rm /etc/resolv.conf
  sudo ln -s ../run/resolvconf/resolv.conf /etc/resolv.conf

Then you can boot back to systemd.

Update 2: If you want to help testing, please file bugs with a systemd-boot tag. See the list of known bugs when booting with systemd.

Read more
Nicholas Skaggs

As promised, here is your reminder that we are indeed fast approaching the final image for trusty. It's release week, which means it's time to put your energy and focus into finding and getting the remaining bugs documented or fixed in time for the release.

We need you!
The images are a culmination of effort from everyone. I know many have already tested and installed trusty and reported any issues encountered. Thank you! If you haven't yet tested, we need to hear from you!

How to help
The final milestone and images are ready; click here to have a look.

Execute the testcases for ubuntu and your favorite flavor images. Install or upgrade your machine and keep on the lookout for any issues you might find, however small.

I need a guide!
Sound scary? It's simpler than you might think. Check out the guide and other links at the top of the tracker for help.

I got stuck!
Help is a simple email away, or for real-time help try #ubuntu-quality on freenode. Here are all the ways of getting ahold of the quality team who would love to help you.

Community
Plan to help test and verify the images for trusty and take part in making ubuntu! You'll join a community of people who do their best every day to ensure ubuntu is an amazing experience. Here's saying thanks, from me and everyone else in the community, for your efforts. Happy testing!

Read more
Nicholas Skaggs

Time to test trusty!

Say that three times fast. Time to test trusty,
time to test trusty, time to test trusty!
Ahh, it's my favorite time of the cycle. This is the part where we all get serious, go a little bit crazy, and end up super excited to release a new version of ubuntu into the world. This time it's even more special as the new version is a brand new LTS, which we look forward to supporting for the next 5 years.


The developers and early adopters have been working hard all cycle to put forth the best version of ubuntu to date. For you! For all of us! It's time to fix bugs, do last minute polish and prepare for the release candidate which will occur around April 11th.

We need you!
This is where you, dear reader, come in. You see, despite their good looks, wonderful sense of humor, and charm, the release team doesn't like to release final images of ubuntu that haven't been thoroughly tested.

The release team is ready to pounce on untested images
We need testing, and further, we need the results of that testing! We need to hear from you. Passing test results matter just as much as failures. The way to record these results is via the isotracker; we can't read your mind sadly!

How to help
Mark your calendars now for April 11th - April 16th. Pick a good date for you and plan to download and test the release candidate image. You'll see a new milestone on the tracker, and an announcement here as well when the image is ready. I won't let you forget, promise!

Execute the testcases for ubuntu and your favorite flavor images. Install or upgrade your machine and keep on the lookout for any issues you might find, however small.

I need a guide!
Sound scary? It's simpler than you might think. Check out the guide and other links at the top of the tracker for help.

I got stuck!
Help is a simple email away, or for real-time help try #ubuntu-quality on freenode. Here are all the ways of getting ahold of the quality team, who would love to help.

Community
Plan to help test and verify the images for trusty and take part in making ubuntu! You'll join a community of people who do their best every day to ensure ubuntu is an amazing experience. Here's saying thanks, from me and everyone else in the community, for your efforts. Happy testing!

Read more
David Murphy (schwuk)

Today I was adding tox and Travis-CI support to a Django project, and I ran into a problem: our project doesn’t have a setup.py. Of course I could have added one, but since by convention we don’t package our Django projects (Django applications are a different story) – instead we use virtualenv and pip requirements files – I wanted to see if I could make tox work without changing our project.

Turns out it is quite easy: just add the following three directives to your tox.ini.

In your [tox] section tell tox not to run setup.py:

skipsdist = True

In your [testenv] section make tox install your requirements (see here for more details):

deps = -r{toxinidir}/dev-requirements.txt

Finally, also in your [testenv] section, tell tox how to run your tests:

commands = python manage.py test

Now you can run tox, and your tests should run!

For reference, here is the complete (albeit minimal) tox.ini file I used:

[tox]
envlist = py27
skipsdist = True

[testenv]
deps = -r{toxinidir}/dev-requirements.txt
setenv =
    PYTHONPATH = {toxinidir}:{toxinidir}
commands = python manage.py test

Read more
pitti

Our current autopkgtest machinery uses Jenkins (a private and a public one) and lots of “rsync state files between hosts”, both of which have reached a state where they fall over far too often. It’s flakey, hard to maintain, and hard to extend with new test execution slaves (e. g. for new architectures, or using different test runners). So I’m looking into what it would take to replace this with something robust, modern, and more lightweight.

In our new Continuous Integration world the preferred technologies are RabbitMQ for doing the job distribution (which is delightfully simple to install and use from Python), and OpenStack’s swift for distributed data storage. We have a properly configured swift in our data center, but for local development and experimentation I really just want a dead simple throw-away VM or container which gives me the swift API. swift is quite a bit more complex, and it took me several hours of reading and exercising various tutorials, debugging connection problems, and reading stackexchange to set it up. But now it’s working, and I condensed the whole setup into a single setup-swift.sh shell script.

You can run this in a standard ubuntu container or VM as root:

sudo apt-get install lxc
sudo lxc-create -n swift -t ubuntu -- -r trusty
sudo lxc-start -n swift
# log in as ubuntu/ubuntu, and wget or scp setup-swift.sh
sudo ./setup-swift.sh

Then get swift’s IP from sudo lxc-ls --fancy, install the swift client locally, and talk to it:

$ sudo apt-get install python-swiftclient
$ swift -A http://10.0.3.134:8080/auth/v1.0 -U testproj:testuser -K testpwd stat

Caveat: Don’t use this for any production machine! It’s configured to maximum insecurity, with static passwords and everything.

I realize this is just poor man’s juju, but juju-local is currently not working for me (I only just analyzed that). There is a charm for swift as well, but I haven’t tried that yet. In any case, it’s dead simple now, and maybe useful for someone else.

Read more
Nicholas Skaggs

Continuing our discussion of the testing within ubuntu, today's post will talk about how you can help ubuntu stay healthy by manually testing the images produced. No amount of robots or automated testing in the world can replace you (well, at least not yet, heh), and more specifically your workflow and usage patterns.

As discussed, every day new images are produced for ubuntu for all supported architecture types. I would encourage you to follow along and watch the progression of the OS through these images and your testing. Every data point matters and testing on a regular basis is helpful. So how to get started?

Settle in with a nice cup of tea while testing!

The Desktop
For the desktop images everything you need is on the image tracker. There is a wonderful video and text tutorial for helping you get started. You report your results on the tracker itself in a simple web form, so you'll need a launchpad account if you don't have one.

The secondary way to help keep the desktop images in good shape is to install and run the development version of ubuntu on your machine. Each day you can check for updates and update your machine to stay in sync. Use your pc as you normally would, and when you find a bug, report it! Bugs found before the release are much easier to fix than after release.
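
For example, staying in sync is as simple as running the usual update commands each day:

sudo apt-get update
sudo apt-get dist-upgrade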

Phablet
Now for the phablet images you will need a device capable of running the image. Check out the list. Grab your device and go through the installation process as described on the wiki. Make sure to select the '-proposed' channel when you install so you will see updates to get the latest images being worked on every time they are built. From there you can update everyday. Use the device and when you find a bug, report it! Here's a wiki page to help guide your testing and help you understand how and where to report bugs.

Don't forget there's a whole team of people within ubuntu dedicated to testing just like you. And they would love to have you join them!

Read more
Nicholas Skaggs

Recently the ubuntu core app developers and myself have been on an adventure towards adopting cmake for all the core applications. While some of the applications are pure qml, it's still been useful to embark on adopting a singular build system for all of the projects. Now that (most of) the pain of transitioning is gone, I'm going to talk about one of the useful features of setting up cmake for your project; click-buddy!

Click-buddy is an evolving tool that helps you build and deploy click packages to your phablet device. In addition, it has the ability to setup the device to run your autopilot test suite. So, rather than writing anything further, let's cover an example. You are going to need phablet-tools installed for this to work. I'm going to branch the clock app, build the click package, install it on my device, and finally run the tests.

bzr branch lp:ubuntu-clock-app
cd ubuntu-clock-app
click-buddy --dir . --provision
phablet-test-run -v ubuntu_clock_app

Click-buddy is also gaining the ability to build your project, even if it involves a plugin and you are interested in building for your phablet device (armhf). Once landed, you will be able to run something like this for non-qml applications.

sudo click chroot -a armhf create
click-buddy --arch armhf --provision

This will setup a chroot automagically for you and compile and build your application. Give it a try!

Note, as of this writing, emulator support for the ubuntu-ui-toolkit emulator is not yet built in. If your tests fail with a module import error, run this line from your connected pc. It will copy over the ubuntu-ui-toolkit emulator (provided you have it installed on your pc :-) ) and your tests should then run properly.

adb push /usr/lib/python2.7/dist-packages/ubuntuuitoolkit /home/phablet/autopilot/ubuntuuitoolkit

Read more
pitti

Today’s autopilot release provides a new feature for test case writers. Unless the widget you want to test has a direct object name (GtkBuilder ID/Qt objectName), it is often not that easy to find a widget in a deeply nested hierarchy in autopilot vis.

With the new version, if you have some parent widget (like the containing dialog) w in your test, you can now call w.print_tree() to dump the paths and properties of that widget and all its children to stdout. That’s easy enough to grep, so provides a “poor man’s full tree search”. You can also specify a different output sink, like a file object or a file name: w.print_tree('/tmp/dump.txt').

This is a first step towards making it easier to find widgets and properties you are interested in. Arguably this is mostly just a crutch, but I found it to be rather effective. Before this feature I often wrote little snippets like in LP#1241312; now this becomes much easier. A better solution for this would certainly be a “full tree search” in vis itself, but that’s not that easy to implement. It is on the roadmap for this cycle, though.

I am also currently working on a real-time property change monitor for autopilot-gtk, which may also help in some cases. Unfortunately we cannot build such a thing for autopilot-qt, as due to the nature of Qt object properties, changes of them cannot be monitored.

Read more
Anthony Dillon

I was recently asked to attend a cloud sprint in San Francisco as a front-end developer for the new Juju GUI product. I had the pleasure of finally meeting the guys that I have collaboratively worked with and ultimately been helped by on the project.

Here is a collection of things I learnt during my week overseas.

Mocha testing

Mocha is a JavaScript test framework that tests asynchronously in a browser. Previously I found it difficult to imagine a use case when developing a site, but I now know that any interactive element of a site could benefit from Mocha testing.

This is by no means a full tutorial or feature set of Mocha, but rather my findings from a week with the UI engineering team.

Break down small elements of your app or website into logic tests

If you take a system like a user’s login and registration, it is much easier to test each function of the system. For example, if the user hits the signup button you should test that the registration form is then visible to the user. Then work methodically through each step of the process, testing as many different inputs as you can think of.

Saving your bacon

Testing undoubtedly slows down initial development but catches a lot of mistakes and flaws in the system before anything lands in the main code base. It also means if a test fails you don’t have to manually check each test again by hand — you simply run the test suite and see the ticks roll in.

Speeds up bug squashing

Bug fixing becomes easier for the reporter and the developer. If the reporter submits a test that fails due to a bug, the developer will get the full scope of the issue, and once the test passes the developer and reporter can be confident the problem no longer exists.

Linting

I have read a lot about linting in the past but have not needed to use it on any projects I have worked on to date. So I was very happy to use and be taught the linting performed by the UI engineering team.

Enforces a standard coding syntax

I was very impressed with the level of code standards it enforces. It requires all code to be written in a certain way, from indenting and commenting to unused variables. This results in anyone using the code being able to pick it up and read it as if it were created by one person, when in fact it may have been contributed to by many.

Code reviews

In my opinion code reviews should be performed on all front-end work to discourage sloppy code and encourage shared knowledge.

Mark up

Mark up should be very semantic. This can be a matter of opinion, but shared discussion will get the team to an agreed solution, which will then be reused by others in similar situations.

CSS

CSS can be difficult as there are different ways to achieve a similar result, but with a code review the style used will be common practice within the team.

JavaScript

A perfect candidate, as different people have different methods of coding. A review will catch any sloppy code or shortcuts, and makes sure your code is refactored to best practice the first time.

Conclusion

Test driven development (TDD) does slow the development process down, but it enforces better output from the time you spend on the code and fewer bugs in the future.

If someone writes a failing test for your code which is expected to pass, working on the code to produce a passing test is a much easier way to demonstrate the code now works, along with all the other tests for that function.

I truly believe in code reviews now. Previously I was sceptical about them. I used to think that  “because my code is working” I didn’t need reviews and it would slow me down. But a good reviewer will catch things like “it works but didn’t you take a shortcut two classes ago which you meant to go back and refactor”. We all want our code to be perfect and to learn from others on a daily basis. That is what code reviews give us.

Read more