Canonical Voices


pitti

What is this?

umockdev is a set of tools and a library to mock hardware devices for programs that handle Linux hardware devices. It also provides tools to record the properties and behaviour of particular devices, and to run a program or test suite under a test bed with the previously recorded devices loaded.

This allows developers of software like gphoto or libmtp to receive these records in bug reports and recreate the problem on their own systems without having access to the affected hardware, as well as to write regression tests that do not need any particular privileges and thus can run in a standard make check.

After working on it for several weeks and lots of rumbling on G+, it’s now useful and documented enough for the first release 0.1!

Component overview

umockdev consists of the following parts:

  • The umockdev-record program generates text dumps (conventionally called *.umockdev) of specified devices, or of all of the system’s devices, with their sysfs attributes and udev properties. It can also record the ioctls that a particular program sends to and receives from a device, and store them in a text file (conventionally called *.ioctl).
  • The libumockdev library provides the UMockdevTestbed GObject class which builds sysfs and /dev testbeds, provides API to generate devices, attributes, properties, and uevents on the fly, and can load *.umockdev and *.ioctl records into them. It provides VAPI and GI bindings, so you can use it from C, Vala, and any programming language that supports introspection. This is the API that you should use for writing regression tests. You can find the API documentation in docs/reference in the source directory.
  • The libumockdev-preload library intercepts access to /sys, /dev/, the kernel’s netlink socket (for uevents) and ioctl() and re-routes them into the sandbox built by libumockdev. You don’t interface with this library directly, instead you need to run your test suite or other program that uses libumockdev through the umockdev-wrapper program.
  • The umockdev-run program builds a sandbox using libumockdev, can load *.umockdev and *.ioctl files into it, and run a program in that sandbox. I. e. it is a CLI interface to libumockdev, which is useful in the “debug a failure with a particular device” use case if you get the text dumps from a bug report. This automatically takes care of using the preload library, i. e. you don’t need umockdev-wrapper with this. You cannot use this program if you need to simulate uevents or change attributes/properties on the fly; for those you need to use libumockdev directly.

Example: Record and replay PtP/MTP USB devices

So how do you use umockdev? For the “debug a problem” use case you usually don’t want to write a program that uses libumockdev, but just use the command line tools. Let’s capture some runs from libmtp tools, and replay them in a mock environment:

  • Connect your digital camera, mobile phone, or other device which supports PtP or MTP, and locate it in lsusb. For example
      Bus 001 Device 012: ID 0fce:0166 Sony Ericsson Xperia Mini Pro
  • Dump the sysfs device and udev properties:
      $ umockdev-record /dev/bus/usb/001/012 > mobile.umockdev
  • Now record the dynamic behaviour (i. e. usbfs ioctls) of various operations. You can store multiple different operations in the same file, which will share the common communication between them. For example:
      $ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-detect
      $ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-emptyfolders
  • Now you can disconnect your device, and run the same operations in a mocked testbed. Please note that /dev/bus/usb/001/012 merely echoes what is in mobile.umockdev and it is independent of what is actually in the real /dev directory. You can rename that device in the generated *.umockdev files and on the command line.
      $ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-detect
      $ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-emptyfolders

Example: using the library to fake a battery

If you want to write regression tests, it’s usually more flexible to use the library instead of calling everything through umockdev-run. As a simple example, let’s pretend we want to write tests for upower.

Batteries, and power supplies in general, are simple devices in the sense that userspace programs such as upower only communicate with them through sysfs and uevents. No /dev nor ioctls are necessary. docs/examples/ has two example programs that show how to use libumockdev to create a fake battery device, change it to low charge, send a uevent, and run upower on a local test system D-BUS in the testbed, watching what happens with upower --monitor-detail. battery.c shows how to do that with plain GObject in C; battery.py is the equivalent program in Python, using the GI binding. You can run the latter like this:

  umockdev-wrapper python3 docs/examples/battery.py

and you will see that upowerd (which runs on a temporary local system D-BUS in the test bed) will report a single battery with 75% charge, which gets down to 2.5% a second later.

The gist of it is that you create a test bed with

  UMockdevTestbed *testbed = umockdev_testbed_new ();

and add a device with certain sysfs attributes and udev properties with

    gchar *sys_bat = umockdev_testbed_add_device (
            testbed, "power_supply", "fakeBAT0", NULL,
            /* attributes */
            "type", "Battery",
            "present", "1",
            "status", "Discharging",
            "energy_full", "60000000",
            "energy_full_design", "80000000",
            "energy_now", "48000000",
            "voltage_now", "12000000",
            NULL,
            /* properties */
            "POWER_SUPPLY_ONLINE", "1",
            NULL);

You can then e. g. change an attribute and synthesize a “change” uevent with

  umockdev_testbed_set_attribute (testbed, sys_bat, "energy_now", "1500000");
  umockdev_testbed_uevent (testbed, sys_bat, "change");

With Python or other introspected languages, or in Vala it works the same way, except that it looks a bit leaner due to “proper” object semantics.
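
For reference, the Python variant looks essentially the same; here is a minimal sketch along the lines of docs/examples/battery.py (assuming the introspected namespace is called UMockdev; run it through umockdev-wrapper as above):

  from gi.repository import UMockdev

  # build a sandboxed sysfs with a fake battery, mirroring the C calls above
  testbed = UMockdev.Testbed.new()
  sys_bat = testbed.add_device('power_supply', 'fakeBAT0', None,
      ['type', 'Battery',
       'present', '1',
       'status', 'Discharging',
       'energy_full', '60000000',
       'energy_full_design', '80000000',
       'energy_now', '48000000',
       'voltage_now', '12000000'],
      ['POWER_SUPPLY_ONLINE', '1'])

  # drop the charge and notify udev listeners
  testbed.set_attribute(sys_bat, 'energy_now', '1500000')
  testbed.uevent(sys_bat, 'change')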

Packages

I have a packaging branch for Ubuntu and a recipe to do daily builds with the latest upstream code into my daily builds PPA (for 12.10 and raring). I will soon upload it to Raring proper, too.

What’s next?

The current set of features should already get you quite far for a range of devices. I’d love to get feedback from you if you use this for anything useful, in particular how to improve the API, the command line tools, or the text dump format. I’m not really happy with the split between umockdev (sys/dev) and ioctl files and the relatively complicated CLI syntax of umockdev-record, so any suggestion is welcome.

One use case that I have for myself is to extend the coverage of ioctls for input devices such as touch screens and wacom tablets, so that we can write some tests for gnome-settings-daemon plugins.

I also want to find a way to pass ioctls back to the test suite/calling program instead of having to handle them all in the preload library, which would make it a lot more flexible. However, due to the nature of the ioctl ABI this is not easy.

Where to go to

The code is hosted in the umockdev project on GitHub; this started out as a systemd branch to add this functionality to libudev, but after a discussion with Kay we decided to keep it separate. I kept it in git anyway, given how popular it is today. For the bzr lovers, Launchpad has an import at lp:umockdev.

Release tarballs will be on Launchpad as well. Please file bugs and enhancement requests in the GitHub tracker.

Finally, if you have questions or want to discuss something, you can always find me on IRC (pitti on Freenode or GNOME).

Thanks for your attention and happy testing!

pitti

When writing system integration tests it often happens that I want to mount some tmpfses over directories like /etc/postgresql/ or /home, and run the whole script with an unshared mount namespace so that (1) it does not interfere with the real system, and (2) is guaranteed to clean up after itself (unmounting etc.) after it ends in any possible way (including SIGKILL, which breaks usual cleanup methods like “trap”, “finally”, “def tearDown()”, “atexit()” and so on).

In gvfs’ and postgresql-common’s tests, which both have been around for a while, I prepare a set of shell commands in a variable and pipe that into unshare -m sh, but that has some major problems: It doesn’t scale well to large programs, looks rather ugly, breaks syntax highlighting in editors, and it destroys the real stdin, so you cannot e. g. call a “bash -i” in your test for interactively debugging a failed test.

I just changed postgresql-common’s test runner to use unshare/tmpfses as well, and needed a better approach. What I eventually figured out preserves stdin, $0, and $@, and still looks like a normal script (i. e. not just a single big string). It still looks a bit hackish, but I can live with that:

#!/bin/sh
set -e
# call ourselves through unshare in a way that keeps normal stdin, $0, and CLI args
unshare -uim sh -- -c "`tail -n +7 $0`" "$0" "$@"
exit $?

# unshared program starts here
set -e
echo "args: $@"
echo "mounting tmpfs"
mount -n -t tmpfs tmpfs /etc
grep /etc /proc/mounts
echo "done"

As Unix/Linux’ shebang parsing is rather limited, I didn’t find a way to do something like

#!/usr/bin/env unshare -m sh

If anyone knows a trick which avoids the “tail -n +7” hack and having to pay attention to passing around “$@”, I’d appreciate a comment on how to simplify this.
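
For Python-based test runners, one direction that might work is to avoid the line counting altogether by having the script re-execute itself under unshare exactly once, guarded by an environment variable. A rough sketch of mine (the _UNSHARED guard name is arbitrary):

#!/usr/bin/python3
import os
import sys

# re-exec ourselves under unshare once; the guard variable prevents a loop
if os.environ.get('_UNSHARED') != '1':
    os.environ['_UNSHARED'] = '1'
    os.execvp('unshare', ['unshare', '-uim', sys.executable] + sys.argv)

# unshared program starts here; stdin, argv, etc. are unchanged
print('args:', sys.argv[1:])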


Upstart in Debian

Good news, everyone!

So as of last Sunday, this works on all Linux archs in Debian unstable and gives you a modern version of upstart:

echo 'Yes, do as I say!' | apt-get -o DPkg::options=--force-remove-essential -y --force-yes install upstart

Thanks to the ifupdown, sysvinit, and udev maintainers for their cooperation in getting upstart support in place; to the Debian release team for accommodating the late changes needed for upstart to be supported in wheezy; and to Scott for his past maintenance of upstart in Debian.

Benchmarking

One of the consequences is that it's now possible to do meaningful head-to-head comparisons of boot speed between sysvinit (with startpar), upstart, and systemd. At one time or another people have tested systemd vs. sysvinit when using bash as /bin/sh, and upstart vs. sysvinit, and systemd vs. sysvinit+startpar, and there are plenty of bootcharts floating around showing results of one init system or another on one distro or another, but I'm not aware of anyone having done a real, fair comparison of the three solutions, changing nothing but the init system.

I've done some initial comparisons in a barebones sid VM, and the results are definitely interesting. Sysvinit with startpar (the default in Debian) can boot a stock sid install, with no added services, in somewhere between 3.37 and 3.42 seconds (three runs). That's not a whole lot, but on the other hand this is a system with a single filesystem and no interesting services yet. Is this really as fast as we can boot?

No, even this minimal system can boot faster. Testing with upstart shows that upstart can do the same job in between 3.03 and 3.19 seconds (n=3, mean=3.09). This confirms what we'd already seen in Ubuntu, that it makes a difference to boot speed to have filesystem mounting handled by an integrated process that understands the whole system instead of as a group of serialized shell scripts.

What about systemd? The same test gives a boot time between 2.32 and 2.85 seconds (n=4, mean=2.48). Interesting; what would make systemd faster than upstart in sid? Well, a quick look at the system shows one possible contributing factor: the rsyslog package in Debian has a systemd unit file, but not an upstart job file. Dropping in the /etc/init/rsyslog.conf from Ubuntu has a noticeable impact, and brings the upstart boot time down nearer to that of systemd (2.78-3.03s, n=5, mean=2.92). Besides telling me that it's time to start spamming Debian maintainers with wishlist bugs asking them to include upstart jobs in their packages, this suggests that the remaining difference in boot time may be due to the outstanding init scripts in rcS.d that are made built-in no-ops by systemd but not (yet) by upstart in Debian (e.g., hwclock, hostname, udev-mtab). (In Ubuntu, /etc/rcS.d has long since been emptied out in favor of upstart jobs in the common case, since the time it takes to get to runlevel=2 is definitely a major issue for boot speed and boot parallelization.)

It also gives the lie to the claim that's been made in various places that spawning shells is a major bottleneck for upstart vs. systemd. More study is certainly needed to confirm this, but at least this naive first test suggests that in spite of the purported benefits of hard-coding boot-time policies in C code, upstart with its default degree of runtime configurability is at least in the ballpark of systemd. Indeed, when OpenSuSE switched from upstart to systemd, it seems that something else in the stack managed to nullify any benefit from improving the boot-time performance of apparmor. Contrary to what some would have you believe, systemd is not some kind of silver bullet for boot speed. Upstart, with its boot-time flexibility and its long history of real-world testing in Ubuntu, is a formidable competitor to systemd in the boot speed department - and a solid solution to the many longstanding boot-time ordering bugs in Debian, which still affect users of sysvinit.

I've published the bootcharts for the above tests here. Between the fact that Debian's bootchart package logs by default to /var/log/bootchart.tgz (thus overwriting on every boot) and the fact that these tests are in a VM, I haven't bothered to include the raw data, just the bootcharts themselves. The interested reader can probably generate more interesting boot charts of their own anyway - in particular, it will be interesting to see how the different init systems perform with more complicated filesystem layouts, or when booting a less trivial set of services.

Other musings

The boot charts have been created with the bootchart package rather than with bootchart2. For one thing, it turns out that bootchart2 includes systemd units, not init scripts; so when replacing bootchart with bootchart2, the non-matching init script is left behind and systemd in particular gets terribly confused. This is now reported as Bug #694403.

In an amusing twist, while I was experimenting with bootchart2, I also noticed that having systemd installed would slow down booting with other init systems, because systemd installs udev rules which take a noticeable amount of time to run a helper command at boot even though the helper should be a no-op. So if you're doing boot speed testing of other init systems, be sure you don't have systemd on the system at the time!

Barry Warsaw

UDS Update #1 - OAuth

For UDS-R for Raring (i.e. Ubuntu 13.04) in Copenhagen, I sponsored three blueprints.  These blueprints represent most of the work I will be doing for the next 6 months, as we're well on our way to the next LTS, Ubuntu 14.04.

I'll provide some updates to the other blueprints later, but for now, I want to talk about OAuth and Python 3.  OAuth is a protocol which allows you to programmatically interact with certain website APIs, in an authenticated manner, without having to provide your website password.  Essentially, it allows you to generate an authorization token which you can use instead, and it allows you to manage and share these tokens with applications, so that you can revoke them if you want, or decide how and which applications to trust to act on your behalf.

A good example of a site that uses OAuth is Launchpad, but many other sites also support OAuth, such as Twitter and Facebook.

There are actually two versions of OAuth out there.  OAuth version 1 is definitely the more prevalent, since it has been around for years, is relatively simple (at least on the client side), and is enshrined in RFC 5849.  There are tons of libraries available that support OAuth v1, in a multitude of languages, with Python being no exception.

OAuth v2 is much less common, since it is currently only a draft specification, and has had its share of design-by-committee controversy.  Still, some sites such as Facebook do require OAuth v2.

One of the very earliest Python libraries to support OAuth v1, on both the client and server side, was python-oauth (I'll use the Debian package names in this post), and on the Ubuntu desktop, you'll find lots of scripts and libraries that use python-oauth.  There are major problems with this library though, and I highly recommend not using it.  The biggest problems are that the code has been abandoned by its upstream maintainer (it hasn't been updated on PyPI since 2009), and it is not Python 3 compatible.  Because the OAuth v2 draft came after this library was abandoned, it provides no support for the successor specification.

For this reason, one of the blueprints I sponsored was specifically to survey the alternatives available for Python programmers, and make a decision about which one we would officially endorse for Ubuntu.  By "official endorsement" I mean promote the library to other Python programmers (hence this post!) and to port all of our desktop scripts from python-oauth to the agreed upon library.

After some discussion, the attendees of the UDS session (both in person and remote) unanimously chose python-oauthlib as our preferred library.

python-oauthlib has a lot going for it.  It's Python 3 compatible, has an active upstream maintainer, supports both RFC 5849 for v1, and closely follows the draft for v2.  It's a well-tested, solid library, and it is available in Ubuntu for both Python 2 and Python 3.  Probably the only negative is that the library does not provide any support for the server side.  This is not a major problem for our immediate plans, since there aren't any server applications on the Ubuntu desktop requiring OAuth.  Eventually, yes, we'll need server side support, but we can punt on that recommendation for now.

Another cool thing about python-oauthlib is that it has been adopted by the python-requests library, meaning, if you want to use a modern replacement for the urllib2/httplib2 circus which supports OAuth out of the box, you can just use python-requests, provide the appropriate parameters, and you get request signing for free.
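
To give an idea, signing a request then looks roughly like this (a sketch using the OAuth1 helper as currently shipped in the companion requests-oauthlib package; the URL and credentials are made up):

import requests
from requests_oauthlib import OAuth1

# hypothetical credentials, named in the RFC 5849 terminology
auth = OAuth1(client_key='consumer-key',
              client_secret='consumer-secret',
              resource_owner_key='token-key',
              resource_owner_secret='token-secret')
response = requests.get('https://api.example.com/whoami', auth=auth)
print(response.status_code)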

So, as you'll see from the blueprint, there are several bugs linked to packages which need porting to python-oauthlib for Ubuntu 13.04, and I am actively working on them, though contributions, as always, are welcome!  I thought I'd include a little bit of code to show you how you might port from python-oauth to python-oauthlib.  We'll stick with OAuth v1 in this discussion.

The first thing to recognize is that python-oauth uses different, older terminology that predates the RFC.  Thus, you'll see references to a token key and token secret, as well as a consumer key and consumer secret.  In the RFC, and in python-oauthlib, these terms are the resource owner key, resource owner secret, client key, and client secret respectively.  After you get over that hump, the rest pretty much falls into place.  As an example, here is a code snippet from the piston-mini-client library which used the old python-oauth library:

class OAuthAuthorizer(object):
    """Authenticate to OAuth protected APIs."""
    def __init__(self, token_key, token_secret, consumer_key, consumer_secret,
                 oauth_realm="OAuth"):
        """Initialize a ``OAuthAuthorizer``.

        ``token_key``, ``token_secret``, ``consumer_key`` and
        ``consumer_secret`` are required for signing OAuth requests.  The
        ``oauth_realm`` to use is optional.
        """
        self.token_key = token_key
        self.token_secret = token_secret
        self.consumer_key = consumer_key
        self.consumer_secret = consumer_secret
        self.oauth_realm = oauth_realm

    def sign_request(self, url, method, body, headers):
        """Sign a request with OAuth credentials."""
        # Import oauth here so that you don't need it if you're not going
        # to use it.  Plan B: move this out into a separate oauth module.
        from oauth.oauth import (OAuthRequest, OAuthConsumer, OAuthToken,
                                 OAuthSignatureMethod_PLAINTEXT)
        consumer = OAuthConsumer(self.consumer_key, self.consumer_secret)
        token = OAuthToken(self.token_key, self.token_secret)
        oauth_request = OAuthRequest.from_consumer_and_token(
            consumer, token, http_url=url)
        oauth_request.sign_request(OAuthSignatureMethod_PLAINTEXT(),
                                   consumer, token)
        headers.update(oauth_request.to_header(self.oauth_realm))


The constructor is pretty simple, and it uses the old OAuth terminology.  The key thing to notice is the way the old API required you to create a consumer, a token, and then a request object, then ask the request object to sign the request.  On top of all the other disadvantages, this isn't a very convenient API.  Let's look at the snippet after conversion to python-oauthlib.

class OAuthAuthorizer(object):
    """Authenticate to OAuth protected APIs."""
    def __init__(self, token_key, token_secret, consumer_key, consumer_secret,
                 oauth_realm="OAuth"):
        """Initialize a ``OAuthAuthorizer``.

        ``token_key``, ``token_secret``, ``consumer_key`` and
        ``consumer_secret`` are required for signing OAuth requests.  The
        ``oauth_realm`` to use is optional.
        """
        # 2012-11-19 BAW: python-oauthlib requires unicodes for its tokens and
        # secrets.  Assume utf-8 values.
        # https://github.com/idan/oauthlib/issues/68
        self.token_key = _unicodeify(token_key)
        self.token_secret = _unicodeify(token_secret)
        self.consumer_key = _unicodeify(consumer_key)
        self.consumer_secret = _unicodeify(consumer_secret)
        self.oauth_realm = oauth_realm

    def sign_request(self, url, method, body, headers):
        """Sign a request with OAuth credentials."""
        # 2012-11-19 BAW: In order to preserve API backward compatibility,
        # convert empty string body to None.  The old python-oauth library
        # would treat the empty string as "no body", but python-oauthlib
        # requires None.
        if not body:
            body = None
        # Import oauthlib here so that you don't need it if you're not going
        # to use it.  Plan B: move this out into a separate oauth module.
        from oauthlib.oauth1 import Client, SIGNATURE_PLAINTEXT
        oauth_client = Client(self.consumer_key, self.consumer_secret,
                              self.token_key, self.token_secret,
                              signature_method=SIGNATURE_PLAINTEXT,
                              realm=self.oauth_realm)
        uri, signed_headers, body = oauth_client.sign(
            url, method, body, headers)
        headers.update(signed_headers)


See how much nicer this is?  You need only create a client object, essentially using all the same bits of information.  Then you ask the client to sign the request, and update the request headers with the signature.  Much easier.

Two important things to note.  If you are doing an HTTP GET, there is no request body, and thus no request content which needs to contribute to the signature.  In python-oauth, you could specify an empty body by using either None or the empty string.  piston-mini-client uses the latter, and this is embodied in its public API.  python-oauthlib however, treats the empty string as a body being present, so it would require the Content-Type header to be set even for an HTTP GET which has no content (i.e. no body).  This is why the replacement code checks for an empty string being passed in (actually, any false-ish value), and coerces that to None.

The second issue is that python-oauthlib requires the keys and secrets to be Unicode objects; they cannot be bytes objects.  In code ported straight from Python 2 however, these values are usually 8-bit strings, and so become bytes objects in Python 3.  python-oauthlib will raise a ValueError during signing if any of these are bytes objects.  Thus the use of the _unicodeify() function to decode these values to unicodes.

def _unicodeify(s):
    if isinstance(s, bytes):
        return s.decode('utf-8')
    return s


The above works in both Python 2 and Python 3.  Of course, we don't know for sure that the bytes values are UTF-8, but it's the only sane encoding to expect, and if a client of piston-mini-client were to be so insane as to use an incompatible encoding (US-ASCII is fine because it's compatible with UTF-8), it would be up to the client to just pass in unicodes in the first place.  At the time of this writing, this is under active discussion with upstream, but for now, it's not too difficult to work around.

Anyway, I hope this helps, and I encourage you to help increase the popularity of python-oauthlib on the Cheeseshop, so that we can one day finally kill off the long defunct python-oauth library.

pitti

With python-dbusmock you can provide mocks for arbitrary D-BUS services for your test suites or if you want to reproduce a bug.

However, when writing actual tests for gnome-settings-daemon etc. I noticed that it is rather cumbersome to always have to set up the “skeleton” of common services such as UPower. python-dbusmock 0.2 now introduces the concept of “templates” which provide those skeletons for common standard services so that your code only needs to set up the particular properties and specific D-BUS objects that you need. These templates can be parameterized for common customizations, and they can provide additional convenience methods on the org.freedesktop.DBus.Mock interface to provide more abstract functionality like “add a battery”.

So if you want to pretend you have one AC and a half-charged battery, you can now simply do

  def setUp(self):
     (self.p_mock, self.obj_upower) = self.spawn_server_template('upower', {})

  def test_ac_bat(self):
     self.obj_upower.AddAC('mock_AC', 'Mock AC')
     self.obj_upower.AddChargingBattery('mock_BAT', 'Mock Battery', 50.0, 1200)
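
Because the fake system bus address is exported to the environment, child processes such as upower itself see the mock too, so you can also assert on the client-side view. A sketch (the test name and the expected substring are my own; it assumes import subprocess at the top of the file):

  def test_ac_bat_in_dump(self):
     self.obj_upower.AddAC('mock_AC', 'Mock AC')
     self.obj_upower.AddChargingBattery('mock_BAT', 'Mock Battery', 50.0, 1200)
     # upower inherits $DBUS_SYSTEM_BUS_ADDRESS from the test environment
     out = subprocess.check_output(['upower', '--dump'],
                                   universal_newlines=True)
     self.assertIn('mock_BAT', out)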

Or, if your code is not in Python, use the CLI/D-BUS interface, like in shell:

  # start a fake system bus
  eval `dbus-launch`
  export DBUS_SYSTEM_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS

  # start mock upower on the fake bus
  python3 -m dbusmock --template upower &

  # add devices
  gdbus call --system -d org.freedesktop.UPower -o /org/freedesktop/UPower \
      -m org.freedesktop.DBus.Mock.AddAC mock_ac 'Mock AC'
  gdbus call --system -d org.freedesktop.UPower -o /org/freedesktop/UPower \
      -m org.freedesktop.DBus.Mock.AddChargingBattery mock_bat 'Mock Bat' 50.0 1200

In both cases upower --dump or gnome-power-statistics will show you the expected devices (of course you need to run that within the environment of the fake $DBUS_SYSTEM_BUS_ADDRESS, or run the mock on the real system bus as root).

Iftikhar Ahmad contributed a template for NetworkManager, which allows you to easily set up ethernet and wifi devices and wifi access points. See pydoc3 dbusmock.templates.networkmanager for details, and the test cases for what this looks like in practice.

I just released python-dbusmock 0.2.1 and uploaded the new version to Debian experimental. I will sync it into Ubuntu Raring in a few hours.


The 12.10 release is the first version of Ubuntu that supports Secure Boot out of the box. In what is largely an accident of release timing, from what I can tell (and please correct me if I'm wrong), this actually makes Ubuntu 12.10 the first general release of any OS to support Secure Boot. (Windows 8 of course is also now available; and I'm sure Matthew Garrett, who has been a welcome collaborator throughout this process, has everything in good order for the upcoming Fedora 18 release.)

That's certainly something of a bittersweet achievement. I'm proud of the work we've done to ensure Ubuntu will continue to work out of the box on the consumer hardware of the future; in spite of the predictable accusations on the blogowebs that we've sold out, I sleep well at night knowing that this was the pragmatic decision to make, maximizing users' freedom to use their hardware. All the same, I worry about what the landscape is going to look like in a few years' time. The Ubuntu first-stage EFI bootloader is signed by Microsoft, but the key that is used for signing is one that's recommended by Microsoft, not one that's required by the Windows 8 certification requirements. Will all hardware include this key in practice? The Windows 8 requirements also say that every machine must allow the user to disable Secure Boot. Will manufacturers get this right, and will users be able to make use of it in the event the manufacturer didn't include the Microsoft-recommended key? Only time will tell. But I do think the Linux community is going to have to remain engaged on this for some time to come, and possibly hold OEMs' feet to the fire for shipping hardware that will only work with Windows 8.

But that's for the future. For now, we have a technical solution in 12.10 that solves the parts of the problem that we can solve.

  • The first-stage UEFI bootloader on 12.10 install media is a build of Matthew Garrett's shim code, embedding a Canonical UEFI CA and signed by Microsoft. This is pretty much the same as the Fedora solution, just with a different key and a binary built on the Ubuntu build infrastructure rather than Fedora's. If Secure Boot is not enabled, the second-stage bootloader is booted without applying any checks. If Secure Boot is enabled, the signature is checked on the second stage before passing control, as expected.
  • The second-stage bootloader is GRUB 2. Readers may remember that there were earlier concerns about GPLv3 in the mix for some Ubuntu use cases, but these have been ironed out now in discussion with the FSF. This enables us to continue to provide a consistent boot experience across the different ways Ubuntu may be booted.
  • We are providing signed kernel images in the Ubuntu archive and using them by default. When a signed kernel is present, this allows the bootloader to pass control to the kernel without first calling ExitBootServices(), letting the kernel apply any EFI quirks it might need to (such as for bug #1065263).
  • Unlike Fedora, however, we are not enforcing kernel signature checking. If the kernel is unsigned and the system has Secure Boot turned on, the Ubuntu grub will fall back to calling ExitBootServices() first, and then booting the kernel. This way, users still have the freedom to boot their own kernels on Secure Boot-enabled hardware, possibly with a slightly degraded boot experience, while the firmware remains protected from untrusted code.
  • We are not enforcing module signing. This allows external modules, such as dkms packages, to continue to work with SB turned on. While there's potential value in being able to verify the source of kernel modules at load time, that's not something implemented for this round, and we would want such enforcement to be optional regardless.

This first release gives us preliminary support for booting on Secure Boot, but there's more work to be done to provide a full solution that's sustainable over the long term. We'll be discussing some of that this week at UDS in Copenhagen.

  • The 12.10 solution does not include support for SuSE's MOK (machine owner key) approach. Supporting this is important to us, as this helps to ensure users have freedom and control over their machines instead of being forced to choose between disabling Secure Boot and only running vendor-provided kernels. Among other things, we want to make sure people have the freedom to continue doing kernel and bootloader development, without having to muddle their way through impossible firmware menus!
  • We will be enhancing the tooling around db/dbx updates, to ensure that Ubuntu users of Secure Boot receive the same protection against malware that Windows users do. This also addresses the need for publishing revocations in the event of bugs in our grub or kernel implementations that would compromise the security of Secure Boot.
  • Netboot support is currently nonexistent. The major blocker seems to be that unlike in BIOS booting we can't rely on the firmware's PXE stack, and in practice GRUB2's tftp support doesn't seem to be very robust yet. Once we have a working GRUB2 image that successfully reads files over tftp, we can evaluate signing it for Secure Boot as well.

And as part of our commitment to enabling new hardware on the current LTS release, we will be backporting this work for inclusion in 12.04.2.

It remains to be decided how Debian will approach the Secure Boot question. At DebConf 12, many people seemed to consider it a foregone conclusion that Debian would never agree to include binaries in main signed by third-party keys. I don't think that should be a given; I think allowing third-party signatures in main for hardware compatibility is consistent with Debian's principles, and refusing to make Debian compatible with this hardware out of the box does nothing to advance user freedom. I hope to see frank discussion post-wheezy about keeping Debian relevant on consumer hardware of the future.

Timo Jyrinki

UDS GTA04 Hacking

I'm sitting at the Bella Sky lobby bar while UDS people keep pouring in. I guess I have to start this UDS with some hacking (and a little beer)! I had already bootstrapped an Ubuntu armhf rootfs and coupled it with QtMoko's kernel earlier, after I received my GTA04, but it didn't boot right away, so I had nothing to report. I wanted armhf, so I chose QtMoko's 'experimental' Debian armhf rootfs + boot files as the reference to look at while working on the Ubuntu rootfs. I now went through some of the configuration files again, and voilà:

Now running apt-get install unity over SSH :) It will require OpenGL ES 2.0 hardware acceleration to run, now that the support was integrated in Ubuntu 12.10. I will therefore need to find out what OMAP3 armhf binary blobs are available, and what the situation with the X.org DDX driver is again. I always feel that the fun starts at this point for me, when I have the device booting and I can SSH in. That's why I'm happy the work from Golden Delicious GmbH and QtMoko helped me get here...

pitti

For writing tests for GVFS (current tests, proposed improvements) I want to run Samba as a normal user, so that we can test gvfs’ smb backend without root privileges, and thus run the tests safely and conveniently in a “make check” environment for developers and in JHBuild for continuous integration testing. Previously these tests could only run under gvfs-testbed, which needs root.

Unlike with other servers such as ssh or ftp, this turned out to be surprisingly non-obvious and hard, so I want to document it in this blog post for posterity’s benefit.

Running the server

Running smbd itself is mainly an exercise of figuring out all the options that you need to set; Alex Larsson and I had some fun figuring out all the quirks and hiccups that happen between Ubuntu’s and Fedora’s packaging and 3.6 vs. 4.0, but finally arrived at something working.

First, you need to create an empty directory where smbd can put all its databases and state files. For tests you would use mkdtemp(), but for easier reading I just assume mkdir /tmp/samba here.

The main knowledge is in the Samba configuration file, let’s call it /tmp/smb.conf:

[global]
workgroup = TESTGROUP
interfaces = lo 127.0.0.0/8
smb ports = 1445
log level = 2
map to guest = Bad User
passdb backend = smbpasswd
smb passwd file = /tmp/smbpasswd
lock directory = /tmp/samba
state directory = /tmp/samba
cache directory = /tmp/samba
pid directory = /tmp/samba
private dir = /tmp/samba
ncalrpc dir = /tmp/samba

[public]
path = /tmp/public
guest ok = yes

[private]
path = /tmp/private
read only = no

For running this as a normal user you need to set a port > 1024, so this uses 1445 to resemble the original (privileged) port 445. The map to guest line makes anonymous logins work on Fedora/Samba 4.0 (I’m not sure whether it’s a distribution or a version issue). Don’t ask about “dir” vs. “directory”, that’s an inconsistency in Samba; with the above names it works on both 3.6 and 4.0.

We use the old “smbpasswd” backend, as shipping large tdb files is usually too inconvenient and brittle for test suites. I created an smbpasswd file by running smbpasswd on a “real” Samba installation, and then using pdbedit to convert it to an smbpasswd file:

sudo smbpasswd -a martin
sudo pdbedit -i tdbsam:/var/lib/samba/passdb.tdb -e smbpasswd:/tmp/smbpasswd

The result for password “foo” is

myuser:0:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:AC8E657F83DF82BEEA5D43BDAF7800CC:[U          ]:LCT-507C14C7:

which you are welcome to copy&paste (you can replace “myuser” with any valid user name, of course).

This also defines two shares, one public, one authenticated. You need to create the directories and populate them a bit:

mkdir /tmp/public /tmp/private
echo hello > /tmp/public/hello.txt
echo secret > /tmp/private/myfile.txt

Now you can run the server with

smbd -iFS -s /tmp/smb.conf

The main problem with this approach is that smbd exits (“Server exit (failed to receive smb request)”) after a client terminates, so you need to write your tests so that they only run one connection/request per test, or start smbd in a loop.
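
In a Python test suite, the per-test variant can be as simple as spawning a fresh smbd around each connection; a rough sketch, reusing the paths from above:

import subprocess

# smbd -iFS stays in the foreground and exits after one client disconnects,
# so start a fresh instance for each test case
smbd = subprocess.Popen(['smbd', '-iFS', '-s', '/tmp/smb.conf'])
try:
    pass  # run exactly one client connection against port 1445 here
finally:
    smbd.terminate()
    smbd.wait()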

Running the client

If you merely use the smbclient command line tool, this is rather simple: It has a -p option for specifying the port:

$ smbclient -p 1445 //localhost/private
Enter martin's password: [enter "foo" here]
Domain=[TESTGROUP] OS=[Unix] Server=[Samba 3.6.6]
smb: \> dir
  .                                   D        0  Wed Oct 17 08:28:23 2012
  ..                                  D        0  Wed Oct 17 08:31:24 2012
  myfile.txt                                   7  Wed Oct 17 08:28:23 2012

In the case of gvfs it wasn’t so simple, however. Surprisingly, libsmbclient does not have an API to set the port, it always assumes 445. smbclient itself uses some internal “libcli” API which does have a way to change the port, but it’s not exposed through libsmbclient. However, Alex and I found some mailing list posts ([1], [2]) that mention $LIBSMB_PROG, and it’s also mentioned in smbclient’s manpage. It doesn’t quite work as advertised in the second ML post (you can’t set it to smbd, smbd apparently doesn’t speak the socket protocol over stdin/stdout), and it’s not being used anywhere in the current Samba sources, but what does work is to use good old netcat:

export LIBSMB_PROG="nc localhost 1445"

With that, you can use smbclient or any program using libsmbclient to talk to our test smb server running as a normal user.
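
In a test suite this means setting the variable in the environment of the program under test, along these lines (a sketch; the share name matches the configuration above):

import os
import subprocess

# route libsmbclient's connection through netcat to the unprivileged smbd
env = dict(os.environ, LIBSMB_PROG='nc localhost 1445')
subprocess.check_call(['smbclient', '-N', '//localhost/public', '-c', 'dir'],
                      env=env)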

Timo Jyrinki

OpenPhoenux GTA04

Postman was in a kind mood yesterday. My OpenPhoenux GTA04 arrived! I had my newer Neo FreeRunner upgraded via the service from Golden Delicious.

First, something rare to behold in phones nowadays - Made in Bavaria:

As a sidenote, also rare now - my Nokia N9 is one of the last phones that had this:

It's sad that that's now a thing of the past for both hardware assembly and software! But that's just a hint for new companies to step in.

But back to GTA04 - a quick boot to the pre-installed Debian w/ LXDE:

And then as a shortcut unpacked and booted into QtMoko (well, it's also Debian) instead to test that phone functionality also works - and it does:

Next up: brewing something of my own!

pitti

I found it surprisingly hard to determine in tearDown() whether or not the test that just ran succeeded. I am writing some tests for gnome-settings-daemon and want to show the log output of the daemon if a test failed.

I now cobbled together the following hack, but I wonder if there’s a more elegant way? The interwebs don’t seem to have a good solution for this either.

    def tearDown(self):
        [...]
        # collect log, run() shows it on failures
        with open(self.daemon_log.name) as f:
            self.log_output = f.read()

    def run(self, result=None):
        '''Show log output on failed tests'''

        if result:
            orig_err_fail = len(result.errors) + len(result.failures)
        super().run(result)
        if result and len(result.errors) + len(result.failures) > orig_err_fail:
            print('\n----- daemon log -----\n%s\n------\n' % self.log_output)

Timo Jyrinki

I believe I'm not the only one who thinks that use case oriented Grub2 documentation is hard to find, and that a lot of the documentation is obsolete or wrong. My main reason for writing this blog post is a currently unanswered question regarding 2.00; meanwhile, months have passed and most 1.99 documentation is still wrong as well, which might be interesting to some.

The aim is to prevent grub entries from being edited, while not restricting actual booting. This protection is meant for computers that don't hold any confidential data, where you just want some lightweight security, under the assumption that the computer isn't physically opened.

Common setup

You will obviously want to disable any automatically generated entries that give root access, for example by uncommenting GRUB_DISABLE_RECOVERY="true" in /etc/default/grub on Debian or Ubuntu. You would also disable booting from any external devices in the BIOS/EFI/coreboot setup, which you would likewise protect with a password. That often means you need to disable USB legacy support as well, since some BIOSes otherwise offer all USB devices as bootable without a password (note that I guess this could also cause problems accessing the setup on desktop computers if your only keyboard is USB).

1.99

So, to first fix the false instructions found in various places - no, setting the superuser in 00_header as instructed is not enough. It might be, but it does not apply if e.g. old kernels are put into a submenu (Ubuntu bug 718670, Fedora bug 836259). The protection against editing does not apply there. And if you remove all but one kernel so that there is no submenu, a submenu will be created automatically when a new kernel is installed via security updates. I didn't need the submenu feature anyway, so I used to comment out the following lines in /etc/grub.d/10_linux:

  #if [ "$list" ] && ! $in_submenu; then
    #echo "submenu \"Previous Linux versions\" {"
    #in_submenu=:
  #fi
...
#if $in_submenu; then
#  echo "}"
#fi

I hope that was useful. I can imagine it causing a couple of family battles if the commonly instructed setup was the only protection used and there's for example a case of two computer savvy siblings that are eager to get to each others' computers...

2.00 & The Question

The problem with 2.00 is that the superusers setup yields a non-bootable system, i.e. a password is required for booting. But Google wasn't smiling at me today! Terrible. Can you help me (and others) with 2.00? The aim would be to have a 1.99-like setup where the superuser password protects all entries from editing, but booting is fine without any passwords.

Update: Thanks, problem solved, see comments! Find the following line in /etc/grub.d/10_linux:

      echo "menuentry '$(echo "$os" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"

And add --unrestricted there. Don't mix the line up with the other menuentry line two lines earlier. The submenu problem doesn't exist anymore in 2.00.

pitti

I was working on writing tests for gnome-settings-daemon a week or so ago, and finally got blocked on being unable to set up upower/ConsoleKit/etc. the way I need them. Also, doing so needs root privileges, I don’t want my test suite to actually suspend my machine, and using the real service is generally not suitable for test suites that are supposed to run during “make check”, in jhbuild, and the like — these do not have the polkit privileges to do all that, and may not even have a system D-Bus running in the first place.

So I wrote a little test_upower.py helper, then realized that I need another one for systemd/ConsoleKit (for the “system idle” property), also looked at the mock polkit in udisks and finally sat down for two days to generalize this and do this properly.

The result is python-dbusmock; I just released the first tarball. With this you can easily create mock objects on D-Bus from any programming language with a D-Bus binding, or even from the shell.

The mock objects look like the real API (or at least the parts that you actually need), but they do not actually do anything (or only some action that you specify yourself). You can configure their state, behaviour and responses as you like in your test, without making any assumptions about the real system status.

When using a local system/session bus, you can do unit or integration testing without needing root privileges or disturbing a running system. The Python API offers some convenience functions like “start_session_bus()” and “start_system_bus()” for this, in a “DBusTestCase” class (subclass of the standard “unittest.TestCase”).

Surprisingly I found very little precedent here. There is a Perl module, but that’s not particularly helpful for test suites in C/Vala/Python. And there is Phil’s excellent Bendy Bus, but this has a different goal: If you want to thoroughly test a particular D-BUS service, such as ensuring that it does the right thing, doesn’t crash on bad input, etc., then Bendy Bus is for you (and python-dbusmock isn’t). However, it is too much overhead and rather inconvenient if you want to test a client-side program and just need a few system services around it which you want to set up in different states for each test.

You can use python-dbusmock with any programming language, as you can run the mocker as a normal program. The actual setup of the mock (adding objects, methods, properties, etc.) all happens via D-Bus methods on the “org.freedesktop.DBus.Mock” interface. You just don’t have the convenience D-Bus launch API.

The simplest possible example is to create a mock upower with a single Suspend() method, which you can set up like this from Python:

import subprocess

import dbus
import dbusmock

class TestMyProgram(dbusmock.DBusTestCase):
[...]
    def setUp(self):
        self.p_mock = self.spawn_server('org.freedesktop.UPower',
                                        '/org/freedesktop/UPower',
                                        'org.freedesktop.UPower',
                                        system_bus=True,
                                        stdout=subprocess.PIPE)

        # Get a proxy for the UPower object's Mock interface
        self.dbus_upower_mock = dbus.Interface(self.dbus_con.get_object(
            'org.freedesktop.UPower', '/org/freedesktop/UPower'),
            'org.freedesktop.DBus.Mock')

        self.dbus_upower_mock.AddMethod('', 'Suspend', '', '', '')

[...]

    def test_suspend_on_idle(self):
        # run your program in a way that should trigger one suspend call

        # now check the log that we got one Suspend() call
        self.assertRegex(self.p_mock.stdout.readline(), b'^[0-9.]+ Suspend$')

This doesn’t depend on Python; you can just as well run the mocker like this:

python3 -m dbusmock org.freedesktop.UPower /org/freedesktop/UPower org.freedesktop.UPower

and then set up the mocks through D-Bus like

gdbus call --system -d org.freedesktop.UPower -o /org/freedesktop/UPower \
      -m org.freedesktop.DBus.Mock.AddMethod '' Suspend '' '' ''

If you use it with Python, you get access to the dbusmock.DBusTestCase class which provides some convenience functions to set up and tear down local private session and system buses. If you use it from another language, you have to call dbus-launch yourself.
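
For completeness, the Python setup typically looks like this; it is also where the self.dbus_con used in the example above comes from:

import dbus
import dbusmock

class TestMyProgram(dbusmock.DBusTestCase):
    @classmethod
    def setUpClass(klass):
        # start a private system bus; its address is exported to the
        # environment, so that spawned mock servers and the program under
        # test talk to it instead of the real system bus
        klass.start_system_bus()
        klass.dbus_con = klass.get_dbus(system_bus=True)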

Please see the README for some more details, pointers to documentation and examples.

Update: You can now install this via pip from PyPI or from the daily builds PPA.

Update 2: Adjusted blog entry for version 0.0.3 API, to avoid spreading now false information too far.

pitti

I just released PyGObject 3.3.92, for GNOME 3.5.92.

There is nothing too exciting in this release; a couple of small bug fixes and a lot of new test cases. See the detailed list of changes below.

Thanks to all contributors!

Changes:

  • release-news: Generate HTML changelog (Martin Pitt)
  • [API add] Add ObjectInfo.get_abstract method (Simon Feltman) (#675581)
  • Add deprecation warning when setting gpointers to anything other than int. (Simon Feltman) (#683599)
  • test_properties: Test accessing a property from a superclass (Martin Pitt) (#684058)
  • test_properties.py: Consistent test names (Martin Pitt)
  • test_everything: Ensure TestSignals callback does get called (Martin Pitt)
  • argument: Fix 64bit integer convertion from GValue (Nicolas Dufresne) (#683596)
  • Add Simon Feltman as a project maintainer (Martin Pitt)
  • test_signals.py: Drop global type variables (Martin Pitt)
  • test_signals.py: Consistent test names (Martin Pitt)
  • Add test cases for GValue signal arguments (Martin Pitt) (#683775)
  • Add test for GValue signal return values (Martin Pitt) (#683596)
  • Improve setting pointer fields/arguments to NULL using None (Simon Feltman) (#683150)
  • Test gint64 C signal arguments and return values (Martin Pitt)
  • Test in/out int64 GValue method arguments. (Martin Pitt) (#683596)
  • Bump g-i dependency to 1.33.10 (Martin Pitt)
  • Fix -uninstalled.pc.in file (Thibault Saunier) (#683379)

pitti

PostgreSQL 9.2 has just been released, after a series of betas and a release candidate. See for yourself what’s new, and try it out!

Packages are available in Debian experimental as well as my PostgreSQL backports PPA for Ubuntu 10.04 to 12.10, as usual.

Please note that 9.2 will not land any more in the feature frozen Debian Wheezy and Ubuntu Quantal (12.10) releases, as none of the server-side extensions are packaged for 9.2 yet.

pitti

I just released PyGObject 3.3.91, for GNOME 3.5.91.

The big new feature in this release (thanks to the release team for granting an exception) is Simon Feltman’s new Signal helper class, which makes defining custom signals a whole lot simpler and more obvious. In the past, you had to do

class C(GObject.GObject):
    __gsignals__ = {
        'my_signal': (GObject.SIGNAL_RUN_FIRST, GObject.TYPE_NONE,
                      (GObject.TYPE_INT,))
    }

    def do_my_signal(self, arg):
        print("my_signal called with %i" % arg)

whereas now this looks like

class C(GObject.GObject):
    @GObject.Signal(arg_types=(int,))
    def my_signal(self, arg):
        print("my_signal called with %i" % arg)

or even more elegantly when using Python 3 and its new type annotations:

class C(GObject.GObject):
    @GObject.Signal
    def my_signal(self, arg:int):
        print("my_signal called with %i" % arg)
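
Whichever variant you pick, the result behaves like any other GObject signal; a quick usage sketch (GLib canonicalizes my_signal and my-signal to the same name):

c = C()
c.connect('my-signal', lambda obj, arg: print('handler got', arg))
c.emit('my-signal', 42)  # runs the default handler above, then the lambda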

Check out the updated example and docstring for other ways to use it.

Overrides can now be in a directory different from the one that pygobject installs itself into. These overrides need to put this into their __init__.py at the top:

from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)

and put themselves somewhere into the default PYTHONPATH. This should make it a lot easier for library packages to ship their own overrides for Python.

This new version also comes with a couple of new overrides and bug fixes. See the detailed list of changes below.

Thanks to all contributors!

  • Fix exception test case for Python 2 (Martin Pitt)
  • Bump g-i dependency to >= 1.3.9 (Martin Pitt)
  • Show proper exception when trying to allocate a disguised struct (Martin Pitt) (#639972)
  • Support marshalling GParamSpec signal arguments (Mark Nauwelaerts) (#683099)
  • Add test for a signal that returns a GParamSpec (Martin Pitt) (#683265)
  • [API add] Add Signal class for adding and connecting custom signals. (Simon Feltman) (#434924)
  • Fix pygtkcompat’s Gtk.TreeView.insert_column_with_attributes() (Martin Pitt)
  • Add override for Gtk.TreeView.insert_column_with_attributes() (Marta Maria Casetti) (#679415)
  • .gitignore: Add missing built files (Martin Pitt)
  • Ship tests/gi in tarball (Martin Pitt)
  • Split test_overrides.py (Martin Pitt) (#683188)
  • _pygi_argument_to_object(): Clean up array unmarshalling (Martin Pitt)
  • Fix memory leak in _pygi_argument_to_object() (Alban Browaeys) (#682979)
  • Fix setting pointer fields/arguments to NULL using None. (Simon Feltman) (#683150)
  • Fix for python 2.6, officially drop support for < 2.6 (Martin Pitt) (#682422)
  • Allow overrides in other directories than gi itself (Thibault Saunier) (#680913)
  • Clean up sys.path handling in tests (Simon Feltman) (#680913)
  • Fix dynamic creation of enum and flag gi types for Python 3.3 (Simon Feltman) (#682323)
  • [API add] Override g_menu_item_set_attribute (Paolo Borelli) (#682436)

pitti

The unstoppable PostgreSQL team just announced the first release candidate of 9.2, with several bug fixes since Beta 4. If you haven’t tested 9.2 yet, now is the time! Remember that you can run a copy of your 8.4 or 9.1 cluster in parallel for testing with pg_upgradecluster.

If you use Debian, 9.2rc1 will be available in experimental in a few hours. For Ubuntu, you can get packages for all supported releases from my PostgreSQL backports PPA as usual.

Enjoy!

pitti

I just released PyGObject 3.3.90, for GNOME 3.5.90.

This is now working correctly on big-endian 64 bit machines such as powerpc64, and fixes marshalling for GParamSpec attributes and return values, as well as a few small bug fixes.

Thanks to all contributors!

Complete list of changes:

  • Implement marshalling for GParamSpec (Mathieu Duponchelle) (#681565)
  • Fix erronous import statements for Python 3.3 (Simon Feltman) (#682051)
  • Do not fail tests if pyflakes or pep8 are not installed (Martin Pitt)
  • Fix PEP-8 whitespace checking and issues in the code (Martin Pitt)
  • Fix unmarshalling of gssize (David Malcolm) (#680693)
  • Fix various endianess errors (David Malcolm) (#680692)
  • Gtk overrides: Add TreeModelSort.__init__(self, model) (Simon Feltman) (#681477)
  • Convert Gtk.CellRendererState in the pygi-convert script (Manuel Quiñones) (#681596)

pitti

I have had the pleasure of attending GUADEC in full this year. TL;DR: A lot of great presentations + lots of hall conversations about QA stuff + the obligatory be{er,ach} = ?.

Last year I just went to the hackfest, and I never made it to any previous one, so GUADEC 2012 was a kind of first-time experience for me. It was great to put some faces and personal impressions to a lot of the people I have worked with for many years, as well as catching up with others that I did meet before.

I discussed various hardware/software testing stuff with Colin Walters, Matthias Clasen, Lennart Poettering, Bertrand Lorentz, and others, so I have a better idea how to proceed with automated testing in plumbing and GNOME now. It was also really nice to meet my fellow PyGObject co-maintainer Paolo Borelli, as well as seeing Simon Schampier and Ignacio Casal Quinteiro again.

No need to replicate the whole schedule here (see for yourself on the interwebs), so I just want to point out some personal highlights in the presentations:

  • Jacob Appelbaum’s keynote about Tor brought up some surprising facts about how the project has outgrown its past performance problems and how useful it was during e. g. the Arab revolution.
  • Philip Withnall presented Bendy Bus, a tool to mock D-Bus services for both unit and fuzz testing. He successfully used it to find and replicate bugs in Evolution (by mocking evolution-data-server) as well as libfolks (by mocking the telepathy daemons). It should work just as well to mock system services like upower or NetworkManager to test the UI bits that use them. This is a topic which has been on my wishlist for a long time already, so I’m happy that there is already an existing solution out there. We might have to add some small features to it, but it’s by and large what I had in mind, and in the discussion afterwards Philip said he’d appreciate patches against it.
  • Christophe Fergeau showed how to easily do Windows builds and installers from GNOME tarballs with MinGW-w64, without having to actually touch/use Windows (using cross-building and running tests etc. under wine). I found it surprising how easy that actually is, and it should not be hard to integrate that in a jhbuild-like setup, so that it does not keep breaking every time.
  • Colin Walters gave an introduction to OSTree, a project to build bootable images from kernel/plumbing/desktop upstream git heads on a daily basis. This is mostly to avoid the long delays that we otherwise have with doing upstream releases, packaging them, and getting them into a form that can safely be tested by users. In an afterwards discussion we threw some ideas around how we can integrate existing and future tests into this (something in spirit like our autopkgtest). This will be the area where I’ll put most focus on in the next time.
  • Adam Dingle of Yorba fame shared his thoughts about how we can make crowdfunding of Free Software projects work in practice, comparing efforts like codefoundry and kickstarter. Of course he does not have a solution for this yet, but he raised some interesting concerns, and it spun off lots of good discussions over lunchtime.
  • Last but not least, the sport event on Saturday evening was awesome! In hindsight I was happy to not have signed up for soccer, as people like Bastian or Jordi played this really seriously. I participated in the Basketball competition instead, which was the right mix of fun and competition without seriously trying to hurt each other. :-)

There were a lot of other good ones, some of them technical and some amusing and enlightening, such as Federico’s review of the history of GNOME.

On Monday I prepared and participated in a PyGObject hackfest, but I’ll blog about this separately.

I want to thank all presenters for the excellent program, as well as the tireless GUADEC organizer team for making everything so smooth and working flawlessly! Great Job, and see you again in Strasbourg or Brno!

pitti

I started to collect some easy PyGObject bugs which are appropriate for the PyGObject hackfest at GUADEC on July 30th. These are bugs which do not need a lot of previous knowledge and are excellent starters for new contributors, such as adding overrides, fixing build system issues, etc.

I also created an initial idea pool/agenda/coordination page, where participants can add or sign up for things to work on.

Feel free to add your own topics! I’m really looking forward to GUADEC and the hackfest, see you there!

GUADEC 2012

Timo Jyrinki

Do you ever have one of those afternoons where you get a “great” idea and you have the whole evening for the task? The task is a relaxing one that won't need much attention, and you can watch a movie or something. But then the evening turns into night as you realize a couple of little details that add complexity to the idea, and the task turns out to be much more invasive to your evening than you thought?

In this example, I got the great idea to upgrade my Debian-running NAS device (thanks Martin for everything!) to use ext4 instead of ext3. It's the kind of idea that takes a long time for relatively little practical benefit, but it just feels like a nice thing to do when you have some extra nerd time available. It's basically just opening the NAS device up, mounting its hard disk on a laptop via an external case, running tune2fs and fsck, and then putting the disk back. It just takes a long time for the initial fsck (to make sure everything's intact) and then the fsck run required to make the filesystem mountable as ext4.

Only, in this situation it would have been beneficial to have ext4 support in the flashed initramfs before the migration. So... before the photo below, I had already:

  1. done the ext4 migration and fscks
  2. screwed the disk back into the NAS case, attached the cables, and found that it doesn't boot
  3. (on the laptop with the hard disk attached again, tried manually unpacking the initramfs and adding the ext4 module... also had time to bind mount everything and chroot into the ARM system to run update-initramfs manually... also tried booting with those... until I remembered the simple fact that the /boot partition is only for show, and the initramfs is actually loaded directly from flash)
  4. copied the main root filesystem content from the original disk to another external disk with an ext3 partition
  5. attached that other disk (with the same UUIDs) to the QNAP NAS device, booted, double-checked that I now have ext4 specified under /etc/initramfs-tools/modules, and reconfigured the linux image, which also regenerates the initramfs and flashes it
And in the photo, what's happening is that:
  1. I have the original disk reattached again, and the system booted with the initramfs generated and flashed from the ext3 disk
  2. the NAS device is hanging in the air, cover open, from the closet where I've things stuffed in (normally secured with cable ties), and I need to support it with a knee or one hand, since the 2TB disk is much heavier than the small SSD I used as the ext3 disk, so the power and RJ-45 cables would otherwise be under a pretty heavy load
  3. since I have only one hand free and can't use a laptop, I'm logged in via my Nokia N9 and reflashing the kernel + initramfs from this original disk, just to make sure everything is now alright and that it still boots after that flashing (it does!). Note that I feel the setup is secure enough for uninterrupted flashing, so that I can indeed support the NAS with a knee, hold the N9 in one hand, and take a photo with the camera in the other.

And so we have had a productive and educational afternoon/evening/night once again. Does this ever happen to you?
