Canonical Voices

Posts tagged with 'testing'

pitti

I found it surprisingly hard to determine in tearDown() whether the test that just ran succeeded or not. I am writing some tests for gnome-settings-daemon and want to show the log output of the daemon if a test failed.

For now I have cobbled together the following hack, but I wonder if there’s a more elegant way? The interwebs don’t seem to have a good solution for this either.

    def tearDown(self):
        [...]
        # collect log, run() shows it on failures
        with open(self.daemon_log.name) as f:
            self.log_output = f.read()

    def run(self, result=None):
        '''Show log output on failed tests'''

        # remember the failures/errors recorded before this test ran; the
        # lists only ever grow, so a "greater than" comparison afterwards
        # means this test added at least one new failure or error
        if result:
            orig_err_fail = result.errors + result.failures
        super().run(result)
        if result and result.errors + result.failures > orig_err_fail:
            print('\n----- daemon log -----\n%s\n------\n' % self.log_output)
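
For illustration only (this is not from the original post): a self-contained toy version of the same hack, where the “daemon log” is just a temporary file filled by the test itself, and a deliberately failing test triggers the log dump:

    import tempfile
    import unittest

    class DaemonLogTest(unittest.TestCase):
        def setUp(self):
            # stand-in for the real daemon's log file
            self.daemon_log = tempfile.NamedTemporaryFile(suffix='.log')
            self.daemon_log.write(b'daemon said something interesting\n')
            self.daemon_log.flush()
            self.addCleanup(self.daemon_log.close)

        def tearDown(self):
            # collect log, run() shows it on failures
            with open(self.daemon_log.name) as f:
                self.log_output = f.read()

        def run(self, result=None):
            '''Show log output on failed tests'''
            if result:
                orig_err_fail = result.errors + result.failures
            super().run(result)
            if result and result.errors + result.failures > orig_err_fail:
                print('\n----- daemon log -----\n%s\n------\n' % self.log_output)

        def test_failure_dumps_log(self):
            self.fail('deliberate failure to demonstrate the log dump')

    if __name__ == '__main__':
        unittest.main()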

Read more
pitti

I was working on writing tests for gnome-settings-daemon a week or so ago, and finally got blocked on being unable to set up upower/ConsoleKit/etc. the way I need them. Also, doing so needs root privileges, I don’t want my test suite to actually suspend my machine, and using the real service is generally not suitable for test suites that are supposed to run during “make check”, in jhbuild, and the like — these do not have the polkit privileges to do all that, and may not even have a system D-Bus running in the first place.

So I wrote a little test_upower.py helper, then realized that I needed another one for systemd/ConsoleKit (for the “system idle” property), also looked at the mock polkit in udisks, and finally sat down for two days to generalize this and do it properly.

The result is python-dbusmock; I just released the first tarball. With this you can easily create mock objects on D-Bus from any programming language with a D-Bus binding, or even from the shell.

The mock objects look like the real API (or at least the parts that you actually need), but they do not actually do anything (or only some action that you specify yourself). You can configure their state, behaviour and responses as you like in your test, without making any assumptions about the real system status.

When using a local system/session bus, you can do unit or integration testing without needing root privileges or disturbing a running system. The Python API offers some convenience functions for this, such as “start_session_bus()” and “start_system_bus()”, in a “DBusTestCase” class (a subclass of the standard “unittest.TestCase”).
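
For illustration (a sketch assuming the dbusmock API just described, not a verbatim excerpt from its documentation): a test class would typically start a private, throw-away system bus once in setUpClass() and keep a connection to it around, which is also where the dbus_con attribute used in the example below comes from:

import dbus
import dbusmock

class TestMyProgram(dbusmock.DBusTestCase):
    @classmethod
    def setUpClass(cls):
        # start a private system bus just for this test class
        cls.start_system_bus()
        # connect to it; the tests use this connection to talk to the mock
        cls.dbus_con = cls.get_dbus(system_bus=True)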

Surprisingly I found very little precedent here. There is a Perl module, but that’s not particularly helpful for test suites in C/Vala/Python. And there is Phil’s excellent Bendy Bus, but this has a different goal: If you want to thoroughly test a particular D-BUS service, such as ensuring that it does the right thing, doesn’t crash on bad input, etc., then Bendy Bus is for you (and python-dbusmock isn’t). However, it is too much overhead and rather inconvenient if you want to test a client-side program and just need a few system services around it which you want to set up in different states for each test.

You can use python-dbusmock with any programming language, as you can run the mocker as a normal program. The actual setup of the mock (adding objects, methods, properties, etc.) all happens via D-Bus methods on the “org.freedesktop.DBus.Mock” interface. You just don’t have the convenience D-Bus launch API.

The simplest possible example is to create a mock upower with a single Suspend() method, which you can set up like this from Python:

import subprocess

import dbus
import dbusmock

class TestMyProgram(dbusmock.DBusTestCase):
[...]
    def setUp(self):
        self.p_mock = self.spawn_server('org.freedesktop.UPower',
                                        '/org/freedesktop/UPower',
                                        'org.freedesktop.UPower',
                                        system_bus=True,
                                        stdout=subprocess.PIPE)

        # Get a proxy for the UPower object's Mock interface
        # (self.dbus_con is the bus connection set up in setUpClass, elided above)
        self.dbus_upower_mock = dbus.Interface(self.dbus_con.get_object(
            'org.freedesktop.UPower', '/org/freedesktop/UPower'),
            'org.freedesktop.DBus.Mock')

        self.dbus_upower_mock.AddMethod('', 'Suspend', '', '', '')
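
        # illustrative extra setup (not in the original example): dbusmock can
        # also preset D-Bus properties on the mock object, roughly like this
        self.dbus_upower_mock.AddProperties('org.freedesktop.UPower',
                                            {'OnBattery': dbus.Boolean(False, variant_level=1)})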

[...]

    def test_suspend_on_idle(self):
        # run your program in a way that should trigger one suspend call

        # now check the log that we got one Suspend() call
        self.assertRegex(self.p_mock.stdout.readline(), b'^[0-9.]+ Suspend$')
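
Not shown above: the spawned mock server process should be cleaned up again after each test. A minimal tearDown() sketch for this, assuming the self.p_mock handle from setUp():

    def tearDown(self):
        # stop the mock D-Bus server that setUp() spawned
        self.p_mock.stdout.close()
        self.p_mock.terminate()
        self.p_mock.wait()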

This doesn’t depend on Python; you can just as well run the mocker like this:

python3 -m dbusmock org.freedesktop.UPower /org/freedesktop/UPower org.freedesktop.UPower

and then set up the mocks through D-Bus like

gdbus call --system -d org.freedesktop.UPower -o /org/freedesktop/UPower \
      -m org.freedesktop.DBus.Mock.AddMethod '' Suspend '' '' ''

If you use it with Python, you get access to the dbusmock.DBusTestCase class which provides some convenience functions to set up and tear down local private session and system buses. If you use it from another language, you have to call dbus-launch yourself.

Please see the README for some more details, pointers to documentation and examples.

Update: You can now install this via pip from PyPI or from the daily builds PPA.

Update 2: Adjusted blog entry for version 0.0.3 API, to avoid spreading now false information too far.

Read more
pitti

The unstoppable PostgreSQL team just announced the first release candidate of 9.2, with several bug fixes since Beta 4. If you haven’t tested 9.2 yet, now is the time! Remember that you can run a copy of your 8.4 or 9.1 cluster in parallel for testing with pg_upgradecluster.

If you use Debian, 9.2rc1 will be available in experimental in a few hours. For Ubuntu, you can get packages for all supported releases from my PostgreSQL backports PPA as usual.

Enjoy!

Read more
pitti

As I wrote two weeks ago, I consider the QA related changes in Ubuntu 12.04 a great success. But while we will continue and even extend our efforts there, this is not where it stops: it’s great to have the feedback cycle between “break it” and “notice the bug” reduced from potentially a few months to one day in many cases, but wouldn’t it be cool to reduce that to a few minutes, and also put the machinery right where stuff really happens — into the upstream trunks? What if for every commit to PyGObject, GTK, NetworkManager, udisks, D-BUS, telepathy, gvfs, etc. we’d immediately build and test all reverse dependencies, and the committer would be told about regressions?

I have had the desire to work on automated tests in Linux Plumbing and GNOME for quite a while now. Also, after 8 years of doing distribution work of packaging and processes (tech lead, release engineering/management, stable release updates, etc.) I wanted to shift my focus towards technology development. So I’ve been looking for a new role for some time now.

It seems that time is finally there: At the recent UDS, Mark announced that we will extend our QA efforts to upstream. I am very happy that in two weeks I can now move into a role to make this happen: Developing technology to make testing easier, work with our key upstreams to set up test suites and reporting, and I also can do some general development in areas that are near and dear to my heart, like udev/systemd, udisks, pygobject, etc. This work will be following the upstream conventions for infrastructure and development policies. In particular, it is not bound by Canonical copyright license agreements.

I have a bunch of random ideas what to work on, such as:

  • Making it possible/easier to write tests with fake hardware. E. g. in the upower integration tests that I wrote a while ago there is some code to create a fake sysfs tree; this should really go into udev itself, be available from C and GObject introspection, and be greatly extended. Also, it’s currently not possible to simulate a uevent that way; that’s something I’d like to add. Right now you can only set up /sys, start your daemon, and check the state after the coldplugging phase. (See the sketch after this list for the rough idea of a fake sysfs tree.)
  • Interview some GNOME developers about what kind of bugs/regressions/code they have most trouble with and what/how they would like to test. Then write a test suite with a few working and one non-working case (Bugzilla should help with finding these), discuss the structure with the maintainer again, and find some ways to make the tests radically simpler by adding/enhancing the API available from gudev/glib/gtk, etc. E. g. in the tests for apport-gtk I noticed that while it’s possible to do automatic testing of GUI applications, it is still way harder than it should be. I guess that’s the main reason why there are hardly any GUI tests in GNOME?
  • I’ve heard from several people that it would be nice to be able to generate some mock wifi/ethernet/modem adapters to be able to automatically test NetworkManager and the like. As network devices are handled specially in Linux, not in the usual /dev and sysfs manner, they are not easy to fake. It probably needs a kernel module similar to scsi_debug, which fakes enough of the properties and behaviour of particular network card devices to be useful for testing. One could certainly provide a pipe or a regular bridge device at the other end to actually talk to the application through the fake device. (NB this is just an idea, I haven’t looked into details at all yet).
  • For some GUI tests it would be much easier if there was a very simple way of providing mocks/stubs for D-BUS services like udisks or NetworkManager than having to set up the actual daemons, coerce them into some corner-case behaviour, and needing root privileges for the test due to that. There seems to be some prior art in Ruby, but I’d really like to see this in D-BUS itself (perhaps a new library there?), and/or having this in GDBus where it would even be useful for Python or JavaScript tests through gobject-introspection.
  • There are nice tools like the Clang static code analyzer out there. I’d like to play with those and see how we can use it without generating a lot of false positives.
  • Robustify jhbuild to be able to keep up with building everything largely unattended. Right now you need to blow away and rebuild your tree way too often, due to brittle autotools or undeclared dependencies, and if we want to run this automatically in Jenkins it needs to be able to recover by itself. It should be able to keep up with the daily development, automatically starting build/test jobs for all reverse dependencies for a module that just has changed (and for basic libraries like GLib or GTK that’s going to be a lot), and perhaps send out mail notifications when a commit breaks something else. This also needs some discussion first, about how/where to do the notifications, etc.
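
To make the first idea above more concrete, here is a heavily simplified, purely hypothetical sketch of what “creating a fake sysfs tree” means: build a directory that mimics /sys (here with a laptop battery), then point the daemon under test at that directory instead of the real /sys (the exact mechanism for that depends on the daemon):

import os
import tempfile

def add_fake_battery(sysfs_root):
    '''Create a minimal fake battery device under a fake sysfs root.'''
    bat = os.path.join(sysfs_root, 'class', 'power_supply', 'BAT0')
    os.makedirs(bat)
    # attribute names/values mimic a discharging laptop battery
    for name, value in [('type', 'Battery'),
                        ('present', '1'),
                        ('status', 'Discharging'),
                        ('energy_full', '60000000'),
                        ('energy_now', '48000000')]:
        with open(os.path.join(bat, name), 'w') as f:
            f.write(value + '\n')

sysfs_root = tempfile.mkdtemp(prefix='fake-sysfs-')
add_fake_battery(sysfs_root)
# ... start the daemon against sysfs_root instead of /sys and run assertions ...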

Other ideas will emerge, and I hope lots of you have your own ideas for what we can do. So please talk to me! We’ll also look for a second person to work in that area, so that we have some capacity and also the possibility to bounce ideas and code reviews off each other.

Read more
pitti

The first Beta of the upcoming PostgreSQL 9.2 was released yesterday (see announcement). Your humble maintainer has now created packages for you to test. Please give them a whirl, and report any problems/regressions that you may see to the PostgreSQL developers, so that we can have a rock solid 9.2 release.

Remember, with the postgresql-common infrastructure you can use pg_upgradecluster to create a 9.2 cluster from your existing 8.4/9.1 cluster and run them both in parallel without endangering your data.

For Debian the packages are currently waiting in the NEW queue; I expect them to go into experimental in a day or two. For Ubuntu 12.04 LTS you can get packages from my usual PostgreSQL backports PPA. Note that you need at least postgresql-common version 0.130, which is available in Debian unstable and the PPA now.

I (or rather, the postgresql-common test suite) found one regression: Upgrades do not keep the current value of sequences, but reset them to their default value. I reported this upstream and will provide updated packages as soon as this is fixed.

Read more
pitti

Half a year ago I blogged about the changed expectations and processes to improve the quality of the development release which we discussed at the UDS in Orlando: a promise that we don’t break the development version, regressions are not to be tolerated, and acceptance criteria for Canonical upstreams. For that we introduced the Stable+1 team, actually did some reversions of broken packages, our QA team set up rigorous daily installation image and upgrade tests, and the code development process for Unity and related projects was changed to enforce buildability and passing automatic tests with each and every change to trunk.

To be honest, I was still a tad sceptical back then when this was planned. These were a lot of changes for one cycle, the stable+1 team was a considerable resource investment (starting with three people full-time in the first few months), and not least our friends in the DX team felt thwarted because they had to sit down for a long time developing tests, and then change their habits and practices for development.

So was all that effort worth it?

One word: OMGCRYOUTLOUDYES!!!!

Just a random sample of goodness that this brought:

  • It was nice to not have to sit down for an hour every couple of days to figure out how to get my desktop back after the daily dist-upgrade bricked it.
  • Unity, compiz, and friends were remarkably stable. I still remember the previous cycles where every new version got differently crashy, broke virtual workspaces, and what not. The worst thing that happened this cycle was eternally breaking keybindings (or changing them around), but at least those usually had obvious workarounds.
  • As a result of those, I think we had at least one, maybe two orders of magnitude more testers of the daily development release than in previous cycles. So we got a lot of good bug reports and also patch contributions for smaller issues in Precise which we otherwise would not have discovered.
  • The daily dist-upgrade tests tremendously helped to uncover packaging problems which would break real-world upgrades out there by the dozens. It took months to fix the hardest one: upgrading 10.04 LTS to 12.04 LTS with all universe packages offered in software-center. This beast takes 13 hours to run, so nobody really did manual tests like that in the past cycles.
  • Due to the daily automatic CD image builds we dramatically reduced both the cost of fixing regressions as well as the emergency hackathons during milestone preparations. It is a lot easier to unbreak e. g. LVM setup or OEM install modes on our images when the regression happened just a day ago than to discover it two days before a milestone is due, as again nobody tests these less common modes very often.
So as a result, I really think the investments into QA and the stable+1 teams have already paid off twofold by giving us more time to work on the less critical fixes, avoiding lots of user frustration about broken upgrades, and generally making the daily development a lot more enjoyable. Or, as Rick Spencer puts it: Velocity, velocity, velocity!

Despite these improvements, there are still some things I’m looking forward to in the next cycles: Thanks to Colin Watson we can now use -proposed as a proper staging area, and we used this feature rather extensively in the past month. From my point of view, 90% of the remaining daily dist-upgrade failures were due to packages building on different architectures at vastly different times, or failing on some, but not all, architectures (“arch skew”). This is something you cannot really predict or guard against as a developer when you upload large and potentially harmful packages directly to the development release, so uploading them to the staging area and letting everything build there first will reduce the breakage to zero. This was successfully demonstrated with Unity, GTK, and other packages where arch skew pretty much always causes people to hose their desktop, as well as daily CD images not working.

I’m also looking forward to combining the staging area with lots of automatic tests against reverse dependencies (e. g. testing the installer against a new GTK or pygobject before it lands), something we have just barely dipped our toes into.

I can’t imagine how we were ever able to develop our new releases the old way. :-)

Precise Pangolin^W^WUbuntu 12.04, I’m proud of you! Go out and amaze people!

Read more
pitti

I’m the release engineer in charge of Precise Alpha 1, which is currently being prepared. I must say, this has been a real joy! The new QA paradigm and strategy and the new Stable+1 maintenance team have already achieved remarkable results:

  • The archive consistency reports like component-mismatches, uninstallability, etc. now appear about 20 minutes earlier than in oneiric.
  • CD image builds can now happen 30 minutes earlier after the publisher start, and are much quicker now due to moving to newer machines. We can now build an i386 or amd64 CD image in 8 minutes! Currently they still need to wait for the slow powerpc buildd, but moving to a faster machine there is in progress. These improvements lead to much faster image rebuild turnarounds.
  • Candidate CDs now get automatically posted to the new ISO tracker as soon as they appear.
  • Whenever a new Ubuntu image is built (daily or candidate), they automatically get smoke-tested, so we know that the installer works under some standard scenarios and produces an install which actually boots.
  • Due to the new discipline and the stable+1 team, we had working daily ISOs pretty much every day. In previous Alphas, the release engineer(s) pretty much had to work fulltime for a day or two to fix the worst uninstallability etc., all of this now went away.

All this meant that as a release engineer almost all of the hectic and rather dull work like watching for finished ISO builds and posting them or getting the archive into a releasable state completely went away. We only had to decide when it was a good time for building a set of candidate images, and trigger them, which is just copy&pasting some standard commands.

So I could fully concentrate on the interesting bits like actually investigating and debugging bug reports and regressions. As the Law of Conservation of Breakage dictates, taking away work from the button pushing side just caused the actual bugs to be much harder and earned us e. g. this little gem which took Jean-Baptiste, Andy, and me days to even reproduce properly, and will take much more to debug and fix.

In summary, I want to say a huge “Thank you!” to the Canonical QA team, in particular Jean-Baptiste Lallement for setting up the auto-testing and Jenkins integration, and the stable+1 team (Colin Watson, Mike Terry, and Mathieu Trudel-Lapierre in November) for keeping the archive in such excellent shape and improving our tools!

Read more
pitti

12.04: Testing FTW

I arrived back home in Augsburg, from last week’s Ubuntu Developer Summit in Orlando, FL. As this is a quality/LTS cycle, we pretty much already knew in advance what to do (bug fixing, bug fixing, some boot speed, and did I mention bug fixing?), but still we had many highly interesting and exciting sessions this time, not so much about what we are going to do, but how we are going to build 12.04.

So far our common practice has been to toss everything new into the development release until Feature Freeze and then try to clean up most of the fallout. Many other developers and I have always cried out for more time to fix long-standing bugs and for not introducing breakage in the first place. It seems that now, with 12.04, Ubuntu/Canonical are actually getting serious about it.

(Any resemblance to that postcard from the Kennedy Space Center which I went to last Sunday is of course absolutely unintended and purely coincidental :-) ).

The mission statement is now to have working ISOs, working stable → development and intra-development upgrades every day, and quick and regular cleanup of uninstallable packages, component-mismatches, NBS, etc., backed by a new “stable +1” team of three people on a rotational shift.

The QA team is now setting up daily automatic smoke testing of the installer and other packages which have tests. For the latter we’ll convert some packages to DEP-8, the proposed format for running autopkgtest on them (I’ll do udisks, postgresql-common, pygobject, apport, and jockey soon).

We’ll try to put uploads which might break something (like new libraries) into a staging area first, against which we can run test suites of reverse dependencies before it lands in the new release. As doing this on a large scale still requires infrastructure to be created, we’ll only exercise it for a few packages by uploading to precise-proposed first, but this has a high potential for extension.

We want to commit to fixing major breakage within 3 hours of development time, or otherwise revert the faulty package to the previous version (unless that aggravates problems, such as file conflicts).

Finally, for Canonical upstreams we are introducing “acceptance criteria”, which will hopefully significantly raise the quality and lower the regressions of each Unity etc. release.

So, the mission is clear. In practice we’ll probably have to make some real-life concessions, and Murphy’s law dictates that there still will be some breakage, but we can learn from that as we go.

Let’s build 12.04 LTS!

Read more
pitti

Just took the plunge, using the excellent bandwidth and local mirror at UDS:

$ lsb_release -irc
Distributor ID: Ubuntu
Release: 12.04
Codename: precise

Nothing blew up in my face, so it seems today is a good day to die^Wupgrade.

Read more
pitti

Hot on the heels of the Announcement of the second 9.1 Beta release, there are now packages for it in Debian experimental and backports for Ubuntu 10.04 LTS, 10.10, and 11.04 in my PostgreSQL backports for stable Ubuntu releases PPA.

Warning for upgrades from Beta 1: The on-disk database format changed since Beta 1. So if you already have the Beta 1 packages installed, you need to pg_dumpall your 9.1 clusters (if you still need them) and pg_dropcluster all 9.1 clusters before the upgrade. I added a check to the pre-install script which makes the postgresql-9.1 package fail early on upgrade if you still have existing 9.1 clusters, to avoid data loss.

Read more
pitti

Two weeks ago, PostgreSQL announced the first beta version of the new major 9.1 version, with a lot of anticipated new features like synchronous replication or better support for multilingual databases. Please see the release announcement for details.

Due to my recent move and the Ubuntu Developer Summit it took me a bit to package them for Debian and Ubuntu, but here they are at last. I uploaded postgresql-9.1 to Debian experimental; currently the packages are sitting in the NEW queue, but I’m sure our restless Debian archive admins will get to them in a few days. I also provided builds for Ubuntu 10.04 LTS, 10.10, and 11.04 in my PostgreSQL backports for stable Ubuntu releases PPA.

I provided full postgresql-common integration, i. e. you can use all the usual tools like pg_createcluster, pg_upgradecluster etc. to install 9.1 side by side with your 8.4/9.0 instances, attempt an upgrade of your existing instances to 9.1 without endangering the running clusters, etc. Fortunately this time there were no deprecated configuration options, so pg_upgradecluster does not actually have to touch your postgresql.conf for the 9.0 → 9.1 upgrade.

They pass upstream’s and postgresql-common’s integration test suites, so they should work reasonably well. But please let me know about everything that doesn’t, so that we can get them in perfect shape in time for the final release.

I anticipate that 9.1 will be the default (and only supported) version in the next Debian release (wheezy), and will most likely be the one shipped in the next Ubuntu LTS (in 12.04). It might be that the next Ubuntu release 11.10 will still ship with 9.0, but that pretty much depends on how many extensions get ported to 9.1 by feature freeze.

Read more
pitti

For a test suite I need to create a local SSL-enabled HTTPS server in my Python project. I googled around and found various recipes using pyOpenSSL, but all of those are quite complicated, and I didn’t even get the referenced one to work.

Also, Python has shipped its own built-in SSL module for quite a while. After reading some docs and playing around, I eventually got it to work with a remarkably simple piece of code using the built-in ssl module:

import BaseHTTPServer, SimpleHTTPServer
import ssl

httpd = BaseHTTPServer.HTTPServer(('localhost', 4443), SimpleHTTPServer.SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket (httpd.socket, certfile='path/to/localhost.pem', server_side=True)
httpd.serve_forever()

(I use port 4443 so that I can run the tests as normal user; the usual port 443 requires root privileges).
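
Under Python 3 the same idea looks roughly like this; a sketch only, since the HTTP server modules were renamed there and the ssl module has since grown the SSLContext API which replaces ssl.wrap_socket:

import http.server
import ssl

httpd = http.server.HTTPServer(('localhost', 4443),
                               http.server.SimpleHTTPRequestHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain('path/to/localhost.pem')
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()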

Way to go, Python!

Read more
pitti

PostgreSQL 9.0 with a whole lot of new features and improvements is nearing completion. The first release candidate was just announced.

As with the beta versions, I uploaded RC1 to Debian experimental again. If you want to test/use them on Ubuntu 10.04 (Lucid Lynx), you can get packages from my “PostgreSQL backports for stable Ubuntu releases” PPA. Please let me know if you need them for other releases.

Just for the record, both Debian 6.0 “Squeeze” and Ubuntu 10.10 “Maverick Meerkat” will release with and officially support 8.4 only, as 9.0 is too late for the feature freezes of both. Also, it will take quite some time to update all the packaged extensions to 9.0. As usual, 9.0 will be provided as official backports for both Debian and Ubuntu.

Happy testing!

Read more
pitti

The Debian import freeze is settled, the first rush of major changes went into Maverick, and the dust now has settled a bit. Thus it’s time to turn back some attention to crashes and quality in general.

This morning I created maverick chroots for the Apport retracers, and they are currently processing the backlog. I also uploaded a new Apport package which now enables crash reporting by default again.

Happy segfaulting!

Read more
pitti

I just did the 1000th commit of postgresql-common, the Debian/Ubuntu PostgreSQL management utilities. Wow, what started as a small hack in December 2004 to be able to install several major PostgreSQL versions in parallel has turned out to be a > 600 kB project providing a comprehensive tool set for uniformly setting up, upgrading, and maintaining PostgreSQL database instances from version 7.4 up to the just announced 9.0 beta-1, with a comprehensive test suite that I’m really proud of (it tests just about every aspect, option, and corner case of the installation, integration, upgrade, locale support, and error handling, and takes about half an hour on my system).

The actual commit is rather dull though; it’s just the release/upload tag for version 107, which I just uploaded to Debian unstable (it will hit Ubuntu maverick and backports soon). 107 introduces support for PostgreSQL 9.0, and I fixed up the scripts and tests enough so that all the tests pass now, and thus it’s good for public release.

I also uploaded the 9.0 beta 1 server itself now. It’ll be in Debian’s NEW queue for a bit, and hit experimental in a few days (or hours; recently the ftpmasters have been awesome!). It has a few cool new features (see the announcement), and upstream really appreciates testing and feedback. So, bug reports appreciated!

In particular, if you have existing 8.4 clusters you can just try to pg_upgradecluster them to 9.0 beta 1. Remember, if anything goes wrong, the cluster of the previous version is still intact and untouched, so you can run upgrades as many times as you like and only pg_dropcluster the old one when you’re completely satisfied with the upgrade.

Read more
pitti

PostgreSQL did microrelease updates three weeks ago: 8.4.3, 8.3.10, and 8.1.20 are the ones relevant for Debian/Ubuntu. There haven’t been reports about regressions in Debian or the upstream lists so far, so it’s time to push these into stable releases.

The new releases are in Lucid Beta-2, and hardy/jaunty/karmic-proposed. If you are running PostgreSQL, please upgrade to the proposed versions and give feedback to LP #557408.

Updates for Debian Lenny are prepared as well, and await release team ack.

On a related note, I recently fixed quite a major problem in pg_upgradecluster in postgresql-common 106: It did not copy database-level ACLs and configuration settings (Debian #543506). Fixing this required some re-engineering of the upgrade process. It’s all thoroughly covered by test cases, but practical feedback would be very welcome! Remember, if anything goes wrong, the cluster of the previous version is still intact and untouched, so you can run upgrades as many times as you like and only pg_dropcluster the old one when you’re completely satisfied with the upgrade.

Thanks,

Martin

Read more
pitti

Yesterday PostgreSQL released new security/bug fix microreleases 8.4.2, 8.3.9, and 8.1.19, which fix two security issues and a whole bunch of bugs.

Updates for all supported Ubuntu releases are built in the ubuntu-security-proposed PPA. They pass the upstream and postgresql-common test suites, but more testing is heavily appreciated! Please give feedback in bug LP#496923.

Thanks!

Read more
pitti

It had been broken for two months, since we upgraded to the “new” (not quite any more) gdm in Karmic, but I finally got around to re-doing the gdm patch for supporting a guest session for 2.27. I use it myself a lot for testing stuff with a clean user profile, so I can finally delete my herd of test users again.

One known drawback is that the guest session is not yet restricted by AppArmor rules. I’ll get to this at some point; I filed LP #425793 to keep it on the radar.

Read more
pitti

PostgreSQL recently published new point releases which fix the usual range of important bugs (data loss/wrong results, etc.) and additionally fix another case of insecure “security definer” functions (for SQL functions, the analogue of setuid programs in the file system; CVE-2007-6600). Please see the complete changes for 8.1.18 (Ubuntu 6.06 LTS), 8.3.8 (Ubuntu 8.04 LTS, 8.10, and 9.04), and 8.4.1 (Ubuntu 9.10).

8.4.1 is already in Ubuntu 9.10 and in my PostgreSQL Backports PPA for Ubuntu 8.04 LTS and 9.04. Updates for the other supported Ubuntu releases are currently in -proposed, waiting for testing feedback.

If you use PostgreSQL, please give the -proposed packages some testing and report back in Ubuntu bug #430544. Thanks!

Read more