Canonical Voices

Posts tagged with 'debian'

pitti

At the last UDS we talked about migrating from upstart to systemd to boot Ubuntu, after Mark announced that Ubuntu will follow Debian in that regard. There’s a lot of work to do, but it parallelizes well once developers can easily run systemd on their workstations or in VMs and the system boots up enough to still be able to work with it.

So today I merged our systemd package with Debian again, dropped the systemd-services split (which wasn’t accepted by Debian and will be unnecessary now), and put it into my systemd PPA. Quite surprisingly, this booted a fresh 14.04 VM pretty much right away (of course there’s no Plymouth prettiness). The two main things missing were NetworkManager and lightdm, as these either don’t have an init.d script at all (NM) or it isn’t enabled (lightdm). Thus the PPA also contains updated packages for these two which provide a proper systemd unit. With that, the desktop is pretty much fully working, except for some details like cron not running. I haven’t yet gone through /etc/init/*.conf with a fine-toothed comb to check which upstart jobs need to be ported; that’s now part of the TODO list.
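To give an idea of what the porting looks like: a systemd unit for a service like lightdm is a small declarative file. Here is a minimal sketch (illustrative only, not the actual unit shipped in the PPA):

  [Unit]
  Description=Light Display Manager
  After=systemd-user-sessions.service

  [Service]
  ExecStart=/usr/sbin/lightdm

  [Install]
  Alias=display-manager.service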

So, if you want to help with that, or just test and tell us what’s wrong, take the plunge. In a 14.04 VM (or real machine if you feel adventurous), do

  sudo add-apt-repository ppa:pitti/systemd
  sudo apt-get update
  sudo apt-get dist-upgrade

This will replace systemd-services with systemd, update network-manager and lightdm, and a few libraries. At this point, when you reboot you’ll still get good old upstart. To actually boot with systemd, press Shift during boot to get the grub menu, edit the Ubuntu stanza, and append this to the linux line: init=/lib/systemd/systemd.
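For illustration, the edited line might then look something like this (the kernel version and root device are placeholders for whatever your menu contains):

  linux /boot/vmlinuz-3.13.0-8-generic root=UUID=... ro quiet splash init=/lib/systemd/systemd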

For the record, if pressing shift doesn’t work for you (too fast, VM, or similar), enable the grub menu with

  sudo sed -i '/GRUB_HIDDEN_TIMEOUT/ s/^/#/' /etc/default/grub
  sudo update-grub

Once you are satisfied that your system boots well enough, you can make this permanent by adding the init= option to /etc/default/grub (and possibly removing the comment sign from the GRUB_HIDDEN_TIMEOUT lines) and running sudo update-grub again. To go back to upstart, just edit the file again, remove the init= option, and run sudo update-grub once more.
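For illustration, assuming the stock "quiet splash" defaults, the relevant part of /etc/default/grub would then look something like this:

  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash init=/lib/systemd/systemd"
  #GRUB_HIDDEN_TIMEOUT=0
  #GRUB_HIDDEN_TIMEOUT_QUIET=true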

I’ll be on the Debian systemd/GNOME sprint next weekend, so I feel reasonably well prepared now. :-)

Update: As the comments pointed out, this bricked /etc/resolv.conf. I now uploaded a resolvconf package to the PPA which provides the missing unit (counterpart to the /etc/init/resolvconf.conf upstart job) and this now works fine. If you are in that situation, please boot with upstart, and do the following to clean up:

  sudo rm /etc/resolv.conf
  sudo ln -s ../run/resolvconf/resolv.conf /etc/resolv.conf

Then you can boot back to systemd.

Read more

A couple of weeks ago, Gunnar Wolf mentioned on IRC that his CuBox-i4 had arrived. This resulted in various jealous noises from me; having heard about this device making the rounds at the Kernel Summit, I ordered one for myself back in December, as part of the long-delayed HDification of our home entertainment system and coinciding with the purchase of a new Samsung SmartTV. We've been running an Intel Coppermine Celeron for a decade as a MythTV frontend and encoder (hardware-assisted with a PVR-250), which is fine for SD video, but really doesn't cut it for anything HD. So after finally getting a TV that would showcase HD in all its glory, I figured it was time to upgrade from an S-Video-out, barely-limping-along tower machine to something more modern with HDMI out, eSATA, hardware video decoding, and whose biggest problem is that it's so small it threatens to get lost in the wiring!

Since placing the order, I've been bemused to find that the SmartTV is so smart that it has had a dramatic impact on how we consume media; between that and our decision to not be a boiled frog in the face of DISH Network's annual price increase, the MythTV frontend has become a much less important part of our entertainment center, well before I ever got a chance to lay hands on the intended replacement hardware. But that's a topic for another day.

Anyway, the CuBox-i4 finally arrived in the mail on Friday, so of course I immediately needed to start hacking on it! Like Gunnar, who wrote last week about his own experience getting a "proper" Debian install on the box, I'm not content with running a binary distribution image prepared by some third party; I expect my hardware toys to run official distro packages assembled using official distro tools and, if at all possible, distributed on official distro images for a minimum of hassle.

Whereas Gunnar was willing to settle for using third-party binaries for the bootloader and kernel, however, I'm not inclined to do any such thing. And between my stint at Linaro a few years ago and the recent work on Ubuntu for phones, I do have a little knowledge of Linux on ARM (probably just enough to be dangerous), so I set to work trying to get the CuBox-i4 bootable with stock Debian unstable.

Being such a cutting-edge piece of hardware, that does pose some challenges. Support for the i.MX6 chip is in the process of being upstreamed to U-Boot, but the support for the CuBox-i devices isn't there yet, nor is the support for SPL on i.MX6 (which allows booting the variants of the CuBox-i with a single U-Boot build, instead of requiring a different bootloader build for each flavor). The CuBox-i U-Boot that SolidRun makes available (with source at github) is based on U-Boot 2013.10-rc4, so more than a full release behind Debian unstable, and the patches there don't apply to U-Boot 2014.01 without a bit of effort.

But if it's worth doing, it's worth doing right, so I've taken the time to rebase the CuBox-i patches on top of 2014.01, publishing the results of the rebase to my own github repository and submitting a bug to the Debian U-Boot maintainers requesting its inclusion.

The next step is to get a Debian kernel that not only works, but fully supports the hardware out of the box (a 3.13 generic arm kernel will boot on the machine, but little things like ethernet and hdmi don't work yet). I've created a page in the Debian wiki for tracking the status of this work.

Read more
Michael Hall

Today I reached another milestone in my open source journey: I got my first package uploaded into Debian’s archives.  I’ve managed to get packages uploaded into Ubuntu before, and I’ve attempted to get one into Debian, but this is the first time I’ve actually gotten a contribution in that would benefit Debian users.

I couldn’t have done this without the help and mentorship of Paul Tagliamonte, but I was also helped by a number of others in the Debian community, so a big thank you to everybody who answered my questions and walked me through getting set up with things like Alioth and re-learning how to use SVN.

One last bit of fun: I was invited to join the Linux Unplugged podcast today to talk about yesterday’s post; you can listen to it (and watch IRC comments scroll by) here: http://www.jupiterbroadcasting.com/51842/neckbeard-entitlement-factor-lup-28/

Read more
Michael Hall

Quick overview post today, because it’s late and I don’t have anything particular to talk about today.

First of all, the next vUDS was announced today; we’re a bit late in starting it off, but we wanted to have another one early enough to still be useful for the Trusty release cycle.  Read the linked mailing list post for details about where to find the schedule and how to propose sessions.

I pushed another update to the API website today that does a better job balancing the 2-column view of namespaces and fixes the sub-nav text to match the WordPress side of things. This was the first deployment in a while to go off without a problem, thanks to having a new staging environment created last time.  I’m hoping my deployment problems on this are now far behind me.

I took a task during my weekly Core Apps update call to look more into the Terminal app’s problem with enter and backspace keys, so I may be pinging some of you in the coming week about it to get some help.  You have been warned.

Finally, I decided a few weeks ago to spread out my after-hours community activity beyond Ubuntu, and I’ve settled on the Debian new maintainers Django website as somewhere I can easily start.  I’ve got a git repo where I’m starting to write the first unit tests for that website, and as part of that I’m also working on Debian packaging for the Python model-mommy library, which we use extensively in Ubuntu’s Django website. I’m having to learn (or learn more about) Debian packaging, Git workflows and Debian’s processes and community, all of which are going to be good for me, and I’m looking forward to the challenge.

Read more
Timo Jyrinki

Background

I upgraded from Linux 3.8 to 3.11 along with newer Mesa, X.Org and Intel driver versions recently, and found that a small workaround was needed because of upstream changes.

The upstream change was “Add "Automatic" mode for "Broadcast RGB" property”, with Automatic as the new default. This is a sensible default, since many (most?) TVs default to the more limited 16-235 range, and continuing to default to Full on the driver side would mean wrong colors on the TV. I've set my screen to support the full 0-255 range available, so as not to cut down the number of available shades of colors.

Unfortunately it seems the Automatic setting does not work for my HDMI input, i.e. blacks become grey since the driver still outputs the more limited range. Maybe there is something to improve on the driver side, but I'd guess it's more about my 2008 Sony TV actually having a mode for which the standard suggests limited range. I remember the TV did default to limited range, so maybe the EDID data from the TV does not change when setting the RGB range to Full.

I hope the Automatic setting works to offer full range on newer screens and the modes they have, but that's probably up to the manufacturers and standards.

Below is an illustration of the correct setting on my Haswell CPU. When the Broadcast RGB is left to its default Automatic setting, the above image is displayed. When set to Full, the image below with deeper blacks is seen instead. I used manual settings on my camera so it's the same exposure.


Workaround

For me the workaround has evolved to the following so far. Create a /etc/X11/Xsession.d/95fullrgb file:
 
if [ "$(/usr/bin/xrandr -q --prop | grep 'Broadcast RGB: Full' | wc -l)" = "0" ]; then
    /usr/bin/xrandr --output HDMI3 --set "Broadcast RGB" "Full"
fi
And since I'm using lightdm, adding the following to /etc/lightdm/lightdm.conf means the flicker only happens once during bootup:

display-setup-script=/etc/X11/Xsession.d/95fullrgb

Important: when using the LightDM setting, set the executable bit (chmod +x) on /etc/X11/Xsession.d/95fullrgb for it to work. Obviously, also check your output name; for me it was HDMI3.
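To find your output name and the currently active value, a quick query helps (disconnected outputs will show up too):

xrandr -q --prop | grep -e connected -e 'Broadcast RGB'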

If there is no situation where it'd switch back to the "Limited 16:235" setting on its own, the display manager script should be enough; having it in /etc/X11/Xsession.d as well is redundant and slows login down. For me it maybe went from 2 seconds to 3 seconds, since executing the xrandr query is not cheap.

Misc

Note that unrelated to Full range usage, the Limited range at the moment behaves incorrectly on Haswell until the patch in bug #71769 is accepted. That means, the blacks are grey in Limited mode even if the screen is also set to Limited.

I'd prefer there would be a kernel parameter for the Broadcast RGB setting, although my Haswell machine does boot so fast I don't get to see too many seconds of wrong colors...

Read more
pitti

The current default D-BUS configuration (at least on Ubuntu) disallows monitoring method calls on the system D-BUS (dbus-monitor --system), which makes debugging rather cumbersome; this worked years ago, but apparently got changed for security reasons. It took me half an hour to figure out how to enable this for debugging, and as this has annoyingly little Google juice (I didn’t find any solution), let’s add some.

The trick seems to be to set a global policy which allows eavesdropping on any method call after the individual /etc/dbus-1/system.d/*.conf files have applied their restrictions, for which there is already a convenient facility. Create a file /etc/dbus-1/system-local.conf with

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE busconfig PUBLIC
  "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
  "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">

<busconfig>
  <policy user="root">
    <!-- Allow everything to be sent -->
    <allow send_destination="*" eavesdrop="true"/>
    <!-- Allow everything to be received -->
    <allow eavesdrop="true"/>
    <allow send_type="method_call"/>
  </policy>
</busconfig>

Then sudo dbus-monitor --system displays everything. Needless to say, you don’t want this file on any production system!

Does anyone know an easier way? My first naive stab was to run dbus-monitor as root, but that doesn’t make any difference at all.

Update: Turns out this is already described in a better way at https://wiki.ubuntu.com/DebuggingDBus. Yay me for not finding that… I updated the above recipe to limit access to root, which is indeed much better.

Read more
pitti

I did a 0.2.2 maintenance release for umockdev to fix building with Vala 0.16.1, gcc 4.8 (the changed sizeof behaviour caused segfaults), and current udev releases (umockdev-record stumbled over the new “link priority” fields of udevadm). There are also a couple of bug fixes, but no new features.

Read more
Timo Jyrinki

If you're running an Android device with GNU userland Linux in a chroot and need a full network access over USB cable (so that you can use your laptop/desktop machine's network connection from the device), here's a quick primer on how it can be set up.

Back when doing Openmoko hacking, one always plugged in the USB cable first and forwarded the network over it, or, as I did later, forwarded the network over Bluetooth. It was mostly because the WiFi was quite unstable with many of the kernels.

I recently found myself using a chroot on a Nexus 4 without working WiFi, so instead of my usual WiFi usage I needed network over USB... trivial, of course, except that there's Android in the way and I'm an Android newbie. Thanks to ZDmitry on Freenode, I got the bits for the Android part and got it working.

On the device, create e.g. data/usb.sh with the following contents.

#!/system/xbin/sh
CHROOT="/data/chroot"

ip addr add 192.168.137.2/30 dev usb0
ip link set usb0 up
ip route delete default
ip route add default via 192.168.137.1
setprop net.dns1 8.8.8.8
echo 'nameserver 8.8.8.8' >> $CHROOT/run/resolvconf/resolv.conf

On the host, execute the following:
adb shell setprop sys.usb.config rndis,adb
adb shell data/usb.sh
sudo ifconfig usb0 192.168.137.1
sudo iptables -A POSTROUTING -t nat -j MASQUERADE -s 192.168.137.0/24
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -P FORWARD ACCEPT

This works at least with an Ubuntu saucy chroot. The main difference in some other distro might be whether resolv.conf has moved to /run or not. You should now be all set up to browse and apt-get stuff from the device again.
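As a quick sanity check that the forwarding works, ping something on the device through the host's connection (assuming adb is still connected over the same cable):

adb shell ping -c 3 8.8.8.8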

Update: Clarified that this is to forward the desktop/laptop's network connection to the device so that network is accessible from the device over USB.
Update2, 09/2013: It's also possible to get this working on the newer flipped images. Remove the "$CHROOT" from the nameserver echo line and it should be fine. In my brief testing the setting somehow got reset after a while, at which point another run of data/usb.sh on the device restored the connection.

Read more

Congrats to the Debian release team on the new release of Debian 7.0 (wheezy)!

Leading up to the release, a meme making the rounds on Planet Debian has been to play a #newinwheezy game, calling out some of the many new packages in 7.0 that may be interesting to users. While upstart as a package is nothing new in wheezy, the jump to upstart 1.6.1 from 0.6.6 is quite a substantial change. It does bring with it a new package, mountall, which by itself isn't terribly interesting because it just provides an upstart-ish replacement for some core scripts from the initscripts package (essentially, /etc/rcS.d/*mount*). Where things get interesting (and, typically, controversial) is the way in which mountall leverages plymouth to achieve this.

What is plymouth?

There is a great deal of misunderstanding around plymouth, a fact I was reminded of again while working to get a modern version of upstart into wheezy. When Ubuntu first started requiring plymouth as an essential component of the boot infrastructure, there was a lot of outrage from users, particularly from Ubuntu Server users, who believed this was an attempt to force pretty splash screen graphics down their throats. Nothing could be further from the truth.

Plymouth provides a splash screen, but that's not what plymouth is. What plymouth is, is a boot-time I/O multiplexer. And why, you ask, would upstart - or mountall, whose job is just to get the filesystem mounted at boot - need a boot-time I/O multiplexer?

Why use plymouth?

The simple answer is that, like everything else in a truly event-driven boot system, filesystem mounting is handled in parallel - with no defined order. If a filesystem is missing or fails an fsck, mountall may need to interact with the user to decide how to handle it. And if there's more than one missing or broken filesystem, and these are all being found in parallel, there needs to be a way to associate each answer from the user to the corresponding question from mountall, to avoid crossed signals... and lost data.

One possible way to handle this would be for mountall to serialize the fsck's / mounts. But this is a pretty unsatisfactory answer; all other things (that is, boot reliability) being equal, admins would prefer their systems to boot as fast as possible, so that they can get back to being useful to users. So we reject the idea of solving the problem of serializing prompts by making mountall serialize all its filesystem checks.

Another option would be to have mountall prompt directly on the console, doing its own serialization of the prompts (even though successful mounts / fscks continue to be run in parallel). This, too, is not desirable in the general case, both because some users actually would like to have pretty splash screens at boot time, and this would be incompatible with direct console prompting; and because mountall is not the only piece of software that needs to prompt at boot time (see also: cryptsetup).

Plymouth: not just a pretty face

Enter plymouth, which provides the framework for serializing requests to the user while booting. It can provide a graphical boot splash, yes; ironically, even its own homepage suggests that this is its purpose. But it can also provide a text-only console interface, which is what you get automatically when booting without a splash boot argument, or even handle I/O over a serial console.
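To get a feel for what this looks like from a boot script's point of view, the plymouth client can be driven with commands along these lines (a rough sketch; mountall actually talks to plymouth through its client library rather than shelling out like this):

plymouth display-message --text="Checking disk 1 of 2"
plymouth ask-for-password --prompt="Passphrase for encrypted swap"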

Which is why, contrary to the initial intuitions of the s390 porters upon seeing this package, plymouth is available for all of Debian's Linux architectures in wheezy, s390 and s390x included, providing a consistent architecture for boot-time I/O for systems that need it - which is any machine using a modern boot system, such as upstart or systemd.

Room for improvement

Now, having a coherent architecture for your boot I/O is one thing; having a bug-free splash screen is another. The experience of plymouth in Ubuntu has certainly not been bug-free, with plymouth making significant demands of the kernel video layer. Recently, the binary video driver packages in Ubuntu have started to blacklist the framebuffer kernel driver entirely due to stability concerns, making plymouth splash screens a non-starter for users of these drivers and regressing the boot experience.

One solution for this would be to have plymouth offload the video handling complexity to something more reliable and better tested. Plymouth does already have an X backend, but we don't use that in Ubuntu because even if we do have an X server, it normally starts much later than when we would want to display the splash screen. With Mir on the horizon for Ubuntu, however, and its clean separation between system and session compositors, it's possible that using a Mir backend - that can continue running even after the greeter has started, unlike the current situation where plymouth has to cede the console to the display manager when it starts - will become an appealing option.

This, too, is not without its downsides. Needing to load plymouth when using crypted root filesystems already makes for a bloated initramfs; adding a system compositor to the initramfs won't make it any better, and introduces further questions about how to hand off between initramfs and root fs. Keeping your system compositor running from the initramfs post-boot isn't really ideal, particularly for low-memory systems; whereas killing the system compositor and restarting it will make it harder to provide a flicker-free experience. But for all that, it does have its architectural appeal, as it lets us use plymouth as long as we need to after boot. As the concept of static runlevels becomes increasingly obsolete in the face of dynamic systems, we need to design for the world where the distinction between "booting" and "booted" doesn't mean what it once did.

Read more
Timo Jyrinki

Packages

I quite like the current status of Qt 5 in Debian and Ubuntu (the links are to the qtbase packages; there are ca. 15 other modules as well). Despite Qt 5 being bleeding edge, and Ubuntu having had the need to use it before even the first stable release came out in December, the co-operation with Debian has gone well. Debian is now getting the first Qt 5 uploads into experimental, with unstable to follow later on. My contributions to pkg-kde git on the modules have been welcomed, and even though more work has been done there by others, there haven't been drastic changes that would cause too big transition problems on the Ubuntu side. It has of course helped to ask others what they want, like the whole usage of qtchooser. Now with Qt 5.0.2 I've been able to mostly re-sync all newer changes / fixes to my packaging from Debian to Ubuntu and vice versa.

There will remain some delta, as pkg-kde plans to ask for a complete transition to qtchooser so that all Qt-using packages would declare the Qt version either via the QT_SELECT environment variable (preferred) or via a package dependency (qt5-default or qt4-default). As a temporary change related to that, Debian will have a debhelper modification that defaults QT_SELECT to qt4 for the duration of the transition. Meanwhile, Ubuntu already shipped the 13.04 release with Qt 5, so a shortcut was taken there instead to prevent any Qt 4 package breakage. However, after the transition period in Debian is over, that small delta can again be removed.
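To illustrate, once qtchooser is in place the version selection looks like this (assuming neither qt4-default nor qt5-default is installed):

QT_SELECT=qt5 qmake --version   # runs the Qt 5 qmake
QT_SELECT=qt4 qmake --version   # runs the Qt 4 qmake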

I will also need to continue pushing any useful packaging I do to Debian. I pushed qtimageformats and qtdoc last week, but I know I'm still behind with some "possibly interesting" git snapshot modules like qtsensors and qtpim.

Patches

More delta exists in the form of multiple patches related to the recent Ubuntu Touch efforts. I do not think they are of immediate interest to Debian – let's start packaging Qt 5 apps for Debian first. However, almost all of those patches have already been upstreamed to be part of Qt 5.1 or Qt 5.2, or will be later on. Some already made it into 5.0.2.

A couple of months ago Ubuntu did have some patches hanging around with no clear author information. This was a result of the heated preparation for the Ubuntu Touch launches, and the fact that patches flew (too) quickly into place in various PPAs. I started hunting down the authors, and the situation turned out to be better than I thought. About half of the patches were already upstreamed, and work on properly upstreaming the other ones was swiftly started after my initial contact. Proper DEP3 fields do help in understanding the overall situation. There are now 10 Canonical individuals in the upstream group of contributors, and at last week's sprint it turned out more people will be joining them to upstream their future patches.

Nowadays almost all the requests I get for including patches from developers are for stuff that was already upstreamed, like the XEmbed support in qtbase. This is how it should be.

One big patch that is still Ubuntu-only is the Unity appmenu support. There was a temporary solution for 13.04 that forward-ported the Qt 4 way of doing it. This will however be removed from the first 13.10 ('saucy') upload, as it's not upstreamable (the old way of supporting Unity appmenus was deliberately dropped from Qt 5). A re-implementation via the QPA plugin support is on its way, but it may be that development version users will be without appmenu support for some time. Another big patch is related to qtwebkit's device pixel ratio, which will need to be fixed. Apart from these two areas of work that need to be followed through, the patch situation is quite nice, as mentioned.

Conclusion

Free software will achieve world domination, and I'm happy to be part of it.

Read more
Timo Jyrinki

Whee!! zy

Congrats and thanks to everyone,

Debian 7.0 Wheezy released

Updating my trusty orion5x box as we speak. No better way to spend a (jetlagged) Sunday.

Read more
pitti

Paul Wise poked me this morning about uploading fatrace (“file access trace”, see the original announcement for details) to Debian, thanks for the reminder!

So I filed an Intent To Package, and will upload it in a few days, unless some discussion evolves.

I also took the opportunity to do some modernization: the power-usage-report script now uses the current PowerTop 2.x instead of the old 1.13, has moved to Python 3, and includes the “process device activity” in the report. I released this as 0.5. The actual fatrace binary didn’t change its behaviour; it just got some code optimizations. Thanks to Yann Droneaud for those.

Read more
pitti

PostgreSQL just released security updates. 9.1 (as found in Debian testing and unstable and Ubuntu 11.10 and later) is affected by a critical remote vulnerability which potentially allows anyone who can access the TCP port (without credentials) to corrupt local files. If your PostgreSQL database exposes the TCP port to any potentially untrusted location, please shut down your servers and update now!

PostgreSQL 8.4 for Debian stable (squeeze) and Ubuntu 8.04 LTS and 10.04 LTS also got an update, but these are much less urgent.

Debian and Ubuntu advisories for all stable releases, as well as Debian testing are going out as we speak. The updates are already on security.debian.org and security.ubuntu.com.

I also uploaded updates for Debian unstable (8.4, 9.1, and 9.2 in experimental) and the Ubuntu backports PPA, but it will take a bit for these to build as we don’t have embargoed staging builds for those. Christoph updated the apt.postgresql.org repository as well.

Warning: If you use the current Ubuntu raring Beta-2 candidate images, you will still have the old version. So if you do anything serious with those installations, please make sure to upgrade immediately.

Update: Debian and Ubuntu security announcements have been sent out, and all packages in the backports PPA are built.

Please see the official FAQ if you want to know some more details about the nature of the vulnerabilities.

Read more
pitti

I just pushed out a new python-dbusmock release 0.6.

Calling a method on the mock now emits a MethodCalled signal on the org.freedesktop.DBus.Mock interface. In some cases this is easier to track than parsing the mock’s log or using GetMethodCalls. Thanks to Lars Uebernickel for this.
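For example, you can watch for these signals with an ordinary match rule, assuming your mock runs on the session bus:

  $ dbus-monitor "type='signal',interface='org.freedesktop.DBus.Mock',member='MethodCalled'"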

DBusMockObject.AddTemplate() and DBusTestCase.spawn_server_template() can now load local templates from your own project by specifying a path to a *.py file as template name. Thanks to Lucas De Marchi for this feature.

I also wrote a quite comprehensive template for systemd’s logind. It stubs out the power management functionality as well as user/seat/session objects, and is convincing enough for loginctl. Some bits like AttachDevice are missing, as these sound unlikely to be required for D-BUS mock tests, but please let me know if you need anything else.

The mock processes now terminate automatically if their connected D-BUS goes down, as advertised in the documentation.

You can get the new tarball from Launchpad, and I uploaded it to Debian experimental now.

Enjoy!

Read more
pitti

Many libraries build a GObject introspection repository (*.gir) these days which allows the library to be used from many scripting (Python, JavaScript, Perl, etc.) and other (e. g. Vala) languages without the need for manually writing bindings for each of those.

One issue that I hear surprisingly often is “there is zero documentation for those bindings”. Tools for building documentation out of a .gir have existed for a long time already, just far too many people seem to not know about them.

For example, to build Yelp XML documentation out of the libnotify bindings for Python:

  $ g-ir-doc-tool --language=Python -o /tmp/notify-doc /usr/share/gir-1.0/Notify-0.7.gir

Then you can call yelp /tmp/notify-doc to browse the documentation. You can of course also use the standard Mallard tools to convert them to HTML for sticking them on a website:

  $ cd /tmp/notify-doc
  $ yelp-build html .

Admittedly they are far from pretty, and there are still lots of refinements that should be done for the documentation itself (like adding language-specific examples) and also for the generated result (prettification, dynamic search, and what not), but it’s certainly far from “nothing”, and a good start.

If you are interested in working on this, please show up in #introspection or discuss it on bugzilla, desktop-devel-list@, or the library specific lists/bug trackers.

Read more
pitti

I just released umockdev 0.2.

The big new feature of this release is support for evdev ioctls. I. e. you can now record what e. g. X.org is doing to touchpads, touch screens, etc.:

  $ umockdev-record /dev/input/event15 > /tmp/touchpad.umockdev
  # umockdev-record -i /tmp/touchpad.ioctl /dev/input/event15 -- Xorg -logfile /dev/null

and load that back into a testbed with X.org using the dummy driver:

  cat <<EOF > xorg-dummy.conf
  Section "Device"
        Identifier "test"
        Driver "dummy"
  EndSection
  EOF

  $ umockdev-run -l /tmp/touchpad.umockdev -i /dev/input/event15=/tmp/touchpad.ioctl -- \
       Xorg -config xorg-dummy.conf -logfile /tmp/X.log :5

Then e. g. DISPLAY=:5 xinput will recognize the simulated device. Note that Xvfb won’t work as that does not use udev for device discovery, but only adds the XTest virtual devices and nothing else, so you need to use the real X.org with the dummy driver to run this as a normal user.

This enables easier debugging of new kinds of input devices, as well as writing tests for handling multiple touchscreens/monitors, integration tests of Wacom devices, and so on.

This release now also works with older automakes and Vala 0.16, so that you can use this from Ubuntu 12.04 LTS. The daily PPA now also has packages for that.

Attention: This version does not work any more with recorded ioctl files from version 0.1.

More detailed list of changes:

  • umockdev-run: Fix running of child program to keep stdin.
  • preload: Fix resolution of “/dev” and “/sys”
  • ioctl_tree: Fix endless loop when the first encountered ioctl was unknown
  • preload: Support opening a /dev node multiple times for ioctl emulation (issue #3)
  • Fix parallel build (issue #2)
  • Support xz compressed ioctl files in umockdev_testbed_load_ioctl().
  • Add example umockdev and ioctl files for a gphoto camera and an MTP capable mobile phone.
  • Fix building with automake 1.11.3 and Vala 0.16.
  • Generalize ioctl recording and emulation for ioctls with simple structs, i. e. no pointer fields. This makes it much easier to add more ioctls in the future.
  • Store return values of ioctls in records, as they are not always 0 (like EVIOCGBIT)
  • Add support for ioctl ranges (like EVIOCGABS) and ioctls with variable length (like EVIOCGBIT).
  • Add all reading evdev ioctls, for recording and mocking input devices like touch pads, touch screens, or keyboards. (issue #1)

Read more
pitti

What is this?

umockdev is a set of tools and a library to mock hardware devices for programs that handle Linux hardware devices. It also provides tools to record the properties and behaviour of particular devices, and to run a program or test suite under a test bed with the previously recorded devices loaded.

This allows developers of software like gphoto or libmtp to receive these records in bug reports and recreate the problem on their system without having access to the affected hardware, as well as to write regression tests for those programs that do not need any particular privileges and thus can run in a standard make check.

After working on it for several weeks and lots of rumbling on G+, it’s now useful and documented enough for the first release 0.1!

Component overview

umockdev consists of the following parts:

  • The umockdev-record program generates text dumps (conventionally called *.umockdev) of some specified, or all of the system’s devices and their sysfs attributes and udev properties. It can also record ioctls that a particular program sends and receives to/from a device, and store them into a text file (conventionally called *.ioctl).
  • The libumockdev library provides the UMockdevTestbed GObject class which builds sysfs and /dev testbeds, provides API to generate devices, attributes, properties, and uevents on the fly, and can load *.umockdev and *.ioctl records into them. It provides VAPI and GI bindings, so you can use it from C, Vala, and any programming language that supports introspection. This is the API that you should use for writing regression tests. You can find the API documentation in docs/reference in the source directory.
  • The libumockdev-preload library intercepts access to /sys, /dev/, the kernel’s netlink socket (for uevents) and ioctl() and re-routes them into the sandbox built by libumockdev. You don’t interface with this library directly, instead you need to run your test suite or other program that uses libumockdev through the umockdev-wrapper program.
  • The umockdev-run program builds a sandbox using libumockdev, can load *.umockdev and *.ioctl files into it, and run a program in that sandbox. I. e. it is a CLI interface to libumockdev, which is useful in the “debug a failure with a particular device” use case if you get the text dumps from a bug report. This automatically takes care of using the preload library, i. e. you don’t need umockdev-wrapper with this. You cannot use this program if you need to simulate uevents or change attributes/properties on the fly; for those you need to use libumockdev directly.

Example: Record and replay PtP/MTP USB devices

So how do you use umockdev? For the “debug a problem” use case you usually don’t want to write a program that uses libumockdev, but just use the command line tools. Let’s capture some runs from libmtp tools, and replay them in a mock environment:

  • Connect your digital camera, mobile phone, or other device which supports PtP or MTP, and locate it in lsusb. For example
      Bus 001 Device 012: ID 0fce:0166 Sony Ericsson Xperia Mini Pro
  • Dump the sysfs device and udev properties:
      $ umockdev-record /dev/bus/usb/001/012 > mobile.umockdev
  • Now record the dynamic behaviour (i. e. usbfs ioctls) of various operations. You can store multiple different operations in the same file, which will share the common communication between them. For example:
      $ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-detect
      $ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-emptyfolders
  • Now you can disconnect your device, and run the same operations in a mocked testbed. Please note that /dev/bus/usb/001/012 merely echoes what is in mobile.umockdev and it is independent of what is actually in the real /dev directory. You can rename that device in the generated *.umockdev files and on the command line.
      $ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-detect
      $ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-emptyfolders

Example: using the library to fake a battery

If you want to write regression tests, it’s usually more flexible to use the library instead of calling everything through umockdev-run. As a simple example, let’s pretend we want to write tests for upower.

Batteries, and power supplies in general, are simple devices in the sense that userspace programs such as upower only communicate with them through sysfs and uevents. No /dev nor ioctls are necessary. docs/examples/ has two example programs showing how to use libumockdev to create a fake battery device, change it to low charge, send an uevent, and run upower on a local test system D-BUS in the testbed, watching what happens with upower --monitor-detail. battery.c shows how to do that with plain GObject in C; battery.py is the equivalent program in Python that uses the GI binding. You can just run the latter like this:

  umockdev-wrapper python3 docs/examples/battery.py

and you will see that upowerd (which runs on a temporary local system D-BUS in the test bed) will report a single battery with 75% charge, which gets down to 2.5% a second later.

The gist of it is that you create a test bed with

  UMockdevTestbed *testbed = umockdev_testbed_new ();

and add a device with certain sysfs attributes and udev properties with

    gchar *sys_bat = umockdev_testbed_add_device (
            testbed, "power_supply", "fakeBAT0", NULL,
            /* attributes */
            "type", "Battery",
            "present", "1",
            "status", "Discharging",
            "energy_full", "60000000",
            "energy_full_design", "80000000",
            "energy_now", "48000000",
            "voltage_now", "12000000",
            NULL,
            /* properties */
            "POWER_SUPPLY_ONLINE", "1",
            NULL);

You can then e. g. change an attribute and synthesize a “change” uevent with

  umockdev_testbed_set_attribute (testbed, sys_bat, "energy_now", "1500000");
  umockdev_testbed_uevent (testbed, sys_bat, "change");

With Python or other introspected languages, or in Vala it works the same way, except that it looks a bit leaner due to “proper” object semantics.
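For instance, the battery above condenses to roughly this in Python (a shortened sketch of what docs/examples/battery.py does; note the array-based add_devicev variant, which is what the introspected bindings get instead of the C varargs add_device):

  from gi.repository import UMockdev

  testbed = UMockdev.Testbed.new()
  # attributes and properties are passed as flat name/value lists
  sys_bat = testbed.add_devicev('power_supply', 'fakeBAT0', None,
                                ['type', 'Battery',
                                 'present', '1',
                                 'status', 'Discharging',
                                 'energy_full', '60000000',
                                 'energy_full_design', '80000000',
                                 'energy_now', '48000000',
                                 'voltage_now', '12000000'],
                                ['POWER_SUPPLY_ONLINE', '1'])

  # drop the charge and notify userspace
  testbed.set_attribute(sys_bat, 'energy_now', '1500000')
  testbed.uevent(sys_bat, 'change')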

Packages

I have a packaging branch for Ubuntu and a recipe to do daily builds with the latest upstream code into my daily builds PPA (for 12.10 and raring). I will soon upload it to Raring proper, too.

What’s next?

The current set of features should already get you quite far for a range of devices. I’d love to get feedback from you if you use this for anything useful, in particular how to improve the API, the command line tools, or the text dump format. I’m not really happy with the split between umockdev (sys/dev) and ioctl files and the relatively complicated CLI syntax of umockdev-record, so any suggestion is welcome.

One use case that I have for myself is to extend the coverage of ioctls for input devices such as touch screens and Wacom tablets, so that we can write some tests for gnome-settings-daemon plugins.

I also want to find a way to pass ioctls back to the test suite/calling program instead of having to handle them all in the preload library, which would make it a lot more flexible. However, due to the nature of the ioctl ABI this is not easy.

Where to go to

The code is hosted on github in the umockdev project; this started out as a systemd branch to add this functionality to libudev, but after a discussion with Kay we decided to keep it separate. But I kept it in git anyway, given how popular it is today. For the bzr lovers, Launchpad has an import at lp:umockdev.

Release tarballs will be on Launchpad as well. Please file bugs and enhancement requests in the GitHub tracker.

Finally, if you have questions or want to discuss something, you can always find me on IRC (pitti on Freenode or GNOME).

Thanks for your attention and happy testing!

Read more
pitti

When writing system integration tests it often happens that I want to mount some tmpfses over directories like /etc/postgresql/ or /home, and run the whole script with an unshared mount namespace so that (1) it does not interfere with the real system, and (2) is guaranteed to clean up after itself (unmounting etc.) after it ends in any possible way (including SIGKILL, which breaks usual cleanup methods like “trap”, “finally”, “def tearDown()”, “atexit()” and so on).

In gvfs’ and postgresql-common’s tests, which both have been around for a while, I prepare a set of shell commands in a variable and pipe that into unshare -m sh, but that has some major problems: It doesn’t scale well to large programs, looks rather ugly, breaks syntax highlighting in editors, and it destroys the real stdin, so you cannot e. g. call a “bash -i” in your test for interactively debugging a failed test.

I just changed postgresql-common’s test runner to use unshare/tmpfses as well, and needed a better approach. What I eventually figured out preserves stdin, $0, and $@, and still looks like a normal script (i. e. not just a single big string). It still looks a bit hackish, but I can live with that:

#!/bin/sh
set -e
# call ourselves through unshare in a way that keeps normal stdin, $0, and CLI args
unshare -uim sh -- -c "`tail -n +7 $0`" "$0" "$@"
exit $?

# unshared program starts here
set -e
echo "args: $@"
echo "mounting tmpfs"
mount -n -t tmpfs tmpfs /etc
grep /etc /proc/mounts
echo "done"

As Unix/Linux’ shebang parsing is rather limited, I didn’t find a way to do something like

#!/usr/bin/env unshare -m sh

If anyone knows a trick which avoids the “tail -n +7” hack and having to pay attention to passing around “$@”, I’d appreciate a comment on how to simplify this.

Read more

Upstart in Debian

Good news, everyone!

So as of last Sunday, this works on all Linux archs in Debian unstable and gives you a modern version of upstart:

echo 'Yes, do as I say!' | apt-get -o DPkg::options=--force-remove-essential -y --force-yes install upstart

Thanks to the ifupdown, sysvinit, and udev maintainers for their cooperation in getting upstart support in place; to the Debian release team for accommodating the late changes needed for upstart to be supported in wheezy; and to Scott for his past maintenance of upstart in Debian.

Benchmarking

One of the consequences is that it's now possible to do meaningful head-to-head comparisons of boot speed between sysvinit (with startpar), upstart, and systemd. At one time or another people have tested systemd vs. sysvinit when using bash as /bin/sh, and upstart vs. sysvinit, and systemd vs. sysvinit+startpar, and there are plenty of bootcharts floating around showing results of one init system or another on one distro or another, but I'm not aware of anyone having done a real, fair comparison of the three solutions, changing nothing but the init system.

I've done some initial comparisons in a barebones sid VM, and the results are definitely interesting. Sysvinit with startpar (the default in Debian) can boot a stock sid install, with no added services, in somewhere between 3.37 and 3.42 seconds (three runs). That's not a whole lot, but on the other hand this is a system with a single filesystem and no interesting services yet. Is this really as fast as we can boot?

No, even this minimal system can boot faster. Testing with upstart shows that upstart can do the same job in between 3.03 and 3.19 seconds (n=3, mean=3.09). This confirms what we'd already seen in Ubuntu, that it makes a difference to boot speed to have filesystem mounting handled by an integrated process that understands the whole system instead of as a group of serialized shell scripts.

What about systemd? The same test gives a boot time between 2.32 and 2.85 seconds (n=4, mean=2.48). Interesting; what would make systemd faster than upstart in sid? Well, a quick look at the system shows one possible contributing factor: the rsyslog package in Debian has a systemd unit file, but not an upstart job file. Dropping in the /etc/init/rsyslog.conf from Ubuntu has a noticeable impact, and brings the upstart boot time down nearer to that of systemd (2.78-3.03s, n=5, mean=2.92). Besides telling me that it's time to start spamming Debian maintainers with wishlist bugs asking them to include upstart jobs in their packages, this suggests that the remaining difference in boot time may be due to the outstanding init scripts in rcS.d that are made built-in no-ops by systemd but not (yet) by upstart in Debian (e.g., hwclock, hostname, udev-mtab). (In Ubuntu, /etc/rcS.d has long since been emptied out in favor of upstart jobs in the common case, since the time it takes to get to runlevel=2 is definitely a major issue for boot speed and boot parallelization.)
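For any Debian maintainer wondering what's involved: an upstart job for a simple forking daemon is only a handful of lines. A sketch from memory (not a copy of Ubuntu's actual /etc/init/rsyslog.conf):

# rsyslog - system logging daemon
start on filesystem
stop on runlevel [06]
expect fork
respawn
exec rsyslogd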

It also gives the lie to the claim that's been made in various places that spawning shells is a major bottleneck for upstart vs. systemd. More study is certainly needed to confirm this, but at least this naive first test suggests that in spite of the purported benefits of hard-coding boot-time policies in C code, upstart with its default degree of runtime configurability is at least in the ballpark of systemd. Indeed, when OpenSuSE switched from upstart to systemd, it seems that something else in the stack managed to nullify any benefit from improving the boot-time performance of apparmor. Contrary to what some would have you believe, systemd is not some kind of silver bullet for boot speed. Upstart, with its boot-time flexibility and its long history of real-world testing in Ubuntu, is a formidable competitor to systemd in the boot speed department - and a solid solution to the many longstanding boot-time ordering bugs in Debian, which still affect users of sysvinit.

I've published the bootcharts for the above tests here. Between the fact that Debian's bootchart package logs by default to /var/log/bootchart.tgz (thus overwriting on every boot) and the fact that these tests are in a VM, I haven't bothered to include the raw data, just the bootcharts themselves. The interested reader can probably generate more interesting boot charts of their own anyway - in particular, it will be interesting to see how the different init systems perform with more complicated filesystem layouts, or when booting a less trivial set of services.

Other musings

The boot charts have been created with the bootchart package rather than with bootchart2. For one thing, it turns out that bootchart2 includes systemd units, not init scripts; so when replacing bootchart with bootchart2, the non-matching init script is left behind and systemd in particular gets terribly confused. This is now reported as Bug #694403.

In an amusing twist, while I was experimenting with bootchart2, I also noticed that having systemd installed would slow down booting with other init systems, because systemd installs udev rules which take a noticeable amount of time to run a helper command at boot even though the helper should be a no-op. So if you're doing boot speed testing of other init systems, be sure you don't have systemd on the system at the time!

Read more
Barry Warsaw

UDS Update #1 - OAuth

For UDS-R for Raring (i.e. Ubuntu 13.04) in Copenhagen, I sponsored three blueprints.  These blueprints represent most of the work I will be doing for the next 6 months, as we're well on our way to the next LTS, Ubuntu 14.04.

I'll provide some updates to the other blueprints later, but for now, I want to talk about OAuth and Python 3.  OAuth is a protocol which allows you to programmatically interact with certain website APIs, in an authenticated manner, without having to provide your website password.  Essentially, it allows you to generate an authorization token which you can use instead, and it allows you to manage and share these tokens with applications, so that you can revoke them if you want, or decide how and which applications to trust to act on your behalf.

A good example of a site that uses OAuth is Launchpad, but many other sites also support OAuth, such as Twitter and Facebook.

There are actually two versions of OAuth out there.  OAuth version 1 is definitely the more prevalent, since it has been around for years, is relatively simple (at least on the client side), and enshrined in RFC 5849.  There are tons of libraries available that support OAuth v1, in a multitude of languages, with Python being no exception.

OAuth v2 is much less common, since it is currently only a draft specification, and has had its share of design-by-committee controversy.  Still, some sites such as Facebook do require OAuth v2.

One of the very earliest Python libraries to support OAuth v1, on both the client and server side, was python-oauth (I'll use the Debian package names in this post), and on the Ubuntu desktop, you'll find lots of scripts and libraries that use python-oauth.  There are major problems with this library though, and I highly recommend not using it.  The biggest problems are that the code is abandoned by its upstream maintainer (it hasn't been updated on PyPI since 2009), and it is not Python 3 compatible.  Because the OAuth v2 draft came after this library was abandoned, it provides no support for the successor specification.

For this reason, one of the blueprints I sponsored was specifically to survey the alternatives available for Python programmers, and make a decision about which one we would officially endorse for Ubuntu.  By "official endorsement" I mean promote the library to other Python programmers (hence this post!) and to port all of our desktop scripts from python-oauth to the agreed upon library.

After some discussion, the attendees of the UDS session (both in-person and remote) unanimously chose python-oauthlib as our preferred library.

python-oauthlib has a lot going for it.  It's Python 3 compatible, has an active upstream maintainer, supports RFC 5849 for v1, and closely follows the draft for v2.  It's a well-tested, solid library, and it is available in Ubuntu for both Python 2 and Python 3.  Probably the only negative is that the library does not provide any support for the server side.  This is not a major problem for our immediate plans, since there aren't any server applications on the Ubuntu desktop requiring OAuth.  Eventually, yes, we'll need server side support, but we can punt on that recommendation for now.

Another cool thing about python-oauthlib is that it has been adopted by the python-requests library, meaning, if you want to use a modern replacement for the urllib2/httplib2 circus which supports OAuth out of the box, you can just use python-requests, provide the appropriate parameters, and you get request signing for free.
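For example, signing a GET request this way comes down to just a few lines (a sketch; the URL and credentials are placeholders, and depending on your version of requests the OAuth1 class lives either in requests.auth or in the separate requests-oauthlib package):

import requests
from requests_oauthlib import OAuth1

# all four values must be text (unicode) strings
auth = OAuth1(u'client_key', u'client_secret',
              u'resource_owner_key', u'resource_owner_secret')
response = requests.get(u'https://api.example.com/photos', auth=auth)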

So, as you'll see from the blueprint, there are several bugs linked to packages which need porting to python-oauthlib for Ubuntu 13.04, and I am actively working on them, though contributions, as always, are welcome!  I thought I'd include a little bit of code to show you how you might port from python-oauth to python-oauthlib.  We'll stick with OAuth v1 in this discussion.

The first thing to recognize is that python-oauth uses different, older terminology that predates the RFC.  Thus, you'll see references to a token key and token secret, as well as a consumer key and consumer secret.  In the RFC, and in python-oauthlib, these terms are client key, client secret, resource owner key, and resource owner secret respectively.  After you get over that hump, the rest pretty much falls into place.  As an example, here is a code snippet from the piston-mini-client library which used the old python-oauth library:

class OAuthAuthorizer(object):
    """Authenticate to OAuth protected APIs."""
    def __init__(self, token_key, token_secret, consumer_key, consumer_secret,
                 oauth_realm="OAuth"):
        """Initialize a ``OAuthAuthorizer``.

        ``token_key``, ``token_secret``, ``consumer_key`` and
        ``consumer_secret`` are required for signing OAuth requests.  The
        ``oauth_realm`` to use is optional.
        """
        self.token_key = token_key
        self.token_secret = token_secret
        self.consumer_key = consumer_key
        self.consumer_secret = consumer_secret
        self.oauth_realm = oauth_realm

    def sign_request(self, url, method, body, headers):
        """Sign a request with OAuth credentials."""
        # Import oauth here so that you don't need it if you're not going
        # to use it.  Plan B: move this out into a separate oauth module.
        from oauth.oauth import (OAuthRequest, OAuthConsumer, OAuthToken,
                                 OAuthSignatureMethod_PLAINTEXT)
        consumer = OAuthConsumer(self.consumer_key, self.consumer_secret)
        token = OAuthToken(self.token_key, self.token_secret)
        oauth_request = OAuthRequest.from_consumer_and_token(
            consumer, token, http_url=url)
        oauth_request.sign_request(OAuthSignatureMethod_PLAINTEXT(),
                                   consumer, token)
        headers.update(oauth_request.to_header(self.oauth_realm))


The constructor is pretty simple, and it uses the old OAuth terminology.  The key thing to notice is the way the old API required you to create a consumer, a token, and then a request object, then ask the request object to sign the request.  On top of all the other disadvantages, this isn't a very convenient API.  Let's look at the snippet after conversion to python-oauthlib.

class OAuthAuthorizer(object):
    """Authenticate to OAuth protected APIs."""
    def __init__(self, token_key, token_secret, consumer_key, consumer_secret,
                 oauth_realm="OAuth"):
        """Initialize a ``OAuthAuthorizer``.

        ``token_key``, ``token_secret``, ``consumer_key`` and
        ``consumer_secret`` are required for signing OAuth requests.  The
        ``oauth_realm`` to use is optional.
        """
        # 2012-11-19 BAW: python-oauthlib requires unicodes for its tokens and
        # secrets.  Assume utf-8 values.
        # https://github.com/idan/oauthlib/issues/68
        self.token_key = _unicodeify(token_key)
        self.token_secret = _unicodeify(token_secret)
        self.consumer_key = _unicodeify(consumer_key)
        self.consumer_secret = _unicodeify(consumer_secret)
        self.oauth_realm = oauth_realm

    def sign_request(self, url, method, body, headers):
        """Sign a request with OAuth credentials."""
        # 2012-11-19 BAW: In order to preserve API backward compatibility,
        # convert empty string body to None.  The old python-oauth library
        # would treat the empty string as "no body", but python-oauthlib
        # requires None.
        if not body:
            body = None
        # Import oauthlib here so that you don't need it if you're not going
        # to use it.  Plan B: move this out into a separate oauth module.
        from oauthlib.oauth1 import Client, SIGNATURE_PLAINTEXT
        oauth_client = Client(self.consumer_key, self.consumer_secret,
                              self.token_key, self.token_secret,
                              signature_method=SIGNATURE_PLAINTEXT,
                              realm=self.oauth_realm)
        uri, signed_headers, body = oauth_client.sign(
            url, method, body, headers)
        headers.update(signed_headers)


See how much nicer this is?  You need only create a client object, essentially using all the same bits of information.  Then you ask the client to sign the request, and update the request headers with the signature.  Much easier.

Two important things to note.  If you are doing an HTTP GET, there is no request body, and thus no request content which needs to contribute to the signature.  In python-oauth, you could specify an empty body by using either None or the empty string.  piston-mini-client uses the latter, and this is embodied in its public API.  python-oauthlib however, treats the empty string as a body being present, so it would require the Content-Type header to be set even for an HTTP GET which has no content (i.e. no body).  This is why the replacement code checks for an empty string being passed in (actually, any false-ish value), and coerces that to None.

The second issue is that python-oauthlib requires the keys and secrets to be Unicode objects; they cannot be bytes objects.  In code ported straight from Python 2 however, these values are usually 8-bit strings, and so become bytes objects in Python 3.  python-oauthlib will raise a ValueError during signing if any of these are bytes objects.  Thus the use of the _unicodeify() function to decode these values to unicodes.

def _unicodeify(s):
    if isinstance(s, bytes):
        return s.decode('utf-8')
    return s


The above works in both Python 2 and Python 3.  Of course, we don't know for sure that the bytes values are UTF-8, but it's the only sane encoding to expect, and if a client of piston-mini-client were to be so insane as to use an incompatible encoding (US-ASCII is fine because it's compatible with UTF-8), it would be up to the client to just pass in unicodes in the first place.  At the time of this writing, this is under active discussion with upstream, but for now, it's not too difficult to work around.

Anyway, I hope this helps, and I encourage you to help increase the popularity of python-oauthlib on the Cheeseshop, so that we can one day finally kill off the long defunct python-oauth library.

Read more