Canonical Voices

What Steve Langasek talks about

Posts tagged with 'debian'

A couple of weeks ago, Gunnar Wolf mentioned on IRC that his CuBox-i4 had arrived. This resulted in various jealous noises from me; having heard about this device making the rounds at the Kernel Summit, I ordered one for myself back in December, as part of the long-delayed HDification of our home entertainment system and coinciding with the purchase of a new Samsung SmartTV. We've been running an Intel Coppermine Celeron for a decade as a MythTV frontend and encoder (hardware-assisted with a PVR-250), which is fine for SD video, but really doesn't cut it for anything HD. So after finally getting a TV that would showcase HD in all its glory, I figured it was time to upgrade from an S-Video-out, barely-limping-along tower machine to something more modern with HDMI out, eSATA, and hardware video decoding, whose biggest problem is that it's so small it threatens to get lost in the wiring!

Since placing the order, I've been bemused to find that the SmartTV is so smart that it has had a dramatic impact on how we consume media; between that and our decision to not be a boiled frog in the face of DISH Network's annual price increase, the MythTV frontend has become a much less important part of our entertainment center, well before I ever got a chance to lay hands on the intended replacement hardware. But that's a topic for another day.

Anyway, the CuBox-i4 finally arrived in the mail on Friday, so of course I immediately needed to start hacking on it! Like Gunnar, who wrote last week about his own experience getting a "proper" Debian install on the box, I'm not content with running a binary distribution image prepared by some third party; I expect my hardware toys to run official distro packages assembled using official distro tools and, if at all possible, distributed on official distro images for a minimum of hassle.

Whereas Gunnar was willing to settle for using third-party binaries for the bootloader and kernel, however, I'm not inclined to do any such thing. And between my stint at Linaro a few years ago and the recent work on Ubuntu for phones, I do have a little knowledge of Linux on ARM (probably just enough to be dangerous), so I set to work trying to get the CuBox-i4 bootable with stock Debian unstable.

Being such a cutting-edge piece of hardware, that does pose some challenges. Support for the i.MX6 chip is in the process of being upstreamed to U-Boot, but the support for the CuBox-i devices isn't there yet, nor is the support for SPL on i.MX6 (which allows booting the variants of the CuBox-i with a single U-Boot build, instead of requiring a different bootloader build for each flavor). The CuBox-i U-Boot that SolidRun makes available (with source at github) is based on U-Boot 2013.10-rc4, so more than a full release behind Debian unstable, and the patches there don't apply to U-Boot 2014.01 without a bit of effort.

But if it's worth doing, it's worth doing right, so I've taken the time to rebase the CuBox-i patches on top of 2014.01, publishing the results of the rebase to my own github repository and submitting a bug to the Debian U-Boot maintainers requesting its inclusion.
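The mechanics of the rebase are nothing exotic; the workflow looks roughly like this, with the SolidRun remote URL and branch names shown as placeholders rather than the real repository names:

git clone git://git.denx.de/u-boot.git && cd u-boot
git remote add solidrun <SolidRun u-boot-imx6 repository>
git fetch solidrun
# start from the release Debian ships, then replay the vendor patches on top
git checkout -b cubox-i-2014.01 v2014.01
git cherry-pick v2013.10-rc4..solidrun/<vendor branch>   # resolving conflicts as they come up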

The next step is to get a Debian kernel that not only works, but fully supports the hardware out of the box (a 3.13 generic ARM kernel will boot on the machine, but little things like Ethernet and HDMI don't work yet). I've created a page in the Debian wiki for tracking the status of this work.


Congrats to the Debian release team on the new release of Debian 7.0 (wheezy)!

Leading up to the release, a meme making the rounds on Planet Debian has been to play a #newinwheezy game, calling out some of the many new packages in 7.0 that may be interesting to users. While upstart as a package is nothing new in wheezy, the jump to upstart 1.6.1 from 0.6.6 is quite a substantial change. It does bring with it a new package, mountall, which by itself isn't terribly interesting because it just provides an upstart-ish replacement for some core scripts from the initscripts package (essentially, /etc/rcS.d/*mount*). Where things get interesting (and, typically, controversial) is the way in which mountall leverages plymouth to achieve this.

What is plymouth?

There is a great deal of misunderstanding around plymouth, a fact I was reminded of again while working to get a modern version of upstart into wheezy. When Ubuntu first started requiring plymouth as an essential component of the boot infrastructure, there was a lot of outrage from users, particularly from Ubuntu Server users, who believed this was an attempt to force pretty splash screen graphics down their throats. Nothing could be further from the truth.

Plymouth provides a splash screen, but that's not what plymouth is. What plymouth is, is a boot-time I/O multiplexer. And why, you ask, would upstart - or mountall, whose job is just to get the filesystem mounted at boot - need a boot-time I/O multiplexer?

Why use plymouth?

The simple answer is that, like everything else in a truly event-driven boot system, filesystem mounting is handled in parallel - with no defined order. If a filesystem is missing or fails an fsck, mountall may need to interact with the user to decide how to handle it. And if there's more than one missing or broken filesystem, and these are all being found in parallel, there needs to be a way to associate each answer from the user to the corresponding question from mountall, to avoid crossed signals... and lost data.

One possible way to handle this would be for mountall to serialize the fsck's / mounts. But this is a pretty unsatisfactory answer; all other things (that is, boot reliability) being equal, admins would prefer their systems to boot as fast as possible, so that they can get back to being useful to users. So we reject the idea of solving the problem of serializing prompts by making mountall serialize all its filesystem checks.

Another option would be to have mountall prompt directly on the console, doing its own serialization of the prompts (even though successful mounts / fscks continue to be run in parallel). This, too, is not desirable in the general case, both because some users actually would like to have pretty splash screens at boot time, which would be incompatible with direct console prompting; and because mountall is not the only piece of software that needs to prompt at boot time (see also: cryptsetup).

Plymouth: not just a pretty face

Enter plymouth, which provides the framework for serializing requests to the user while booting. It can provide a graphical boot splash, yes; ironically, even its own homepage suggests that this is its purpose. But it can also provide a text-only console interface, which is what you get automatically when booting without a splash boot argument, or even handle I/O over a serial console.
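To make that concrete, here is a rough sketch of how a boot-time script can route a question through plymouth rather than writing straight to the console. Mountall itself talks to plymouth over its client library rather than the command-line tool, and the exact client options vary between plymouth versions, so treat this as illustrative only:

# If the plymouth daemon is running, let it serialize our prompt with
# everyone else's; otherwise fall back to prompting on the console.
if plymouth --ping; then
    answer=$(plymouth ask-question --prompt="/home failed to mount. Skip mounting or retry?")
else
    read -p "/home failed to mount. Skip mounting or retry? " answer
fi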

That flexibility is why, contrary to the initial intuitions of the s390 porters upon seeing this package, plymouth is available for all of Debian's Linux architectures in wheezy, s390 and s390x included, providing a consistent architecture for boot-time I/O for systems that need it - which is any machine using a modern boot system, such as upstart or systemd.

Room for improvement

Now, having a coherent architecture for your boot I/O is one thing; having a bug-free splash screen is another. The experience of plymouth in Ubuntu has certainly not been bug-free, with plymouth making significant demands of the kernel video layer. Recently, the binary video driver packages in Ubuntu have started to blacklist the framebuffer kernel driver entirely due to stability concerns, making plymouth splash screens a non-starter for users of these drivers and regressing the boot experience.

One solution for this would be to have plymouth offload the video handling complexity to something more reliable and better tested. Plymouth does already have an X backend, but we don't use that in Ubuntu because even if we do have an X server, it normally starts much later than when we would want to display the splash screen. With Mir on the horizon for Ubuntu, however, and its clean separation between system and session compositors, it's possible that using a Mir backend - that can continue running even after the greeter has started, unlike the current situation where plymouth has to cede the console to the display manager when it starts - will become an appealing option.

This, too, is not without its downsides. Needing to load plymouth when using crypted root filesystems already makes for a bloated initramfs; adding a system compositor to the initramfs won't make it any better, and introduces further questions about how to hand off between initramfs and root fs. Keeping your system compositor running from the initramfs post-boot isn't really ideal, particularly for low-memory systems; whereas killing the system compositor and restarting it will make it harder to provide a flicker-free experience. But for all that, it does have its architectural appeal, as it lets us use plymouth as long as we need to after boot. As the concept of static runlevels becomes increasingly obsolete in the face of dynamic systems, we need to design for the world where the distinction between "booting" and "booted" doesn't mean what it once did.


Upstart in Debian

Good news, everyone!

So as of last Sunday, this works on all Linux archs in Debian unstable and gives you a modern version of upstart:

echo 'Yes, do as I say!' | apt-get -o DPkg::options=--force-remove-essential -y --force-yes install upstart

Thanks to the ifupdown, sysvinit, and udev maintainers for their cooperation in getting upstart support in place; to the Debian release team for accommodating the late changes needed for upstart to be supported in wheezy; and to Scott for his past maintenance of upstart in Debian.

Benchmarking

One of the consequences is that it's now possible to do meaningful head-to-head comparisons of boot speed between sysvinit (with startpar), upstart, and systemd. At one time or another people have tested systemd vs. sysvinit when using bash as /bin/sh, and upstart vs. sysvinit, and systemd vs. sysvinit+startpar, and there are plenty of bootcharts floating around showing results of one init system or another on one distro or another, but I'm not aware of anyone having done a real, fair comparison of the three solutions, changing nothing but the init system.

I've done some initial comparisons in a barebones sid VM, and the results are definitely interesting. Sysvinit with startpar (the default in Debian) can boot a stock sid install, with no added services, in somewhere between 3.37 and 3.42 seconds (three runs). That's not a whole lot, but on the other hand this is a system with a single filesystem and no interesting services yet. Is this really as fast as we can boot?

No, even this minimal system can boot faster. Testing with upstart shows that upstart can do the same job in between 3.03 and 3.19 seconds (n=3, mean=3.09). This confirms what we'd already seen in Ubuntu, that it makes a difference to boot speed to have filesystem mounting handled by an integrated process that understands the whole system instead of as a group of serialized shell scripts.

What about systemd? The same test gives a boot time between 2.32 and 2.85 seconds (n=4, mean=2.48). Interesting; what would make systemd faster than upstart in sid? Well, a quick look at the system shows one possible contributing factor: the rsyslog package in Debian has a systemd unit file, but not an upstart job file. Dropping in the /etc/init/rsyslog.conf from Ubuntu has a noticeable impact, and brings the upstart boot time down nearer to that of systemd (2.78-3.03s, n=5, mean=2.92). Besides telling me that it's time to start spamming Debian maintainers with wishlist bugs asking them to include upstart jobs in their packages, this suggests that the remaining difference in boot time may be due to the outstanding init scripts in rcS.d that are made built-in no-ops by systemd but not (yet) by upstart in Debian (e.g., hwclock, hostname, udev-mtab). (In Ubuntu, /etc/rcS.d has long since been emptied out in favor of upstart jobs in the common case, since the time it takes to get to runlevel=2 is definitely a major issue for boot speed and boot parallelization.)
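For reference, the Ubuntu job in question is only a handful of lines; an /etc/init/rsyslog.conf along these lines is all it takes (reconstructed from memory, so treat it as a sketch rather than a verbatim copy of the Ubuntu file):

# rsyslog - system logging daemon
description     "system logging daemon"

start on filesystem
stop on runlevel [06]

expect fork
respawn

exec rsyslogd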

These numbers also give the lie to the claim that's been made in various places that spawning shells is a major bottleneck for upstart vs. systemd. More study is certainly needed to confirm this, but at least this naive first test suggests that in spite of the purported benefits of hard-coding boot-time policies in C code, upstart with its default degree of runtime configurability is at least in the ballpark of systemd. Indeed, when OpenSuSE switched from upstart to systemd, it seems that something else in the stack managed to nullify any benefit from improving the boot-time performance of apparmor. Contrary to what some would have you believe, systemd is not some kind of silver bullet for boot speed. Upstart, with its boot-time flexibility and its long history of real-world testing in Ubuntu, is a formidable competitor to systemd in the boot speed department - and a solid solution to the many longstanding boot-time ordering bugs in Debian, which still affect users of sysvinit.

I've published the bootcharts for the above tests here. Between the fact that Debian's bootchart package logs by default to /var/log/bootchart.tgz (thus overwriting on every boot) and the fact that these tests are in a VM, I haven't bothered to include the raw data, just the bootcharts themselves. The interested reader can probably generate more interesting boot charts of their own anyway - in particular, it will be interesting to see how the different init systems perform with more complicated filesystem layouts, or when booting a less trivial set of services.

Other musings

The boot charts have been created with the bootchart package rather than with bootchart2. For one thing, it turns out that bootchart2 includes systemd units, not init scripts; so when replacing bootchart with bootchart2, the non-matching init script is left behind and systemd in particular gets terribly confused. This is now reported as Bug #694403.

In an amusing twist, while I was experimenting with bootchart2, I also noticed that having systemd installed would slow down booting with other init systems, because systemd installs udev rules which take a noticeable amount of time to run a helper command at boot even though the helper should be a no-op. So if you're doing boot speed testing of other init systems, be sure you don't have systemd on the system at the time!


The 12.10 release is the first version of Ubuntu that supports Secure Boot out of the box. In what is largely an accident of release timing, from what I can tell (and please correct me if I'm wrong), this actually makes Ubuntu 12.10 the first general release of any OS to support Secure Boot. (Windows 8 of course is also now available; and I'm sure Matthew Garrett, who has been a welcome collaborator throughout this process, has everything in good order for the upcoming Fedora 18 release.)

That's certainly something of a bittersweet achievement. I'm proud of the work we've done to ensure Ubuntu will continue to work out of the box on the consumer hardware of the future; in spite of the predictable accusations on the blogowebs that we've sold out, I sleep well at night knowing that this was the pragmatic decision to make, maximizing users' freedom to use their hardware. All the same, I worry about what the landscape is going to look like in a few years' time. The Ubuntu first-stage EFI bootloader is signed by Microsoft, but the key that is used for signing is one that's recommended by Microsoft, not one that's required by the Windows 8 certification requirements. Will all hardware include this key in practice? The Windows 8 requirements also say that every machine must allow the user to disable Secure Boot. Will manufacturers get this right, and will users be able to make use of it in the event the manufacturer didn't include the Microsoft-recommended key? Only time will tell. But I do think the Linux community is going to have to remain engaged on this for some time to come, and possibly hold OEMs' feet to the fire for shipping hardware that will only work with Windows 8.

But that's for the future. For now, we have a technical solution in 12.10 that solves the parts of the problem that we can solve.

  • The first-stage UEFI bootloader on 12.10 install media is a build of Matthew Garrett's shim code, embedding a Canonical UEFI CA and signed by Microsoft. This is pretty much the same as the Fedora solution, just with a different key and a binary built on the Ubuntu build infrastructure rather than Fedora's. If Secure Boot is not enabled, the second-stage bootloader is booted without applying any checks. If Secure Boot is enabled, the signature is checked on the second stage before passing control, as expected.
  • The second-stage bootloader is GRUB 2. Readers may remember that there were earlier concerns about GPLv3 in the mix for some Ubuntu use cases, but these have been ironed out now in discussion with the FSF. This enables us to continue to provide a consistent boot experience across the different ways Ubuntu may be booted.
  • We are providing signed kernel images in the Ubuntu archive and using them by default. When a signed kernel is present, this allows the bootloader to pass control to the kernel without first calling ExitBootServices(), letting the kernel apply any EFI quirks it might need to (such as for bug #1065263). A quick way to check for these signatures is sketched just after this list.
  • Unlike Fedora, however, we are not enforcing kernel signature checking. If the kernel is unsigned and the system has Secure Boot turned on, the Ubuntu grub will fall back to calling ExitBootServices() first, and then booting the kernel. This way, users still have the freedom to boot their own kernels on Secure Boot-enabled hardware, possibly with a slightly degraded boot experience, while the firmware remains protected from untrusted code.
  • We are not enforcing module signing. This allows external modules, such as dkms packages, to continue to work with SB turned on. While there's potential value in being able to verify the source of kernel modules at load time, that's not something implemented for this round, and we would want such enforcement to be optional regardless.
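For anyone who wants to poke at the pieces themselves, the sbsigntool package provides a convenient way to see whether a given bootloader or kernel image carries an Authenticode signature. This is just a sketch: the paths are the usual Ubuntu locations, and the .efi.signed kernel only exists if the signed kernel package is installed.

sudo apt-get install sbsigntool
# list any signatures embedded in the second-stage bootloader and the signed kernel
sbverify --list /boot/efi/EFI/ubuntu/grubx64.efi
sbverify --list /boot/vmlinuz-$(uname -r).efi.signed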

This first release gives us preliminary support for booting on Secure Boot, but there's more work to be done to provide a full solution that's sustainable over the long term. We'll be discussing some of that this week at UDS in Copenhagen.

  • The 12.10 solution does not include support for SuSE's MOK (machine owner key) approach. Supporting this is important to us, as this helps to ensure users have freedom and control over their machines instead of being forced to choose between disabling Secure Boot and only running vendor-provided kernels. Among other things, we want to make sure people have the freedom to continue doing kernel and bootloader development, without having to muddle their way through impossible firmware menus!
  • We will be enhancing the tooling around db/dbx updates, to ensure that Ubuntu users of Secure Boot receive the same protection against malware that Windows users do. This also addresses the need for publishing revocations in the event of bugs in our grub or kernel implementations that would compromise the security of Secure Boot.
  • Netboot support is currently nonexistent. The major blocker seems to be that unlike in BIOS booting we can't rely on the firmware's PXE stack, and in practice GRUB2's tftp support doesn't seem to be very robust yet. Once we have a working GRUB2 image that successfully reads files over tftp, we can evaluate signing it for Secure Boot as well.

And as part of our commitment to enabling new hardware on the current LTS release, we will be backporting this work for inclusion in 12.04.2.

It remains to be decided how Debian will approach the Secure Boot question. At DebConf 12, many people seemed to consider it a foregone conclusion that Debian would never agree to include binaries in main signed by third-party keys. I don't think that should be a given; I think allowing third-party signatures in main for hardware compatibility is consistent with Debian's principles, and refusing to make Debian compatible with this hardware out of the box does nothing to advance user freedom. I hope to see frank discussion post-wheezy about keeping Debian relevant on consumer hardware of the future.


This weekend, we held a combined Debian Bug Squashing Party and Ubuntu Local Jam in Portland, OR. A big thank you to PuppetLabs for hosting!

Thanks to a brilliant insight from Kees Cook, we were able to give everyone access to their own pre-configured build environment as soon as they walked in the door by deploying schroot/sbuild instances in "the cloud" (in this case, Amazon EC2). Small blips with the mirrors notwithstanding, this worked out pretty well, and let people get their hands dirty right away instead of spending a lot of time up front doing the boring work of setting up a build environment. This was a big win for people who had never done a package build before, and I highly recommend it for future BSPs. You can read about the build environment setup in the Debian wiki, and details on setting up your own BSP cloud in Kees's blog.

(And the cloud instances were running Ubuntu 11.10 guests, with Debian unstable chroots - a perfect pairing for our joint Debian/Ubuntu event!)
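If you want to reproduce the build environment locally instead of in EC2, the core of it is just sbuild and schroot; something along these lines gets you a sid build chroot (the mirror and chroot path are just examples, and the .dsc name is a placeholder):

sudo apt-get install sbuild
sudo sbuild-createchroot unstable /srv/chroot/unstable-amd64-sbuild http://ftp.debian.org/debian
sudo sbuild-adduser $USER
# then build any source package against it from its .dsc:
sbuild -d unstable foo_1.0-1.dsc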

So how did this curious foray into a combined Ubuntu/Debian event go? Not too shabby:

  • Roughly 25 people participated in the event - a pretty good turnout considering the short notice we gave. Thanks to everyone who turned up!
  • Multiarch patches were submitted for 14 library packages by 9 distinct contributors
  • Four of these people submitted their first patch to Debian!
  • Three more contributors worked on patches that were not submitted to Debian by the end of the event, but we will stalk them and see to it that their patches make it in ;)
  • 8 Ubuntu Stable Release Updates were looked at for verification of fixes
  • 7 of these fixes were successfully verified (one bug was not reproducible)
  • 6 of those packages have already been moved to the -updates pocket, where all of Ubuntu's users can now benefit from them

When all was said and done, we didn't get a chance to tackle any wheezy release critical bugs like we'd hoped. That's ok, that leaves us something to do for our next event, which will be bigger and even better than this one. Maybe even big enough to rival one of those crazy, all-weekend BSPs that they have in Germany...


Raphaël Hertzog recently announced a new dpkg-buildflags interface in dpkg that at long last gives the distribution, the package maintainers, and users the control they want over the build flags used when building packages.

The announcement mail gives all the gory details about how to invoke dpkg-buildflags in your build to be compliant; but the nice thing is, if you're using dh(1) with debian/compat=9, debhelper does it for you automatically so long as you're using a build system that it knows how to pass compiler flags to.

So for the first time, /usr/share/doc/debhelper/examples/rules.tiny can now be used as-is to provide a policy-compliant package by default (setting -g -O2, or -g -O0 when noopt is requested via DEB_BUILD_OPTIONS, regardless of how debian/rules is invoked).

Of course, none of my packages actually work that way; among other things I have a habit of liberally sprinkling DEB_CFLAGS_MAINT_APPEND := -Wall in my rules, and sometimes DEB_LDFLAGS_MAINT_APPEND := -Wl,-z,defs and DEB_CFLAGS_MAINT_APPEND := $(shell getconf LFS_CFLAGS) as well. And my upstreams' build systems rarely work 100% out of the box with dh_auto_* without one override or another somewhere. So in practice, the shortest debian/rules file in any of my packages seems to be 13 lines currently.

But that's 13 lines of almost 100% signal, unlike the bad old days of cut'n'pasted dh_* command lists.
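For illustration, such a rules file looks something like this. This is a sketch, not any particular package of mine: the override and the configure flag are placeholders, debian/compat is assumed to contain 9, and the recipe lines must be tab-indented:

#!/usr/bin/make -f

export DEB_CFLAGS_MAINT_APPEND := -Wall $(shell getconf LFS_CFLAGS)
export DEB_LDFLAGS_MAINT_APPEND := -Wl,-z,defs

%:
	dh $@

# placeholder override showing where per-package tweaks go
override_dh_auto_configure:
	dh_auto_configure -- --enable-foo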

The biggest benefit, though, isn't in making it shorter to write a rules file with the old, standard build options. The biggest benefit is that dpkg-buildflags now also outputs build-hardening compiler and linker flags by default on Debian. Specifically, using the new interface lets you pick up all of these hardening flags for free:

-fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro

It also lets you get -fPIE and -Wl,-z,now by adding this one line to your debian/rules (assuming you're using dh(1) and compat 9):

export DEB_BUILD_MAINT_OPTIONS := hardening=+pie,+bindnow
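A quick way to see the effect without doing a full build is to query dpkg-buildflags with the same maintainer options set (a sketch run from the shell; in a real package the variable is exported from debian/rules as above):

DEB_BUILD_MAINT_OPTIONS="hardening=+pie,+bindnow" dpkg-buildflags --get CFLAGS
DEB_BUILD_MAINT_OPTIONS="hardening=+pie,+bindnow" dpkg-buildflags --get LDFLAGS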

Converting all my packages to use dh(1) has always been a long-term goal, but some packages are easier to convert than others. This was the tipping point for me, though. Even though debhelper compat level 9 isn't yet frozen, meaning there might still be other behavior changes to it that will make more work for me between now and release, over the past couple of weekends I've been systematically converting all my packages to use it with dh. In particular, pam and samba have been rebuilt to use the default hardening flags, and openldap uses these flags plus PIE support. (Samba already builds with PIE by default courtesy of upstream.)

You can't really make samba and openldap out on the graph, but they're there (with their rules files reduced by 50% or more).

I cannot overstate the significance of proactive hardening. There have been a number of vulnerabilities over the past few years that have been thwarted on Ubuntu because Ubuntu is using -fstack-protector by default. Debian has a great security team that responds quickly to these issues as soon as they're revealed, but we don't always get to find out about them before they're already being exploited in the wild. In this respect, Debian has lagged behind other distros.

With dpkg-buildflags, we now have the tools to correct this. It's just a matter of getting packages to use the new interfaces. If you're a maintainer of a security sensitive package (such as a network-facing daemon or a setuid application), please enable dpkg-buildflags in your package for wheezy! (Preferably with PIE as well.) And if you don't maintain security sensitive packages, you can still help out with the hardening release goal.
