Canonical Voices

Timo Jyrinki

Converting an existing installation to LUKS using luksipc

This is a burst of notes that I wrote in an e-mail in June when asked about it. I won't have any better steps than this, since by now I remember even less than I did back then, but I figured it's better to have them out than not.

So... if you want to use the LUKS In-Place Conversion Tool, the notes below on converting a shipped-with-Ubuntu Dell XPS 13 Developer Edition (2015 Intel Broadwell model) may help you. There were a couple of small lessons to be learned...

The page http://www.johannes-bauer.com/linux/luksipc/ itself is good and error-free, although it funnily uses reiserfs as an example. It was only a bit unclear to me why I saved the initial_keyfile.bin, since it was then removed in the next step (I guess it's for the case where you want to hide a recovery keyfile somewhere in case you forget the passphrase).

For using the tool I booted from a 14.04.2 LTS USB live image and worked from there, including downloading and compiling luksipc in the live session. The exact reason for resizing before running luksipc was a bit unclear to me at first, but I did indeed resize the main rootfs partition and left unallocated space in the partition table.

Then I finally ran ./luksipc -d /dev/sda4 and so on.
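
In rough outline, that stage would look something like this; the resize target is a made-up example, and luksipc is built per its homepage instructions:

# from the live session, with the target filesystem unmounted
sudo e2fsck -f /dev/sda4        # luksipc wants a clean, unmounted filesystem
sudo resize2fs /dev/sda4 215G   # example target: leave unallocated space for the LUKS header
make                            # build luksipc in its unpacked source tree
sudo ./luksipc -d /dev/sda4     # convert the partition in place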

I realized I wanted /boot on an unencrypted partition, to be able to load the kernel + initrd from GRUB before entering LUKS unlocking. I couldn't resize the LUKS partition anymore since it was now encrypted... So I resized what I think was the small, empty DIAGS partition (maybe used for some system diagnostics, I don't know), or possibly the next one, which is the actual recovery partition from which one can reinstall the pre-installed Ubuntu. Naturally I had some problems, because the vfatresize tool didn't do what I wanted it to do, and gparted simply crashed when I first tried to use it for the same job. Anyway, once done with freeing up some space somewhere, I used the remaining 350 MB for /boot and copied the rootfs's /boot contents there.

After adding the passphrase in LUKS I had everything encrypted and decryptable, but obviously I could only access it from a live session via manual cryptsetup luksOpen + mount /dev/mapper/myroot commands. I needed to configure GRUB, and I needed to do it with grub-efi-amd64, which was a bit unfamiliar to me. There's also grub-efi-amd64-signed, which I have installed now, but I'm not sure if it was required for the configuration; Secure Boot is not enabled by default in the BIOS, so maybe it isn't needed.
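
From the live session, that manual step looks like this (/mnt as the mount point is my choice; the device and mapping name are from these notes):

sudo cryptsetup luksOpen /dev/sda4 myroot   # prompts for the LUKS passphrase
sudo mount /dev/mapper/myroot /mnt          # the decrypted rootfs is now at /mnt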

I did the GRUB installation, I think inside a rootfs chroot where I also mounted /dev/sda6 as /boot (inside the chroot); that is, I bind-mounted dev and sys under the chroot (from outside it) and did mount -t proc proc proc as well. I did a lot of trial and error, so I surely also tried from outside the chroot, in the live session, using some parameters to point at the mounted rootfs's directories...
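
Reconstructed from memory, the chroot setup would have been something along these lines (partition numbers as in the tables below; the exact grub-install invocation is my best guess for an EFI system, not something preserved in the notes):

sudo mount /dev/mapper/myroot /mnt
sudo mount /dev/sda6 /mnt/boot        # the new unencrypted /boot
sudo mount /dev/sda1 /mnt/boot/efi    # the EFI system partition
sudo mount -o bind /dev /mnt/dev
sudo mount -o bind /sys /mnt/sys
sudo chroot /mnt
mount -t proc proc /proc              # inside the chroot
grub-install /dev/sda                 # with grub-efi-amd64 installed
update-grub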

I definitely needed to install cryptsetup etc. inside the encrypted rootfs with apt, and I remember spending some time debugging whether they ended up in the initrd correctly after I ran mkinitramfs/update-initramfs inside the chroot.
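
A quick way to sanity-check that step (lsinitramfs ships with initramfs-tools; replace <version> with the installed kernel's version):

apt-get install cryptsetup                              # inside the chroot
update-initramfs -u                                     # rebuild the initrd on the new /boot
lsinitramfs /boot/initrd.img-<version> | grep -i crypt  # verify cryptsetup made it in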

At the end I had GRUB asking for the password correctly at bootup. Obviously I had edited the rootfs's /etc/fstab to include the new /boot partition; I changed the / entry to mount /dev/mapper/myroot as an ext4 filesystem with errors=remount-ro, kept /boot/efi coming from /dev/sda1, and so on. I had also added "myroot /dev/sda4 none luks" to /etc/crypttab. I also seem to have GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda4:myroot root=/dev/mapper/myroot" in /etc/default/grub.
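
Collected in one place, the relevant configuration would look roughly like this (the mount options for /boot and /boot/efi and the dump/pass fields are my reconstruction rather than something preserved in the notes):

# /etc/fstab
/dev/mapper/myroot  /          ext4  errors=remount-ro  0  1
/dev/sda6           /boot      ext4  defaults           0  2
/dev/sda1           /boot/efi  vfat  defaults           0  1

# /etc/crypttab
myroot /dev/sda4 none luks

# /etc/default/grub (run update-grub afterwards)
GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda4:myroot root=/dev/mapper/myroot"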

The only thing I did save from the live session was the original partition table, in case I want to revert.

So the original was:

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 500118192 sectors, 238.5 GiB
Logical sector size: 512 bytes
...
First usable sector is 34, last usable sector is 500118158
Partitions will be aligned on 2048-sector boundaries
Total free space is 6765 sectors (3.3 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
1            2048         1026047   500.0 MiB   EF00  EFI system partition
2         1026048         1107967   40.0 MiB    FFFF  Basic data partition
3         1107968         7399423   3.0 GiB     0700  Basic data partition
4         7399424       467013631   219.2 GiB   8300
5       467017728       500117503   15.8 GiB    8200

And I now have:

Number  Start (sector)    End (sector)  Size       Code  Name
1            2048         1026047   500.0 MiB   EF00  EFI system partition
2         1026048         1107967   40.0 MiB    FFFF  Basic data partition
3         1832960         7399423   2.7 GiB     0700  Basic data partition
4         7399424       467013631   219.2 GiB   8300
5       467017728       500117503   15.8 GiB    8200
6         1107968         1832959   354.0 MiB   8300

So it seems I did not edit DIAGS (which was also originally just 40 MB) but did something to the recovery partition while preserving its contents. It's a FAT partition, so maybe I was able to resize it after all.

The 16 GB partition is the default swap partition. I have not encrypted it, at least not yet; I never tend to run into swap anyway in my normal use with the 8 GB of RAM.

If you go this route, good luck! :D

Louis

kdump-tools enhancements to use smaller initrd.img

While testing the upcoming release of Ubuntu (15.10 Wily Werewolf), I ran into a bug that renders the kernel crash dump mechanism unusable by default:

LP: #1496317 : kexec fails with OOM killer with the current crashkernel=128 value

The root cause of this bug is that the initrd.img file used by kexec to reboot into a new kernel when the original one panics is getting bigger with kernel 4.2 on Ubuntu. Hence, it uses too much of the reserved crashkernel memory (default: 128 MB). This triggers the Out Of Memory (OOM) killer, and the kernel dump capture cannot complete.

One workaround for this issue is to increase the amount of reserved memory; 150 MB seems to be sufficient, but you may need an even higher value. While one solution to this problem could be to increase the default crashkernel= value, that only pushes the issue forward until we hit the new limit once again.
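
For example, the reservation can be raised by appending the crashkernel= parameter to the kernel command line (the surrounding option values here are illustrative; only the crashkernel part matters):

# in /etc/default/grub, append crashkernel=150M to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash crashkernel=150M"

Then regenerate the GRUB configuration and reboot for the new reservation to take effect:

sudo update-grub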

Reduce the size of initrd.img

update-initramfs has an option in its configuration file (/etc/initramfs-tools/initramfs.conf) that lets us modify which modules are included in the initrd.img file. Our current default is to add most of the modules:

# MODULES: [ most | netboot | dep | list ]
#
# most - Add most filesystem and all harddrive drivers.
#
# dep - Try and guess which modules to load.
#
# netboot - Add the base modules, network modules, but skip block devices.
#
# list - Only include modules from the 'additional modules' list
#

MODULES=most

By changing this configuration to MODULES=dep, we can significantly reduce the size of the initrd.img:

MODULES=most: initrd.img-4.2.0-16-generic = 30 MB
MODULES=dep:  initrd.img-4.2.0-16-generic = 12 MB

Identifying this led to a discussion with the Ubuntu Kernel team about using a custom crafted initrd.img for kdump. This would keep the file to a sensible size and avoid triggering the OOM killer.

Implementation

The current implementation of kdump-tools already provides a mechanism to specify which vmlinuz and initrd.img files to use when setting up kexec (from /etc/default/kdump-tools):

# ---------------------------------------------------------------
# Kdump Kernel:
# KDUMP_KERNEL - A full pathname to a kdump kernel.
# KDUMP_INITRD - A full pathname to the kdump initrd (if used).
# If these are not set, kdump-config will try to use the current kernel
# and initrd if it is relocatable. Otherwise, you will need to specify
# these manually.
#KDUMP_KERNEL=
#KDUMP_INITRD=

If we define those variables to point to a generic path that can be adapted to the running kernel version, we have a way to specify a smaller initrd.img for kdump.

Building a smaller initrd.img

Kernel package hooks already exist in /etc/kernel/postinst.d and /etc/kernel/postrm.d to create the initrd.img. Using those as templates, we created new hooks that create smaller images in /var/lib/kdump and clean them up when the kernel version they pertain to is removed.

In order to create that smaller initrd.img, the content of the /etc/initramfs-tools directory needs to be replicated in /var/lib/kdump. This is done each time the hook runs, to ensure the copy matches the original source; otherwise the two would diverge whenever the original directory is modified.

Each time a new kernel package is installed, the hook creates a kdump-specific initrd.img using MODULES=dep and stores it in /var/lib/kdump. When the kernel package is removed, the corresponding file is removed.
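
In essence, the postinst hook boils down to something like this (a simplified sketch rather than the actual kdump-tools hook; mkinitramfs's -d option selects an alternate configuration directory):

#!/bin/sh
# sketch of an /etc/kernel/postinst.d hook: $1 is the new kernel version
version="$1"
confdir=/var/lib/kdump/initramfs-tools

# replicate the stock configuration, then force the smaller module selection
rm -rf "$confdir"
cp -a /etc/initramfs-tools "$confdir"
sed -i 's/^MODULES=.*/MODULES=dep/' "$confdir/initramfs.conf"

# build the kdump-specific initrd in /var/lib/kdump
mkinitramfs -d "$confdir" -o "/var/lib/kdump/initrd.img-$version" "$version"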

Using the smaller initrd.img

As we outlined previously, the /etc/default/kdump-tools file can be used to point to a specific initrd.img/vmlinuz pair. So we can do:

KDUMP_KERNEL=/var/lib/kdump/vmlinuz
KDUMP_INITRD=/var/lib/kdump/initrd.img

When kexec is loaded by kdump-config, it will find the appropriate files and load them into memory for future use. But for that to happen, those new parameters need to point to the correct files. Here we use symbolic links to achieve our goal.

Using the hooks to create the proper symbolic links turned out to be overly complex. But since kdump-config runs at each boot, we can make this script responsible for symlink maintenance.

This ensures that the symbolic links always point to the files matching the version of the running kernel.

One drawback of this method is that, in the remote eventuality that the running kernel breaks the kernel crash dump functionality, we cannot automatically revert to the previous kernel in order to use a known configuration.

A future evolution of the kdump-config tool will add a function to specify which kernel version to use when creating the symbolic links. In the meantime, the links can be created manually with these simple commands:

$ export wanted_version="some version"
$ rm -f /var/lib/kdump/initrd.img
$ ln -s /var/lib/kdump/initrd.img-${wanted_version} /var/lib/kdump/initrd.img
$ rm -f /var/lib/kdump/vmlinuz
$ ln -s /boot/vmlinuz-${wanted_version} /var/lib/kdump/vmlinuz

For those of you interested in the nitty-gritty details, you can find the modifications in the following git branch:

Update: new git branch with a cleaned-up commit history: https://github.com/karibou/makedumpfile-next/tree/smaller_initrd_final

pitti

autopkgtest 3.14 "now twice as rebooty"

Almost every new autopkgtest release brings some small improvements, but 3.14 got some reboot-related changes worth pointing out.

First of all, I simplified and unified the implementation of rebooting across all runners that support it (ssh, lxc, and qemu). If you use a custom setup script for adt-virt-ssh you might have to update it: previously, the setup script needed to respond to a reboot function to trigger a reboot, wait for the testbed to go down, and come back up. This got split into issuing the actual reboot system command directly by adt-run itself on the testbed, and the "wait for go down and back up" part. The latter now has a sensible default implementation: it simply waits for the ssh port to become unavailable, and then waits for ssh to respond again; most testbeds should be fine with that. You only need to provide the new wait-reboot function in your ssh setup script if you need to do anything else (such as re-enabling ssh after reboot). Please consult the manpage and the updated SKELETON for details.

The ssh runner gained a new --reboot option to indicate that the remote testbed can be rebooted. This will automatically declare the reboot testbed capability, and thus you can now run rebooting tests without having to use a setup script. This is very useful for running tests on real iron.

Finally, in testbeds which support rebooting, your tests will now find a new /tmp/autopkgtest-reboot-prepare command. Like /tmp/autopkgtest-reboot it takes an arbitrary "marker", saves the current state, restores it after reboot, and re-starts your test with the marker; however, it will not trigger the actual reboot but expects the test to do that. This is useful if you want to test a piece of software which does a reboot as part of its operation, such as a system-image upgrade. Another use case is testing kernel crashes, kexec, or another "nonstandard" way of rebooting the testbed. README.package-tests shows an example of how this looks.

3.14 is now available in Debian unstable and Ubuntu wily. As usual, for older releases you can just grab the deb and install it; it works on all supported Debian and Ubuntu releases. Enjoy, and let me know if you run into trouble or have questions!

Timo Jyrinki

Quick Look: Dell XPS 13 Developer Edition (2015) with Ubuntu 14.04 LTS

I recently obtained the newest of Dell's Ubuntu developer offerings, the XPS 13 (2015, model 9343). I opted for the FullHD non-touch display, mostly because of better battery life, no actual need for a higher resolution, and the matte screen, which is great outdoors. Touch would have been nice to have, but in my work I don't really need it. The other specifications include an i7-5600U CPU, 8 GB RAM, a 256 GB SSD [edit: lshw], and of course Ubuntu 14.04 LTS pre-installed as an OEM-specific installation. It was not possible to order it directly from the Dell site, as Finland is reportedly not an online market for Dell... The wholesale company however managed to get two models on their lists, so it's now possible to order via retailers.

[edit: here are some country-specific direct web order links, however: US, DE, FR, SE, NL]

In this blog post I give a quick look at how I started using it, and make a few observations about the pre-installed Ubuntu. I was personally interested in using the pre-installed Ubuntu the way a non-Debian/Ubuntu developer would, but Dell has also provided instructions for Ubuntu 15.04, Debian 7.0 and Debian 8.0 advanced users, among other things. Even if you are not going to use the pre-installed Ubuntu, the benefit of buying an Ubuntu laptop is obviously the smaller cost, and on the other hand contributing to free software (by paying for the hardware enablement engineering done by or purchased by Dell).

Unboxing

[Photo gallery: the black box (and a white cat); the opened box; the lid opened for the first time, no dust yet; first boot, transitioning from the boot logo to a first-time Ubuntu video; a clip from the end of the welcoming video; first-time setup with language, Dell EULA, WiFi, location, keyboard and user+password; creating recovery media (I opted not to, having read that it's highly recommended to install upgrades first, including to this tool); finalizing setup; ready to log in; and a not-so-recent 14.04 LTS image with lots of updates pending.]

Problems in the First Batch

Unfortunately the first batch of XPS 13s with Ubuntu is going to ship with some problems. They're easy to fix if you know how, but it's sad that they're in the factory image to begin with. There is no knowledge of when a fixed batch will start shipping; July, maybe?

First of all, installing software upgrades stops. You need to run the following command via Dash → Terminal once:

sudo apt-get install -f

(it suggests upgrading libc-dev-bin, libc6-dbg, libc6-dev and udev). After that you can continue running Software Updater as usual, maybe rebooting in between.

Secondly, the fixed touchpad driver is included but not enabled by default. You need to enable the only non-enabled "Additional Driver" in the driver dialog, as instructed on YouTube.

Clarification: you can safely ignore the two paragraphs below; they're just for advanced users like me who want to play with upgraded driver stacks.

Optionally, since I'm interested in the latest graphics drivers, especially on brand new hardware like Intel Broadwell, I upgraded my Ubuntu to use the 14.04.2 Hardware Enablement stack (matching 14.10 hardware support):

sudo apt install --install-recommends libgles2-mesa-lts-utopic libglapi-mesa-lts-utopic linux-generic-lts-utopic xserver-xorg-lts-utopic libgl1-mesa-dri-lts-utopic libegl1-mesa-drivers-lts-utopic libgl1-mesa-glx-lts-utopic:i386

Even though this is much better than a plain Ubuntu 14.10 would be, since many of the Dell fixes continue to be in use, some functionality might become worse compared to the pre-installed stack. The only thing I have noticed, though, is the internal microphone no longer working out of the box, requiring a kernel patch as mentioned in Dell's notes. This is no surprise, since the real eventual upstream support involves switching from HDA to I2S, and during the 14.10 kernel work that was not nearly done. If you're excited about new drivers, I'd recommend waiting until August, when the 15.04-based 14.04.3 stack is available (same package names, but 'vivid' instead of 'utopic'). [edit: I couldn't resist when I saw that linux-generic-lts-vivid (the 3.19 kernel) is already in the archives. 14.04.2 + that gives me a working microphone again!]

[edit 08/2015: the full 14.04.3 HWE stack is now available, improving graphics performance and features among other things; everything seems good:

sudo apt install --install-recommends linux-generic-lts-vivid libgles2-mesa-lts-vivid libglapi-mesa-lts-vivid xserver-xorg-lts-vivid libgl1-mesa-dri-lts-vivid libegl1-mesa-lts-vivid libgl1-mesa-glx-lts-vivid:i386 libegl1-mesa-lts-vivid libwayland-egl1-mesa-lts-vivid mesa-vdpau-drivers-lts-vivid libgl1-mesa-dri-lts-vivid:i386 ]

Conclusion

Dell XPS 13 Developer Edition with Ubuntu 14.04 LTS is an extremely capable laptop + OS combination nearing perfection, but not quite there because of the software problems in the launch pre-install image. The laptop looks great, feels like a quality product should, and is very compact for the screen size. I've moved all my work over to it, and everything so far is working smoothly in my day-to-day tasks. I'm staying on Ubuntu 14.04 LTS and using my previous LXC configuration to run the latest Ubuntu and Debian development versions. I've also already made some interesting changes, like the LUKS In-Place Conversion, converting the pre-installed Ubuntu into a whole-disk-encrypted one (not recommended for the faint-hearted; GRUB reconfiguration is a bit of a pain). I look forward happily to a few productive years of working with this one!

pitti

Ramblings from LinuxCon/Plumbers 2014

I'm on my way home from Düsseldorf, where I attended the LinuxCon Europe and Linux Plumbers conferences. I was quite surprised how huge LinuxCon was: there were about 1,500 people there! Certainly much more than last year in New Orleans.

Containers (in both LXC and Docker flavors) are the Big Thing everybody talks about and works with these days; there was hardly a presentation where these weren't mentioned at all, and (what felt like) half of the presentations were either about how to improve these, or how to use these technologies to solve problems. For example, some people/companies really take LXC to the max and try to do everything in them, including tasks which in the past you would only have considered full VMs for, like untrusted third-party tenants. There was, for example, an interesting talk on how to secure networking for containers, and pretty much everyone uses Docker or LXC now to deploy workloads and run CI tests. There are projects like "fleet" which manage systemd jobs across an entire cluster of containers (a distributed task scheduler), or like project-builder.org which auto-builds packages from each commit of projects.

Another common topic is the trend towards building/shipping complete (read-only) system images, atomic updates, and all that goodness. The central thing here was certainly "Stateless systems, factory reset, and golden images", which analyzed the common requirements and proposed how to implement this with various package systems and scenarios. In my opinion this is certainly the way to go, as our current solution on Ubuntu Touch (i. e. Ubuntu's system-image) is far too limited and static yet; it doesn't extend to desktop/server/cloud workloads at all. It's also a lot of work to implement this properly, so it's certainly understandable that we took that shortcut for prototyping and the relatively limited Touch phone environment.

At Plumbers my main occupations were the highly interesting LXC track, to see what's coming in the container world, and the systemd hackfest. At the latter I was again mostly listening (after all, I'm still learning most of the internals there...)
and was able to work on some cleanups and improvements, like getting rid of some of Debian's patches and properly running the test suite. It was also great to sync up again with David Zeuthen about the future of udisks and some particular proposed new features. It looks like I'm the de-facto maintainer now, so I'll need to spend some time soon to review/include/clean up some much-requested little features and some fixes.

All in all a great week to meet some fellows of the FOSS world again, get to know a lot of new interesting people and projects, and re-learn to drink beer in the evening (I hardly drink any at home :-P). If you are interested you can also see my raw notes, but beware that they are mostly just scribblings.

Now, off to next week's Canonical meeting in Washington, DC!

pitti

Running autopkgtests in the cloud

It's great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we'll get another ~4,000 from Perl and Ruby packages. Both Debian's and Ubuntu's autopkgtest runner machines are currently static, manually maintained machines which ache under their load. They just don't scale, and at least Ubuntu's runners need quite a lot of hand-holding. This needs to stop. To quote Tim "The Tool Man" Taylor: we need more power! This is a perfect scenario to put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else's Problem.

I recently brushed up autopkgtest's ssh runner and the Nova setup script. Previous versions didn't support "revert" yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn't supported well yet. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it's probably better to do it from an "autopkgtest controller" cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the "controller" node are quite low: you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel, as all the real testing action happens in the ephemeral testbed VMs.

The most important preparation step for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: you need suitable VM images. They should be generated every day so that the tests don't have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to the local VM images you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored the entire knowledge of how to turn a pristine "default" vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script.
adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it's also easy to run for building custom cloud images: essentially, you pick a suitable "pristine" image, nova boot an instance from it, run adt-setup-vm through ssh, and then turn this into a new adt-specific "daily" image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter it takes is the name of the pristine image to base it on. This was tested on Canonical's Bootstack cloud, so it might need some adjustments on other clouds.

Thus something like this should be run daily (pick the base images from nova image-list):

$ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
$ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network-manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

$ cat pkglist
apport
apt
aptdaemon
apache2
autopilot-gtk
autopkgtest
binutils
chromium-browser
cups
dbus
gem2deb
glib-networking
glib2.0
gvfs
kcalc
keystone
libnih
libreoffice
lintian
lxc
mysql-5.5
network-manager
nut
ofono-phonesim
php5
postgresql-9.4
python3.4
sbuild
shotwell
systemd-shim
ubiquity
ubuntu-drivers-common
udisks2
upstart


Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list, and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack.

Finally, let's run the packages from above using ten VMs in parallel:

parallel -j 10 ./adt-run-nova -- $(< pkglist)


After a few iterations of bug fixing there are now only two failures left, both due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

pitti

autopkgtest 3.5: Reboot support, Perl/Ruby implicit tests

Last week’s autopkgtest 3.5 release (in Debian sid and Ubuntu Utopic) brings several new features which I’d like to announce.

Tests that reboot

For testing low-level packages like init or the kernel it is sometimes desirable to reboot the testbed in the middle of a test. For example, I added a new boot_and_services systemd autopkgtest which configures grub to boot with systemd as pid 1, reboots, and then checks that the most important services like lightdm, D-BUS, NetworkManager, and cron come up as expected. (This test will be expanded a lot in the future to cover other areas like the journal, logind, etc.)

In a testbed which supports rebooting (currently only QEMU) your test will now find an "autopkgtest-reboot" command, which the test calls with an arbitrary "marker" string. autopkgtest will then reboot the testbed, save/restore any files it needs to (like the test's file tree or previously created artifacts), and then re-run the test with ADT_REBOOT_MARK=mymarker.
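
As a sketch, a rebooting test could be structured like this (the script path, marker name, and checks are made-up illustrations; autopkgtest-reboot and ADT_REBOOT_MARK behave as described above):

#!/bin/sh
# debian/tests/boot-test (hypothetical test script)
set -e
case "$ADT_REBOOT_MARK" in
    "")                      # first run: set things up, then request the reboot
        echo "prepared" > /var/tmp/state
        autopkgtest-reboot mymarker
        ;;
    mymarker)                # re-run after the reboot, state restored
        grep -q prepared /var/tmp/state
        echo "survived the reboot"
        ;;
esac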

The new “Reboot during a test” section in README.package-tests explains this in detail with an example.

Implicit test metadata for similar packages

The Debian pkg-perl team recently discussed how to add package tests to the ~3,000 Perl packages. For most of these the test metadata looks pretty much the same, so they created a new pkg-perl-autopkgtest package which centralizes the logic. autopkgtest 3.5 now supports an implicit debian/tests/control file, to avoid having to modify several thousand packages with exactly the same file.

An initial run already looked quite promising: 65% of the packages pass their tests. There will be a few iterations to identify common failures and fix those in pkg-perl-autopkgtest and autopkgtest itself now.

There is still some discussion about how implicit test control files go together with the DEP-8 specification, as other runners like sadt do not support them yet. Most probably we'll declare those packages as XS-Testsuite: autopkgtest-pkg-perl instead of the usual autopkgtest.
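
In debian/control that would presumably be a single header field, e. g. (libfoo-perl is a made-up source package name):

Source: libfoo-perl
XS-Testsuite: autopkgtest-pkg-perl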

In the same vein, Debian’s Ruby maintainer (Antonio Terceiro) added implicit test control support for Ruby packages. We haven’t done a mass test run with those yet, but their structure will probably look very similar.

pitti

vim config for Markdown+LaTeX pandoc editing

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look so much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTeX and the other goodies. That's how it always should have been! ☺
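
For example, producing a beamer PDF or a LaTeX article from Markdown is a one-liner (file names are placeholders):

pandoc -t beamer slides.md -o slides.pdf   # Markdown to a beamer presentation PDF
pandoc article.md -o article.pdf           # Markdown to PDF via an intermediate LaTeX document
pandoc article.md -o article.tex           # just emit the LaTeX source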

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------

function s:MDSettings()
noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

" adjust syntax highlighting for LaTeX parts
"   inline formulas:
syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
"   environments:
syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
"   commands:
syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd FileType markdown :call <SID>MDSettings()


That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

pitti

autopkgtest 3.2: CLI cleanup, shell command tests, click improvements

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) now went away. Now there is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).

The --help output now is a lot easier to read, both due to above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid

$ adt-run libpng @adt_sid


Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]


This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests, this release now also brings this for deb packages.

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI etc. have been documented in the manpages and the various READMEs. Enjoy!

pitti

deb, click, schroot, LXC, QEMU, phone, cloud: One autopkgtest to Rule Them All!

We currently use completely different methods and tools of building test beds and running tests for Debian vs. Click packages, for normal uploads vs. CI airline landings vs. upstream project merge proposal testing, and keep lots of knowledge about Click package test metadata external and not easily accessible/discoverable.

Today I released autopkgtest 3.0 (and 3.0.1 with a few minor updates) which is a major milestone in unifying how we run package tests both locally and in production CI. The goals of this are:

• Keep all test metadata, such as test dependencies, commands to run the test etc., in the project/package source itself instead of external. We have had that for a long time for Debian packages with DEP-8 and debian/tests/control, but not yet for Ubuntu’s Click packages.
• Use the same tools for Debian and Click packages to simplify what developers have to know about and to reduce the amount of test infrastructure code to maintain.
• Use the exact same testbeds and test runners in production CI as developers use locally, so that you can reproduce and investigate failures.
• Re-use the existing autopkgtest capabilities for using various kinds of testbeds, and conversely, making all new testbed types immediately available to all package formats.
• Stop putting tests into the Ubuntu archive as packages (such as mediaplayer-app-autopilot). This just adds packaging and archive space overhead and also makes updating tests a lot harder and taking longer than it should.

So, let’s dive into the new features!

We want to run tests on real hardware such as a laptop of a particular brand with a particular graphics card, or an Ubuntu phone. We also want to restructure our current CI machinery to run tests on a real OpenStack cloud and gradually get rid of our hand-maintained QA lab with its test machines. While these use cases seem rather different, they both have in common that there is an already existing machine which is pretty much only accessible with ssh. Once you have an ssh connection, they look pretty much the same, you just need different initial setup (like fiddling with adb, calling nova boot, etc.) to prepare them.

So the new adt-virt-ssh runner factorizes all the common bits such as communicating with adt-run, auto-detecting sudo availability, doing SSH connection sharing etc., and delegates the target specific bits to a “setup script”. E. g. we could specify --setup-script ssh-setup-nova or --setup-script ssh-setup-adb which would then get called with open at the appropriate time by adt-run; it calls the nova commands to create a VM, or run a few adb commands to install/start ssh and install the public key. Then autopkgtest does its thing, and eventually calls the script with cleanup again. The actual protocol is a bit more involved (see manpage), but that’s the general idea.
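
Schematically, a setup script is just an executable that adt-run invokes with a subcommand such as open or cleanup; the sketch below only shows that call flow, and the real protocol and output format are documented in the manpage and the SKELETON template mentioned below:

#!/bin/sh
# bare-bones ssh setup-script sketch; not a complete implementation
set -e
case "$1" in
    open)
        # create the testbed here, e. g. "nova boot ..." or a few adb
        # commands, then report the ssh connection details back to
        # adt-run (see the manpage for the exact expected output)
        ;;
    cleanup)
        # tear the testbed down again, e. g. "nova delete ..."
        ;;
esac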

autopkgtest now ships readymade scripts for these two use cases. So you could e. g. run the libpng tests in a temporary cloud VM:

# if you don't have one, create it with "nova keypair-create"
$ nova keypair-list
[...]
| pitti | 9f:31:cf:78:50:4f:42:04:7a:87:d7:2a:75:5e:46:56 |

# find a suitable image
$ nova image-list
[...]
| ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 | Ubuntu Server 14.04 LTS (amd64 20140607.1) - Partner Image                         | ACTIVE |

$ nova flavor-list
[...]
| 100 | standard.xsmall | 1024 | 10 | 10 | | 1 | 1.0 | N/A |

# now run the tests: please be patient, this takes a few mins!
$ adt-run libpng --setup-commands="apt-get update" --- ssh -s /usr/share/autopkgtest/ssh-setup/nova -- \
-f standard.xsmall -i ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 -k pitti
[...]
adt-run [16:23:16]: test build:  - - - - - - - - - - results - - - - - - - - - -
build                PASS


Please see man adt-virt-ssh for details how to use it and how to write setup scripts. There is also a commented /usr/share/autopkgtest/ssh-setup/SKELETON template for writing your own for your use cases. You can also not use any setup script and just specify user and host name as options, but please remember that the ssh runner cannot clean up after itself, so never use this on important machines which you can’t reset/reinstall!

Ubuntu phones with system images have a read-only file system where you can't install test dependencies with apt. A similar case is using the "null" runner without root. When apt-get install is not available, autopkgtest now has a reduced fallback mode: it downloads the required test dependencies, unpacks them into a temporary directory, and runs the tests with $PATH, $PYTHONPATH, $GI_TYPELIB_PATH, etc. pointing to the unpacked temp dir. Of course this only works for packages which are relocatable in that way, i. e. libraries, Python modules, or command line tools; it will totally fail for things which look for config files, plugins etc. in hardcoded directory paths. But it's good enough for the purposes of Click package testing, such as installing autopilot, libautopilot-qt etc.

Click package support

autopkgtest now recognizes click source directories and *.click package arguments, and introduces a new test metadata specification syntax in a click package manifest. This is similar in spirit and capabilities to DEP-8 debian/tests/control, except that it's using JSON:

"x-test": {
    "unit": "tests/unittests",
    "smoke": {
        "path": "tests/smoketest",
        "depends": ["shunit2", "moreutils"],
        "restrictions": ["allow-stderr"]
    },
    "another": {
        "command": "echo hello > /tmp/world.txt"
    }
}

For convenience, there is also some magic to make running autopilot tests particularly simple. E. g. our existing click packages usually specify something like

"x-test": {
    "autopilot": "ubuntu_calculator_app"
}

which is enough to "do what I mean", i. e. implicitly add the autopilot test depends and run autopilot with the specified test module name. You can specify your own dependencies and/or commands, and restrictions etc., of course. So with this, and the previous support for non-apt test dependencies and the ssh runner, we can put all this together to run the tests for e. g. the Ubuntu calculator app on the phone:

$ bzr branch lp:ubuntu-calculator-app
# built straight from that branch; TODO: where is the official download URL?
$ wget http://people.canonical.com/~pitti/tmp/com.ubuntu.calculator_1.3.283_all.click
$ adt-run ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- \
[..]
Traceback (most recent call last):
File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 93, in test_divide_with_infinity_length_result_number
self._assert_result("0.33333333")
File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 63, in _assert_result
self.main_view.get_result, Eventually(Equals(expected_result)))
File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 406, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: After 10.0 seconds test failed: '0.33333333' != '0.3'

Ran 33 tests in 295.586s
FAILED (failures=1)


Note that the current adb ssh setup script deals with some things like applying the autopilot click AppArmor hooks and disabling screen dimming, but it does not do the first-time setup (connecting to network, doing the gesture intro) and unlocking the screen. These are still on the TODO list, but I need to find out how to do these properly. Help appreciated!

Click app tests in schroot/containers

But, that’s not the only thing you can do! autopkgtest has all these other runners, so why not try and run them in a schroot or container? To emulate the environment of an Ubuntu Touch session I wrote a --setup-commands script:

adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- schroot utopic


This will actually work in the sense of running (and succeeding) the autopilot tests, but it will fail due to a lot of libust[11345/11358]: Error: Error opening shm /lttng-ust-wait... warnings on stderr. I don’t know what these mean, just that I also see them on the phone itself occasionally.

I also wrote another setup-commands script which emulates “read-only apt”, so that you can test the “unpack only” fallback. So you could prepare a container with click and the App framework preinstalled (so that it doesn’t always take ages to install them), starting from a standard adt-build-lxc container:

$ sudo lxc-clone -o adt-utopic -n click
$ sudo lxc-start -n click
# run "sudo apt-get install click ubuntu-sdk-libs ubuntu-app-launch-tools" there
# then "sudo powerdown"

# current apparmor profile doesn't allow remounting something read-only
$echo "lxc.aa_profile = unconfined" | sudo tee -a /var/lib/lxc/click/config  Now that container has enough stuff preinstalled to be reasonably fast to set up, and the remaining test dependencies (mostly autopilot) work fine with the unpack/$*_PATH fallback:

$ adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
      --setup-commands /usr/share/autopkgtest/setup-commands/ro-apt \
      ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click \
      --- lxc -es click

This will successfully run all the tests, and provided you have apt-cacher-ng installed, it only takes a few seconds to set up. This might be a nice thing to do on merge proposals, if you don't have an actual phone at hand, or don't want to clutter it up.

autopkgtest 3.0.1 will be available in Utopic tomorrow (through autosyncs). If you can't wait to try it out, download it from my people.c.c page ☺. Feedback appreciated!

pitti

Booting Ubuntu with systemd: Now in Utopic

Hot on the heels of my previous announcement of my systemd PPA for trusty, I'm now happy to announce that the latest systemd 204-10ubuntu1 just landed in Utopic, after sorting out enough of the current uninstallability in -proposed. The other fixes (bluez, resolvconf, lightdm, etc.) already landed a few days ago. Compared to the PPA, these have a lot of other fixes and cleanups, due to the excellent hackfest that we held last weekend.

So, upgrade today and let us know about problems in bugs tagged "systemd-boot". I think systemd in current utopic works well enough to not break a developer's day-to-day workflow, so we can now start parallelizing the work of identifying packages which only have upstart jobs and providing corresponding systemd units (or SysV scripts). Also, this hasn't yet been tested on the phone at all; I'm sure that it'll require quite some work (e. g. lxc-android-config has a lot of upstart jobs).

To clarify, there is no fixed date/plan/deadline for when this will be done; in particular it might well last more than one release cycle. So we'll "release" (i. e. switch to it as a default) when it's ready.

pitti

Booting Ubuntu with systemd: Test packages available

At the last UDS we talked about migrating from upstart to systemd to boot Ubuntu, after Mark announced that Ubuntu will follow Debian in that regard. There's a lot of work to do, but it parallelizes well once developers can run systemd on their workstations or in VMs easily and the system boots up enough to still be able to work with it.

So today I merged our systemd package with Debian again, dropped the systemd-services split (which wasn't accepted by Debian and will be unnecessary now), and put it into my systemd PPA. Quite surprisingly, this booted a fresh 14.04 VM pretty much right away (of course there's no Plymouth prettiness). The main two things which were missing were NetworkManager and lightdm, as these don't have an init.d script at all (NM) or it isn't enabled (lightdm). Thus the PPA also contains updated packages for these two which provide a proper systemd unit. With that, the desktop is pretty much fully working, except for some details like cron not running. I didn't go through /etc/init/*.conf with a fine-toothed comb yet to check which upstart jobs need to be ported; that's now part of the TODO list.

So, if you want to help with that, or just test and tell us what's wrong, take the plunge. In a 14.04 VM (or real machine if you feel adventurous), do:

sudo add-apt-repository ppa:pitti/systemd
sudo apt-get update
sudo apt-get dist-upgrade

This will replace systemd-services with systemd, update network-manager and lightdm, and a few libraries. Up to now, when you reboot you'll still get good old upstart.

To actually boot with systemd, press Shift during boot to get the grub menu, edit the Ubuntu stanza, and append this to the linux line: init=/lib/systemd/systemd.

For the record, if pressing Shift doesn't work for you (too fast, VM, or similar), enable the grub menu with:

sudo sed -i '/GRUB_HIDDEN_TIMEOUT/ s/^/#/' /etc/default/grub
sudo update-grub

Once you are satisfied that your system boots well enough, you can make this permanent by adding the init= option to /etc/default/grub (and possibly removing the comment sign from the GRUB_HIDDEN_TIMEOUT lines) and running sudo update-grub again. To go back to upstart, just edit the file again, remove the init= option, and run sudo update-grub again.

I'll be at the Debian systemd/GNOME sprint next weekend, so I feel reasonably well prepared now.

Update: As the comments pointed out, this bricked /etc/resolv.conf. I now uploaded a resolvconf package to the PPA which provides the missing unit (the counterpart to the /etc/init/resolvconf.conf upstart job), and this now works fine. If you are in that situation, please boot with upstart and do the following to clean up:

sudo rm /etc/resolv.conf
sudo ln -s ../run/resolvconf/resolv.conf /etc/resolv.conf

Then you can boot back to systemd.

Update 2: If you want to help testing, please file bugs with a systemd-boot tag. See the list of known bugs when booting with systemd.

My CuBox-i has arrived

March 11, 2014

A couple of weeks ago, Gunnar Wolf mentioned on IRC that his CuBox-i4 had arrived. This resulted in various jealous noises from me; having heard about this device making the rounds at the Kernel Summit, I ordered one for myself back in December, as part of the long-delayed HDification of our home entertainment system, coinciding with the purchase of a new Samsung SmartTV. We've been running an Intel Coppermine Celeron for a decade as a MythTV frontend and encoder (hardware-assisted with a PVR-250), which is fine for SD video but really doesn't cut it for anything HD. So after finally getting a TV that would showcase HD in all its glory, I figured it was time to upgrade from an S-Video-out, barely-limping-along tower machine to something more modern with HDMI out, eSATA, and hardware video decoding, and whose biggest problem is that it's so small it threatens to get lost in the wiring!

Since placing the order, I've been bemused to find that the SmartTV is so smart that it has had a dramatic impact on how we consume media; between that and our decision not to be a boiled frog in the face of DISH Network's annual price increase, the MythTV frontend has become a much less important part of our entertainment center, well before I ever got a chance to lay hands on the intended replacement hardware. But that's a topic for another day.

Anyway, the CuBox-i4 finally arrived in the mail on Friday, so of course I immediately needed to start hacking on it! Like Gunnar, who wrote last week about his own experience getting a "proper" Debian install on the box, I'm not content with running a binary distribution image prepared by some third party; I expect my hardware toys to run official distro packages assembled using official distro tools and, if at all possible, distributed on official distro images for a minimum of hassle. Whereas Gunnar was willing to settle for using third-party binaries for the bootloader and kernel, however, I'm not inclined to do any such thing.
And between my stint at Linaro a few years ago and the recent work on Ubuntu for phones, I do have a little knowledge of Linux on ARM (probably just enough to be dangerous), so I set to work trying to get the CuBox-i4 bootable with stock Debian unstable.

Being such a cutting-edge piece of hardware, that does pose some challenges. Support for the i.MX6 chip is in the process of being upstreamed to U-Boot, but the support for the CuBox-i devices isn't there yet, nor is the support for SPL on i.MX6 (which allows booting the variants of the CuBox-i with a single U-Boot build, instead of requiring a different bootloader build for each flavor). The CuBox-i U-Boot that SolidRun makes available (with source at github) is based on U-Boot 2013.10-rc4, so more than a full release behind Debian unstable, and the patches there don't apply to U-Boot 2014.01 without a bit of effort. But if it's worth doing, it's worth doing right, so I've taken the time to rebase the CuBox-i patches on top of 2014.01, publishing the results of the rebase to my own github repository and submitting a bug to the Debian U-Boot maintainers requesting its inclusion.

The next step is to get a Debian kernel that not only works, but fully supports the hardware out of the box (a 3.13 generic arm kernel will boot on the machine, but little things like ethernet and hdmi don't work yet). I've created a page in the Debian wiki for tracking the status of this work.

Michael Hall

My first Debian package uploaded

February 19, 2014

Today I reached another milestone in my open source journey: I got my first package uploaded into Debian's archives. I've managed to get packages uploaded into Ubuntu before, and I've attempted to get one into Debian, but this is the first time I've actually gotten a contribution in that would benefit Debian users. I couldn't have done it without the help and mentorship of Paul Tagliamonte, but I was also helped by a number of others in the Debian community, so a big thank you to everybody who answered my questions and walked me through getting set up with things like Alioth and re-learning how to use SVN.

One last bit of fun: I was invited to join the Linux Unplugged podcast today to talk about yesterday's post. You can listen to it (and watch IRC comments scroll by) here: http://www.jupiterbroadcasting.com/51842/neckbeard-entitlement-factor-lup-28/

Michael Hall

Getting into Debian

February 11, 2014

Quick overview post today, because it's late and I don't have anything particular to talk about.

First of all, the next vUDS was announced today. We're a bit late in starting it off, but we wanted to have another one early enough to still be useful to the Trusty release cycle. Read the linked mailing list post for details about where to find the schedule and how to propose sessions.

I pushed another update to the API website today that does a better job balancing the 2-column view of namespaces and fixes the sub-nav text to match the WordPress side of things. This was the first deployment in a while to go off without a problem, thanks to having a new staging environment created last time.
I’m hoping my deployment problems on this are now far behind me. I took a task during my weekly Core Apps update call to look more into the Terminal app’s problem with enter and backspace keys, so I may be pinging some of you in the coming week about it to get some help. You have been warned. Finally, I decided a few weeks ago to spread out my after-hours community a activity beyond Ubuntu, and I’ve settled on the Debian new maintainers Django website as somewhere I can easily start. I’ve got a git repo where I’m starting writing the first unit tests for that website, and as part of that I’m also working on Debian packaging for the Python model-mommy library which we use extensively in Ubuntu’s Django website. I’m having to learn (or learn more) Debian packaging, Git workflows and Debian’s processes and community, all of which are going to be good for me, and I’m looking forward to the challenge. Read more Timo Jyrinki Workaround for setting Full RGB when Intel driver's Automatic setting does not work November 27, 2013 by Timo Jyrinki Under debian , en , software , ubuntu Background I upgraded from Linux 3.8 to 3.11 among with newer Mesa, X.Org and Intel driver recently and I found a small workaround was needed because of upstream changes.The upstream change was the Add "Automatic" mode for "Broadcast RGB" property, and defaulting to the Automatic. This is a sensible default, since many (most?) TVs default to the more limited 16-235, and continuing to default to Full from the driver side would mean wrong colors on the TV. I've set my screen to support the full 0-255 range available to not cut the amount of available shades of colors down.Unfortunately it seems the Automatic setting does not work for my HDMI input, ie blacks become grey since the driver still outputs the more limited range. Maybe there could be something to improve on the driver side, but I'd guess it's more about my 2008 Sony TV actually having a mode that the standard suggests limited range for. I remember the TV did default to limited range, so maybe the EDID data from TV does not change when setting the RGB range to Full.I hope the Automatic setting works to offer full range on newer screens and the modes they have, but that's probably up to the manufacturers and standards.Below is an illustration of the correct setting on my Haswell CPU. When the Broadcast RGB is left to its default Automatic setting, the above image is displayed. When set to Full, the image below with deeper blacks is seen instead. I used manual settings on my camera so it's the same exposure.Workaround For me the workaround has evolved to the following so far. Create a /etc/X11/Xsession.d/95fullrgb file: if [ "$(/usr/bin/xrandr -q --prop | grep 'Broadcast RGB: Full' | wc -l)" = "0" ] ; then/usr/bin/xrandr --output HDMI3 --set "Broadcast RGB" "Full"fiAnd since I'm using lightdm, adding the following to /etc/lightdm/lightdm.conf means the flicker only happens once during bootup:display-setup-script=/etc/X11/Xsession.d/95fullrgbImportant: when using the LightDM setting, enable executable bits (chmod +x) to /etc/X11/Xsession.d/95fullrgb for it to work. Obviously also check your output, for me it was HDMI3.If there is no situation where it'd set back to "Limited 16:235" setting on its own, the display manager script should be enough and having it in /etc/X11/Xsession.d is redundant and slows login time down. 
If there is no situation where the setting would be flipped back to "Limited 16:235" on its own, the display manager script alone should be enough; having it in /etc/X11/Xsession.d as well is redundant and slows login time down. I think for me login maybe went from 2 seconds to 3 seconds, since executing an xrandr query is not cheap.

Misc

Note that, unrelated to Full range usage, the Limited range currently behaves incorrectly on Haswell until the patch in bug #71769 is accepted. That means the blacks are grey in Limited mode even if the screen is also set to Limited. I'd prefer to have a kernel parameter for the Broadcast RGB setting, although my Haswell machine boots so fast that I don't get to see too many seconds of wrong colors... Read more

pitti
How to watch system D-BUS method calls
September 26, 2013 by pitti Under d-bus, debian, debug, gnome, ubuntu

The current default D-BUS configuration (at least on Ubuntu) disallows monitoring method calls on the system D-BUS (dbus-monitor --system), which makes debugging rather cumbersome; this worked years ago, but apparently got changed for security reasons. It took me half an hour to figure out how to enable this for debugging, and as this has annoyingly little Google juice (I didn't find any solution), let's add some. The trick seems to be to set a global policy that allows eavesdropping on any method call after the individual /etc/dbus-1/system.d/*.conf files have applied their restrictions, for which there is already a convenient facility. Create a file /etc/dbus-1/system-local.conf with:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <policy user="root">
    <!-- Allow everything to be sent -->
    <allow send_destination="*" eavesdrop="true"/>
    <!-- Allow everything to be received -->
    <allow eavesdrop="true"/>
    <allow send_type="method_call"/>
  </policy>
</busconfig>

Then sudo dbus-monitor --system displays everything. Needless to say, you don't want this file on any production system!
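Once the policy is in place, dbus-monitor's standard match-rule syntax is handy for cutting the firehose down to just the traffic you care about; for example, to watch only method calls on NetworkManager's interface (substitute whatever interface you're debugging):

sudo dbus-monitor --system "type='method_call',interface='org.freedesktop.NetworkManager'"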
Does anyone know an easier way? My first naive stab was to run dbus-monitor as root, but that doesn't make any difference at all.

Update: Turns out this is already described in a better way at https://wiki.ubuntu.com/DebuggingDBus. Yay me for not finding that. I updated the above recipe to limit access to root, which is much better indeed. Read more

pitti
umockdev 0.2.2 released
May 24, 2013 by pitti Under announcement, debian, gnome, qa, release, testing, ubuntu, umockdev

I did a 0.2.2 maintenance release for umockdev to fix building with Vala 0.16.1, gcc 4.8 (the changed sizeof behaviour caused segfaults), and current udev releases (umockdev-record stumbled over the new "link priority" fields of udevadm). There are also a couple of bug fixes, but no new features. Read more

Timo Jyrinki
Network from laptop to Android device over USB
May 21, 2013 by Timo Jyrinki Under debian, devices, en, ubuntu

If you're running an Android device with a GNU userland Linux in a chroot and need full network access over a USB cable (so that you can use your laptop/desktop machine's network connection from the device), here's a quick primer on how it can be set up.

When doing Openmoko hacking, one always first plugged in the USB cable and forwarded the network, or, like I did later, forwarded the network over Bluetooth. That was mostly because WiFi was quite unstable with many of the kernels. I recently found myself using a chroot on a Nexus 4 without working WiFi, so instead of my usual WiFi usage I needed network over USB... trivial, of course, except that there's Android in the way and I'm an Android newbie. Thanks to ZDmitry on Freenode, I got the bits for the Android part, so I got it working.

On the device, have e.g. data/usb.sh with the following contents:

#!/system/xbin/sh
CHROOT="/data/chroot"
ip addr add 192.168.137.2/30 dev usb0
ip link set usb0 up
ip route delete default
ip route add default via 192.168.137.1
setprop net.dns1 8.8.8.8
echo 'nameserver 8.8.8.8' >> $CHROOT/run/resolvconf/resolv.conf

On the host, execute the following:

adb shell setprop sys.usb.config rndis,adb
adb shell data/usb.sh
sudo ifconfig usb0 192.168.137.1
sudo iptables -A POSTROUTING -t nat -j MASQUERADE -s 192.168.137.0/24
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -P FORWARD ACCEPT

This works at least with an Ubuntu saucy chroot. The main difference in some other distro might be whether resolv.conf has moved to /run or not. You should now be all set up to browse / apt-get stuff from the device again.
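As a quick sanity check that both routing and name resolution work from the device's chroot (the hosts here are just examples; any reachable ones will do):

ping -c 3 8.8.8.8              # the default route via the laptop works
ping -c 3 archive.ubuntu.com   # DNS resolution works too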
Update: Clarified that this is to forward the desktop/laptop's network connection to the device, so that the network is accessible from the device over USB.

Update 2, 09/2013: It's also possible to get this working on the newer flipped images. Remove the "$CHROOT" from the nameserver echoing and it should be fine. In my brief testing the connection somehow got reset after a while, at which point another run of data/usb.sh on the device restored it. Read more

Plymouth is not a bootsplash
May 8, 2013 Under debian, ubuntu

Congrats to the Debian release team on the new release of Debian 7.0 (wheezy)! Leading up to the release, a meme making the rounds on Planet Debian has been to play a #newinwheezy game, calling out some of the many new packages in 7.0 that may be interesting to users. While upstart as a package is nothing new in wheezy, the jump to upstart 1.6.1 from 0.6.6 is quite a substantial change. It does bring with it a new package, mountall, which by itself isn't terribly interesting because it just provides an upstart-ish replacement for some core scripts from the initscripts package (essentially, /etc/rcS.d/*mount*). Where things get interesting (and, typically, controversial) is the way in which mountall leverages plymouth to achieve this.

What is plymouth?

There is a great deal of misunderstanding around plymouth, a fact I was reminded of again while working to get a modern version of upstart into wheezy. When Ubuntu first started requiring plymouth as an essential component of the boot infrastructure, there was a lot of outrage from users, particularly from Ubuntu Server users, who believed this was an attempt to force pretty splash screen graphics down their throats. Nothing could be further from the truth. Plymouth provides a splash screen, but that's not what plymouth is. What plymouth is, is a boot-time I/O multiplexer. And why, you ask, would upstart - or mountall, whose job is just to get the filesystem mounted at boot - need a boot-time I/O multiplexer?

Why use plymouth?

The simple answer is that, like everything else in a truly event-driven boot system, filesystem mounting is handled in parallel - with no defined order. If a filesystem is missing or fails an fsck, mountall may need to interact with the user to decide how to handle it. And if there's more than one missing or broken filesystem, and these are all being found in parallel, there needs to be a way to associate each answer from the user with the corresponding question from mountall, to avoid crossed signals... and lost data. One possible way to handle this would be for mountall to serialize the fscks / mounts. But this is a pretty unsatisfactory answer; all other things (that is, boot reliability) being equal, admins would prefer their systems to boot as fast as possible, so that they can get back to being useful to users. So we reject the idea of solving the problem of serializing prompts by making mountall serialize all its filesystem checks. Another option would be to have mountall prompt directly on the console, doing its own serialization of the prompts (even though successful mounts / fscks continue to run in parallel). This, too, is not desirable in the general case, both because some users actually would like to have pretty splash screens at boot time, which would be incompatible with direct console prompting, and because mountall is not the only piece of software that needs to prompt at boot time (see also: cryptsetup).

Plymouth: not just a pretty face

Enter plymouth, which provides the framework for serializing requests to the user while booting. It can provide a graphical boot splash, yes; ironically, even its own homepage suggests that this is its purpose. But it can also provide a text-only console interface, which is what you get automatically when booting without a splash boot argument, or even handle I/O over a serial console.
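The client interface makes the multiplexing concrete: a boot-time job never talks to the console directly; it asks plymouth to do the prompting, and plymouth queues the request and renders it on whatever display is active. A rough sketch of the pattern a job like cryptsetup follows (the disk name is illustrative, and the exact plymouth client options may vary by version):

# Ask for a passphrase via plymouth, which serializes this prompt with any
# other pending boot-time questions, on splash, text, or serial console.
if plymouth --ping; then
    PASS=$(plymouth ask-for-password --prompt="Unlock disk sda4_crypt:")
else
    # Fall back to prompting on the console if plymouth isn't running.
    read -r -s -p "Unlock disk sda4_crypt: " PASS
fi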
Which is why, contrary to the initial intuitions of the s390 porters upon seeing this package, plymouth is available for all of Debian's Linux architectures in wheezy, s390 and s390x included, providing a consistent architecture for boot-time I/O for systems that need it - which is any machine using a modern boot system, such as upstart or systemd.

Room for improvement

Now, having a coherent architecture for your boot I/O is one thing; having a bug-free splash screen is another. The experience of plymouth in Ubuntu has certainly not been bug-free, with plymouth making significant demands of the kernel video layer. Recently, the binary video driver packages in Ubuntu have started to blacklist the framebuffer kernel driver entirely due to stability concerns, making plymouth splash screens a non-starter for users of these drivers and regressing the boot experience. One solution for this would be to have plymouth offload the video handling complexity to something more reliable and better tested. Plymouth does already have an X backend, but we don't use that in Ubuntu because even when we do have an X server, it normally starts much later than when we would want to display the splash screen. With Mir on the horizon for Ubuntu, however, and its clean separation between system and session compositors, it's possible that using a Mir backend - one that can continue running even after the greeter has started, unlike the current situation where plymouth has to cede the console to the display manager when it starts - will become an appealing option. This, too, is not without its downsides. Needing to load plymouth when using encrypted root filesystems already makes for a bloated initramfs; adding a system compositor to the initramfs won't make it any better, and introduces further questions about how to hand off between the initramfs and the root fs. Keeping your system compositor running from the initramfs post-boot isn't really ideal, particularly for low-memory systems; whereas killing the system compositor and restarting it will make it harder to provide a flicker-free experience. But for all that, it does have its architectural appeal, as it lets us use plymouth as long as we need to after boot. As the concept of static runlevels becomes increasingly obsolete in the face of dynamic systems, we need to design for a world where the distinction between "booting" and "booted" doesn't mean what it once did. Read more