Canonical Voices

Posts tagged with 'debian'

pitti

I’m on my way home from Düsseldorf where I attended the LinuxCon Europe and Linux Plumbers conferences. I was quite surprised how huge LinuxCon was: about 1,500 people attended, certainly many more than last year in New Orleans.

Containers (in both LXC and docker flavors) are the Big Thing everybody talks about and works with these days; there was hardly a presentation where they weren’t mentioned, and what felt like half of the presentations were about either improving them or using them to solve problems. Some people and companies really take LXC to the max and try to do everything in containers, including tasks which in the past you would only have considered full VMs for, like hosting untrusted third-party tenants; there was an interesting talk on how to secure networking for such containers. Pretty much everyone now uses docker or LXC to deploy workloads and run CI tests. There are projects like “fleet”, which manages systemd jobs across an entire cluster of containers (a distributed task scheduler), or project-builder.org, which auto-builds packages from each commit of a project.

Another common topic is the trend towards building/shipping complete (read-only) system images, atomic updates, and all that goodness. The central thing here was certainly the “Stateless systems, factory reset, and golden images” talk, which analyzed the common requirements and proposed how to implement this with various package systems and scenarios. In my opinion this is certainly the way to go, as our current solution on Ubuntu Touch (i. e. Ubuntu’s system-image) is still far too limited and static and doesn’t extend to desktops/servers/cloud workloads at all. It’s also a lot of work to implement this properly, so it’s certainly understandable that we took that shortcut for prototyping and the relatively limited Touch phone environment.

At Plumbers my main occupations were the highly interesting LXC track, to see what’s coming in the container world, and the systemd hackfest. At the latter I was again mostly listening (after all, I’m still learning most of the internals there) and was able to work on some cleanups and improvements, like getting rid of some of Debian’s patches and properly running the test suite. It was also great to sync up again with David Zeuthen about the future of udisks and some particular proposed new features. Looks like I’m the de-facto maintainer now, so I’ll need to spend some time soon to review/include/clean up some much-requested little features and fixes.

All in all a great week to meet some fellows of the FOSS world again, get to know a lot of new interesting people and projects, and re-learn to drink beer in the evening (I hardly drink any at home :-P).

If you are interested you can also see my raw notes, but beware that they are mostly just scribbles.

Now, off to next week’s Canonical meeting in Washington, DC!

Read more
pitti

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runners are currently static, manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.
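The $OS_* variables are the standard OpenStack credentials that python-novaclient reads; for reference, they look something like this (all values here are made-up placeholders for your own cloud):

  export OS_AUTH_URL=https://keystone.example.com:5000/v2.0
  export OS_TENANT_NAME=myproject
  export OS_USERNAME=myuser
  export OS_PASSWORD=secret
  export OS_REGION_NAME=region1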

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored all the knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, and then turn this into a new adt-specific “daily” image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter it takes is the name of the pristine image to base on. This was tested on Canonical’s Bootstack cloud, so it might need some adjustments on other clouds.

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.
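Under the hood, such a script roughly performs the following steps (a simplified sketch with made-up instance names and no error handling; create-nova-adt-image.sh is the reference):

  # boot a throwaway instance from the pristine image
  nova boot --image "$PRISTINE" --flavor m1.small --key-name adt --poll adt-prep

  # prepare it as an autopkgtest testbed (looking up the instance IP is omitted here)
  scp /usr/share/autopkgtest/adt-setup-vm ubuntu@$INSTANCE_IP:
  ssh ubuntu@$INSTANCE_IP sudo sh adt-setup-vm

  # snapshot the prepared instance into the new daily image, then clean up
  nova image-create --poll adt-prep adt-utopic-amd64
  nova delete adt-prep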

Now I picked 34 packages that have the “most demanding” tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist
  apport
  apt
  aptdaemon
  apache2
  autopilot-gtk
  autopkgtest
  binutils
  chromium-browser
  cups
  dbus
  gem2deb
  glib-networking
  glib2.0
  gvfs
  kcalc
  keystone
  libnih
  libreoffice
  lintian
  lxc
  mysql-5.5
  network-manager
  nut
  ofono-phonesim
  php5
  postgresql-9.4
  python3.4
  sbuild
  shotwell
  systemd-shim
  ubiquity
  ubuntu-drivers-common
  udisks2
  upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an “always need this constant to select a non-default network” option that is specific to Canonical Bootstack.

Finally, let’s run the packages from above, using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left, and those are due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Read more
pitti

Last week’s autopkgtest 3.5 release (in Debian sid and Ubuntu Utopic) brings several new features which I’d like to announce.

Tests that reboot

For testing low-level packages like init or the kernel it is sometimes desirable to reboot the testbed in the middle of a test. For example, I added a new boot_and_services systemd autopkgtest which configures grub to boot with systemd as pid 1, reboots, and then checks that the most important services like lightdm, D-BUS, NetworkManager, and cron come up as expected. (This test will be expanded a lot in the future to cover other areas like the journal, logind, etc.)

In a testbed which supports rebooting (currently only QEMU) your test will now find an “autopkgtest-reboot” command which the test calls with an arbitrary “marker” string. autopkgtest will then reboot the testbed, save/restore any files it needs to (like the test’s file tree or previously created artifacts), and then re-run the test with ADT_REBOOT_MARK=mymarker.

The new “Reboot during a test” section in README.package-tests explains this in detail with an example.
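To give the general flavour, a rebooting test can be structured like this (just a sketch, not the README’s example; check_services stands for whatever verification your test performs):

  #!/bin/sh -e
  case "$ADT_REBOOT_MARK" in
      '')
          # first run: set up the configuration to be tested, then reboot
          autopkgtest-reboot mymarker
          ;;
      mymarker)
          # second run, after the reboot: verify that everything came up
          check_services
          ;;
  esac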

Implicit test metadata for similar packages

The Debian pkg-perl team recently discussed how to add package tests to the ~ 3,000 Perl packages. For most of these the test metadata looks pretty much the same, so they created a new pkg-perl-autopkgtest package which centralizes the logic. autopkgtest 3.5 now supports an implicit debian/tests/control file to avoid having to modify several thousand packages with exactly the same file.

An initial run already looked quite promising: 65% of the packages pass their tests. There will be a few iterations to identify common failures and fix those in pkg-perl-autopkgtest and autopkgtest itself now.

There is still some discussion about how implicit test control files go together with the DEP-8 specification, as other runners like sadt do not support them yet. Most probably we’ll declare those packages XS-Testsuite: autopkgtest-pkg-perl instead of the usual autopkgtest.
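In debian/control such a declaration would look like this (with a made-up package name):

  Source: libfoo-bar-perl
  XS-Testsuite: autopkgtest-pkg-perl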

In the same vein, Debian’s Ruby maintainer (Antonio Terceiro) added implicit test control support for Ruby packages. We haven’t done a mass test run with those yet, but their structure will probably look very similar.

Read more
pitti

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is simply awesome. It can convert between a million document formats, but most importantly it takes Markdown and spits out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look so much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTex, and the other goodies. That’s how it always should have been! ☺

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------

function s:MDSettings()
    inoremap <buffer> <Leader>n \note[item]{}<Esc>i
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

    " adjust syntax highlighting for LaTeX parts
    "   inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    "   environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    "   commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd BufRead,BufNewFile *.md setfiletype markdown
autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

Read more
pitti

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) are now gone. There is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt-tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).
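For illustration, running the tests from an unbuilt source tree against the binaries already in the archive would look something like this (the path is made up):

  $ adt-run -B ./libpng-source-tree --- schroot sid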

The --help output now is a lot easier to read, both due to above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid

$ adt-run libpng @adt_sid

Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]

This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests, this release now also brings this for deb packages.

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.
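A hypothetical invocation (file names made up) would look like:

  $ adt-run com.example.app_1.0_all.click --override-control manifest-modified.json --- \
        ssh -s /usr/share/autopkgtest/ssh-setup/adb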

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change to the control formats, the CLI, etc. has been documented in the manpages and the various READMEs. Enjoy!

Read more
pitti

We currently use completely different methods and tools for building test beds and running tests for Debian vs. Click packages, and for normal uploads vs. CI Airline landings vs. upstream project merge proposal testing, and we keep lots of knowledge about Click package test metadata external, where it is not easily accessible/discoverable.

Today I released autopkgtest 3.0 (and 3.0.1 with a few minor updates) which is a major milestone in unifying how we run package tests both locally and in production CI. The goals of this are:

  • Keep all test metadata, such as test dependencies, commands to run the test etc., in the project/package source itself instead of externally. We have had that for a long time for Debian packages with DEP-8 and debian/tests/control, but not yet for Ubuntu’s Click packages.
  • Use the same tools for Debian and Click packages, to simplify what developers have to know and to reduce the amount of test infrastructure code to maintain.
  • Use the exact same testbeds and test runners in production CI as developers use locally, so that you can reproduce and investigate failures.
  • Re-use the existing autopkgtest capabilities for using various kinds of testbeds, and conversely make all new testbed types immediately available to all package formats.
  • Stop putting tests into the Ubuntu archive as packages (such as mediaplayer-app-autopilot). This just adds packaging and archive space overhead, and also makes updating tests a lot harder and slower than it should be.

So, let’s dive into the new features!

New runner: adt-virt-ssh

We want to run tests on real hardware such as a laptop of a particular brand with a particular graphics card, or an Ubuntu phone. We also want to restructure our current CI machinery to run tests on a real OpenStack cloud and gradually get rid of our hand-maintained QA lab with its test machines. While these use cases seem rather different, they both have in common that there is an already existing machine which is pretty much only accessible with ssh. Once you have an ssh connection, they look pretty much the same, you just need different initial setup (like fiddling with adb, calling nova boot, etc.) to prepare them.

So the new adt-virt-ssh runner factors out all the common bits, such as communicating with adt-run, auto-detecting sudo availability, and doing SSH connection sharing, and delegates the target-specific bits to a “setup script”. E. g. we could specify --setup-script ssh-setup-nova or --setup-script ssh-setup-adb, which adt-run then calls with “open” at the appropriate time; the script runs the nova commands to create a VM, or a few adb commands to install/start ssh and install the public key. Then autopkgtest does its thing, and eventually calls the script with “cleanup” again. The actual protocol is a bit more involved (see the manpage), but that’s the general idea.
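Schematically, a setup script looks like this (a heavily simplified sketch; the real interface has more commands and arguments, see the SKELETON template mentioned below):

  #!/bin/sh
  case "$1" in
      open)
          # create the testbed (e. g. "nova boot ..." or adb commands) and
          # print the user/host name for adt-run to connect to via ssh
          ;;
      cleanup)
          # tear the testbed down again (e. g. "nova delete ...")
          ;;
  esac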

autopkgtest now ships ready-made scripts for these two use cases. So you could e. g. run the libpng tests in a temporary cloud VM:

# if you don't have one, create it with "nova keypair-create"
$ nova keypair-list
[...]
| pitti | 9f:31:cf:78:50:4f:42:04:7a:87:d7:2a:75:5e:46:56 |

# find a suitable image
$ nova image-list 
[...]
| ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 | Ubuntu Server 14.04 LTS (amd64 20140607.1) - Partner Image                         | ACTIVE |  

$ nova flavor-list 
[...]
| 100 | standard.xsmall  | 1024      | 10   | 10        |      | 1     | 1.0         | N/A       |

# now run the tests: please be patient, this takes a few mins!
$ adt-run libpng --setup-commands="apt-get update" --- ssh -s /usr/share/autopkgtest/ssh-setup/nova -- \
   -f standard.xsmall -i ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 -k pitti
[...]
adt-run [16:23:16]: test build:  - - - - - - - - - - results - - - - - - - - - -
build                PASS
adt-run: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ tests done.

Please see man adt-virt-ssh for details on how to use it and how to write setup scripts. There is also a commented /usr/share/autopkgtest/ssh-setup/SKELETON template for writing your own scripts for your use cases. You can also use no setup script at all and just specify user and host name as options, but please remember that the ssh runner cannot clean up after itself, so never use this on important machines which you can’t reset/reinstall!

Test dependency installation without apt/root

Ubuntu phones with system images have a read-only file system where you can’t install test dependencies with apt. A similar case is using the “null” runner without root. When apt-get install is not available, autopkgtest now has a reduced fallback mode: it downloads the required test dependencies, unpacks them into a temporary directory, and runs the tests with $PATH, $PYTHONPATH, $GI_TYPELIB_PATH, etc. pointing to the unpacked temp dir. Of course this only works for packages which are relocatable in that way, i. e. libraries, Python modules, or command line tools; it will totally fail for things which look for config files, plugins etc. in hardcoded directory paths. But it’s good enough for the purposes of Click package testing such as installing autopilot, libautopilot-qt etc.

Click package support

autopkgtest now recognizes click source directories and *.click package arguments, and introduces a new test metadata specification syntax in a click package manifest. This is similar in spirit and capabilities to DEP-8 debian/tests/control, except that it’s using JSON:

    "x-test": {
        "unit": "tests/unittests",
        "smoke": {
            "path": "tests/smoketest",
            "depends": ["shunit2", "moreutils"],
            "restrictions": ["allow-stderr"]
        },
        "another": {
            "command": "echo hello > /tmp/world.txt"
        }
    }

For convenience, there is also some magic to make running autopilot tests particularly simple. E. g. our existing click packages usually specify something like

    "x-test": {
        "autopilot": "ubuntu_calculator_app"
    }

which is enough to “do what I mean”, i. e. implicitly add the autopilot test depends and run autopilot with the specified test module name. You can specify your own dependencies and/or commands, and restrictions etc., of course.

So with this, and the previous support for non-apt test dependencies and the ssh runner, we can put all this together to run the tests for e. g. the Ubuntu calculator app on the phone:

$ bzr branch lp:ubuntu-calculator-app
# built straight from that branch; TODO: where is the official download URL?
$ wget http://people.canonical.com/~pitti/tmp/com.ubuntu.calculator_1.3.283_all.click
$ adt-run ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- \
      ssh -s /usr/share/autopkgtest/ssh-setup/adb
[..]
Traceback (most recent call last):
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 93, in test_divide_with_infinity_length_result_number
    self._assert_result("0.33333333")
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 63, in _assert_result
    self.main_view.get_result, Eventually(Equals(expected_result)))
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 406, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: After 10.0 seconds test failed: '0.33333333' != '0.3'

Ran 33 tests in 295.586s
FAILED (failures=1)

Note that the current adb ssh setup script deals with some things like applying the autopilot click AppArmor hooks and disabling screen dimming, but it does not do the first-time setup (connecting to network, doing the gesture intro) and unlocking the screen. These are still on the TODO list, but I need to find out how to do these properly. Help appreciated!

Click app tests in schroot/containers

But that’s not the only thing you can do! autopkgtest has all these other runners, so why not try running click tests in a schroot or container? To emulate the environment of an Ubuntu Touch session I wrote a --setup-commands script:

adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
    ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- schroot utopic

This will actually work in the sense of running (and succeeding) the autopilot tests, but it will fail due to a lot of libust[11345/11358]: Error: Error opening shm /lttng-ust-wait... warnings on stderr. I don’t know what these mean, just that I also see them on the phone itself occasionally.

I also wrote another setup-commands script which emulates “read-only apt”, so that you can test the “unpack only” fallback. So you could prepare a container with click and the App framework preinstalled (so that it doesn’t always take ages to install them), starting from a standard adt-build-lxc container:

$ sudo lxc-clone -o adt-utopic -n click
$ sudo lxc-start -n click
  # run "sudo apt-get install click ubuntu-sdk-libs ubuntu-app-launch-tools" there
  # then "sudo powerdown"

# current apparmor profile doesn't allow remounting something read-only
$ echo "lxc.aa_profile = unconfined" | sudo tee -a /var/lib/lxc/click/config

Now that container has enough stuff preinstalled to be reasonably fast to set up, and the remaining test dependencies (mostly autopilot) work fine with the unpack/$*_PATH fallback:

$ adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
          --setup-commands /usr/share/autopkgtest/setup-commands/ro-apt \
          ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click \
          --- lxc -es click

This will successfully run all the tests, and provided you have apt-cacher-ng installed, it only takes a few seconds to set up. This might be a nice thing to do on merge proposals, if you don’t have an actual phone at hand, or don’t want to clutter it up.

autopkgtest 3.0.1 will be available in Utopic tomorrow (through autosyncs). If you can’t wait to try it out, download it from my people.c.c page ☺.

Feedback appreciated!

Read more
pitti

Hot on the heels of my previous announcement of my systemd PPA for trusty, I’m now happy to announce that the latest systemd 204-10ubuntu1 just landed in Utopic, after sorting out enough of the current uninstallability in -proposed. The other fixes (bluez, resolvconf, lightdm, etc.) already landed a few days ago. Compared to the PPA these have a lot of other fixes and cleanups, thanks to the excellent hackfest that we held last weekend.

So, upgrade today and let us know about problems in bugs tagged “systemd-boot”.

I think systemd in current utopic works well enough to not break a developer’s day-to-day workflow, so we can now start parallelizing the work of identifying packages which only have upstart jobs and providing corresponding systemd units (or SysV scripts). Also, this hasn’t been tested on the phone at all yet; I’m sure that it’ll require quite some work (e. g. lxc-android-config has a lot of upstart jobs). To clarify, there is no fixed date/plan/deadline for when this will be done; in particular, it might well take more than one release cycle. So we’ll “release” (i. e. switch to it as the default) when it’s ready :-)
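For reference, this is the shape of a minimal unit for a simple daemon (a sketch with a made-up name, not any particular package’s unit):

  [Unit]
  Description=Example daemon

  [Service]
  ExecStart=/usr/sbin/exampled

  [Install]
  WantedBy=multi-user.target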

Read more
pitti

At the last UDS we talked about migrating from upstart to systemd for booting Ubuntu, after Mark announced that Ubuntu will follow Debian in that regard. There’s a lot of work to do, but it parallelizes well once developers can easily run systemd on their workstations or in VMs and the system boots up enough to still be able to work with it.

So today I merged our systemd package with Debian again, dropped the systemd-services split (which wasn’t accepted by Debian and will be unnecessary now), and put it into my systemd PPA. Quite surprisingly, this booted a fresh 14.04 VM pretty much right away (of course there’s no Plymouth prettiness). The main two things which were missing were NetworkManager and lightdm, as these don’t have an init.d script at all (NM) or it isn’t enabled (lightdm). Thus the PPA also contains updated packages for these two which provide a proper systemd unit. With that, the desktop is pretty much fully working, except for some details like cron not running. I haven’t gone through /etc/init/*.conf with a fine-toothed comb yet to check which upstart jobs need to be ported; that’s now part of the TODO list.

So, if you want to help with that, or just test and tell us what’s wrong, take the plunge. In a 14.04 VM (or real machine if you feel adventurous), do

  sudo add-apt-repository ppa:pitti/systemd
  sudo apt-get update
  sudo apt-get dist-upgrade

This will replace systemd-services with systemd, update network-manager and lightdm, and a few libraries. Up to now, when you reboot you’ll still get good old upstart. To actually boot with systemd, press Shift during boot to get the grub menu, edit the Ubuntu stanza, and append this to the linux line: init=/lib/systemd/systemd.

For the record, if pressing shift doesn’t work for you (too fast, VM, or similar), enable the grub menu with

  sudo sed -i '/GRUB_HIDDEN_TIMEOUT/ s/^/#/' /etc/default/grub
  sudo update-grub

Once you are satisfied that your system boots well enough, you can make this permanent by adding the init= option to /etc/default/grub (and possibly removing the comment sign from the GRUB_HIDDEN_TIMEOUT lines) and running sudo update-grub again. To go back to upstart, just edit the file again, remove the init= option, and run sudo update-grub once more.
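For example, the relevant line in /etc/default/grub would then look like this (assuming the stock Ubuntu defaults):

  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash init=/lib/systemd/systemd"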

I’ll be on the Debian systemd/GNOME sprint next weekend, so I feel reasonably well prepared now. :-)

Update: As the comments pointed out, this broke /etc/resolv.conf. I have now uploaded a resolvconf package to the PPA which provides the missing unit (the counterpart to the /etc/init/resolvconf.conf upstart job), and this now works fine. If you are in that situation, please boot with upstart and do the following to clean up:

  sudo rm /etc/resolv.conf
  sudo ln -s ../run/resolvconf/resolv.conf /etc/resolv.conf

Then you can boot back to systemd.

Update 2: If you want to help testing, please file bugs with a systemd-boot tag. See the list of known bugs when booting with systemd.

Read more

A couple of weeks ago, Gunnar Wolf mentioned on IRC that his CuBox-i4 had arrived. This resulted in various jealous noises from me; having heard about this device making the rounds at the Kernel Summit, I ordered one for myself back in December, as part of the long-delayed HDification of our home entertainment system and coinciding with the purchase of a new Samsung SmartTV. We've been running an Intel Coppermine Celeron for a decade as a MythTV frontend and encoder (hardware-assisted with a PVR-250), which is fine for SD video, but really doesn't cut it for anything HD. So after finally getting a TV that would showcase HD in all its glory, I figured it was time to upgrade from an S-Video-out, barely-limping-along tower machine to something more modern with HDMI out, eSATA, hardware video decoding, and whose biggest problem is it's so small that it threatens to get lost in the wiring!

Since placing the order, I've been bemused to find that the SmartTV is so smart that it has had a dramatic impact on how we consume media; between that and our decision to not be a boiled frog in the face of DISH Network's annual price increase, the MythTV frontend has become a much less important part of our entertainment center, well before I ever got a chance to lay hands on the intended replacement hardware. But that's a topic for another day.

Anyway, the CuBox-i4 finally arrived in the mail on Friday, so of course I immediately needed to start hacking on it! Like Gunnar, who wrote last week about his own experience getting a "proper" Debian install on the box, I'm not content with running a binary distribution image prepared by some third party; I expect my hardware toys to run official distro packages assembled using official distro tools and, if at all possible, distributed on official distro images for a minimum of hassle.

Whereas Gunnar was willing to settle for using third-party binaries for the bootloader and kernel, however, I'm not inclined to do any such thing. And between my stint at Linaro a few years ago and the recent work on Ubuntu for phones, I do have a little knowledge of Linux on ARM (probably just enough to be dangerous), so I set to work trying to get the CuBox-i4 bootable with stock Debian unstable.

Being such a cutting-edge piece of hardware, that does pose some challenges. Support for the i.MX6 chip is in the process of being upstreamed to U-Boot, but the support for the CuBox-i devices isn't there yet, nor is the support for SPL on i.MX6 (which allows booting the variants of the CuBox-i with a single U-Boot build, instead of requiring a different bootloader build for each flavor). The CuBox-i U-Boot that SolidRun makes available (with source at github) is based on U-Boot 2013.10-rc4, so more than a full release behind Debian unstable, and the patches there don't apply to U-Boot 2014.01 without a bit of effort.

But if it's worth doing, it's worth doing right, so I've taken the time to rebase the CuBox-i patches on top of 2014.01, publishing the results of the rebase to my own github repository and submitting a bug to the Debian U-Boot maintainers requesting its inclusion.

The next step is to get a Debian kernel that not only works, but fully supports the hardware out of the box (a 3.13 generic arm kernel will boot on the machine, but little things like ethernet and hdmi don't work yet). I've created a page in the Debian wiki for tracking the status of this work.

Read more
Michael Hall

Today I reached another milestone in my open source journey: I got my first package uploaded into Debian’s archives.  I’ve managed to get packages uploaded into Ubuntu before, and I’ve attempted to get one into Debian, but this is the first time I’ve actually gotten a contribution in that would benefit Debian users.

I couldn’t have done it without the help and mentorship of Paul Tagliamonte, but I was also helped by a number of others in the Debian community, so a big thank you to everybody who answered my questions and walked me through getting set up with things like Alioth and re-learning how to use SVN.

One last bit of fun: I was invited to join the Linux Unplugged podcast today to talk about yesterday’s post. You can listen to it (and watch IRC comments scroll by) here: http://www.jupiterbroadcasting.com/51842/neckbeard-entitlement-factor-lup-28/

Read more
Michael Hall

Quick overview post today, because it’s late and I don’t have anything particular to talk about today.

First of all, the next vUDS was announced today. We’re a bit late in starting it off, but we wanted to have another one early enough to still be useful for the Trusty release cycle. Read the linked mailing list post for details about where to find the schedule and how to propose sessions.

I pushed another update to the API website today that does a better job balancing the 2-column view of namespaces and fixes the sub-nav text to match the WordPress side of things. This was the first deployment in a while to go off without a problem, thanks to having a new staging environment created last time. I’m hoping my deployment problems on this are now far behind me.

I took a task during my weekly Core Apps update call to look more into the Terminal app’s problem with enter and backspace keys, so I may be pinging some of you in the coming week about it to get some help.  You have been warned.

Finally, I decided a few weeks ago to spread out my after-hours community activity beyond Ubuntu, and I’ve settled on the Debian new maintainers Django website as somewhere I can easily start. I’ve got a git repo where I’m writing the first unit tests for that website, and as part of that I’m also working on Debian packaging for the Python model-mommy library, which we use extensively in Ubuntu’s Django website. I’m having to learn (or learn more about) Debian packaging, Git workflows, and Debian’s processes and community, all of which are going to be good for me, and I’m looking forward to the challenge.

Read more
Timo Jyrinki

Background

I recently upgraded from Linux 3.8 to 3.11, along with newer Mesa, X.Org, and Intel driver versions, and found that a small workaround was needed because of upstream changes.

The upstream change in question was Add "Automatic" mode for "Broadcast RGB" property, which also makes Automatic the default. This is a sensible default, since many (most?) TVs default to the more limited 16-235 range, and continuing to default to Full from the driver side would mean wrong colors on the TV. I've set my screen to the full 0-255 range available, so as not to cut down the number of available shades of colors.

Unfortunately it seems the Automatic setting does not work for my HDMI input, i.e. blacks become grey since the driver still outputs the more limited range. Maybe there could be something to improve on the driver side, but I'd guess it's more about my 2008 Sony TV actually having a mode that the standard suggests limited range for. I remember the TV did default to limited range, so maybe the EDID data from the TV does not change when setting the RGB range to Full.

I hope the Automatic setting works to offer full range on newer screens and the modes they have, but that's probably up to the manufacturers and standards.

To illustrate the correct setting on my Haswell machine, I took two photos of the same screen content with identical manual camera exposure: with Broadcast RGB left at its default Automatic setting the blacks come out grey, while with Full the blacks are visibly deeper.


Workaround

For me the workaround has evolved to the following so far. Create a file /etc/X11/Xsession.d/95fullrgb with:

if [ "$(/usr/bin/xrandr -q --prop | grep 'Broadcast RGB: Full' | wc -l)" = "0" ] ; then
    /usr/bin/xrandr --output HDMI3 --set "Broadcast RGB" "Full"
fi
And since I'm using lightdm, adding the following to /etc/lightdm/lightdm.conf means the flicker only happens once during bootup:

display-setup-script=/etc/X11/Xsession.d/95fullrgb

Important: when using the LightDM setting, set the executable bit (chmod +x) on /etc/X11/Xsession.d/95fullrgb for it to work. Obviously, also check your output name; for me it was HDMI3.

If there is no situation where the setting would revert to "Limited 16:235" on its own, the display manager script should be enough, and having it in /etc/X11/Xsession.d as well is redundant and slows login down. I think for me login went from maybe 2 seconds to 3 seconds, since executing an xrandr query is not cheap.

Misc

Note that, unrelated to Full range usage, the Limited range currently behaves incorrectly on Haswell until the patch in bug #71769 is accepted. That means the blacks are grey in Limited mode even if the screen is also set to Limited.

I'd prefer a kernel parameter for the Broadcast RGB setting, although my Haswell machine boots so fast that I don't get to see too many seconds of wrong colors...

Read more
pitti

The current default D-BUS configuration (at least on Ubuntu) disallows monitoring method calls on the system D-BUS (dbus-monitor --system), which makes debugging rather cumbersome; this worked years ago, but apparently got changed for security reasons. It took me half an hour to figure out how to enable this for debugging, and as this has annoyingly little Google juice (I didn’t find any solution), let’s add some.

The trick seems to be to set a global policy that allows eavesdropping on any method call after the individual /etc/dbus-1/system.d/*.conf files have applied their restrictions, for which there is already a convenient facility. Create a file /etc/dbus-1/system-local.conf with

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE busconfig PUBLIC
  "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
  "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">

<busconfig>
  <policy user="root">
    <!-- Allow everything to be sent -->
    <allow send_destination="*" eavesdrop="true"/>
    <!-- Allow everything to be received -->
    <allow eavesdrop="true"/>
    <allow send_type="method_call"/>
  </policy>
</busconfig>

Then sudo dbus-monitor --system displays everything. Needless to say, you don’t want this file on any production system!

Does anyone know an easier way? My first naive stab was to run dbus-monitor as root, but that doesn’t make any difference at all.

Update: Turns out this is already described in a better way at https://wiki.ubuntu.com/DebuggingDBus. Yay me for not finding that. I updated the above recipe to limit access to root, which is much better indeed.

Read more
pitti

I did a 0.2.2 maintenance release for umockdev to fix building with Vala 0.16.1, gcc 4.8 (the changed sizeof behaviour caused segfaults), and current udev releases (umockdev-record stumbled over the new “link priority” fields of udevadm). There are also a couple of bug fixes, but no new features.

Read more
Timo Jyrinki

If you're running an Android device with GNU userland Linux in a chroot and need a full network access over USB cable (so that you can use your laptop/desktop machine's network connection from the device), here's a quick primer on how it can be set up.

Back when doing Openmoko hacking, one always plugged in the USB cable first and forwarded the network over it, or, as I did later, forwarded the network over Bluetooth. That was mostly because WiFi was quite unstable with many of the kernels.

I recently found myself using a chroot on a Nexus 4 without working WiFi, so instead of my usual WiFi usage I needed network over USB... trivial, of course, except that there's Android in the way and I'm an Android newbie. Thanks to ZDmitry on Freenode, I got the bits for the Android part and got it working.

On the device, create e.g. data/usb.sh with the following contents:

#!/system/xbin/sh
CHROOT="/data/chroot"

ip addr add 192.168.137.2/30 dev usb0
ip link set usb0 up
ip route delete default
ip route add default via 192.168.137.1;
setprop net.dns1 8.8.8.8
echo 'nameserver 8.8.8.8' >> $CHROOT/run/resolvconf/resolv.conf

On the host, execute the following:

adb shell setprop sys.usb.config rndis,adb
adb shell data/usb.sh
sudo ifconfig usb0 192.168.137.1
sudo iptables -A POSTROUTING -t nat -j MASQUERADE -s 192.168.137.0/24
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -P FORWARD ACCEPT

This works at least with an Ubuntu saucy chroot. The main difference in some other distro might be whether resolv.conf has moved to /run or not. You should now be all set up to browse / apt-get stuff from the device again.

Update: Clarified that this is about forwarding the desktop/laptop's network connection to the device, so that the network is accessible from the device over USB.
Update 2, 09/2013: It's also possible to get this working on the newer flipped images. Remove the "$CHROOT" from the nameserver echo line and it should be fine. In my brief testing the connection somehow got reset after a while, at which point another run of data/usb.sh on the device restored it.

Read more

Congrats to the Debian release team on the new release of Debian 7.0 (wheezy)!

Leading up to the release, a meme making the rounds on Planet Debian has been to play a #newinwheezy game, calling out some of the many new packages in 7.0 that may be interesting to users. While upstart as a package is nothing new in wheezy, the jump to upstart 1.6.1 from 0.6.6 is quite a substantial change. It does bring with it a new package, mountall, which by itself isn't terribly interesting because it just provides an upstart-ish replacement for some core scripts from the initscripts package (essentially, /etc/rcS.d/*mount*). Where things get interesting (and, typically, controversial) is the way in which mountall leverages plymouth to achieve this.

What is plymouth?

There is a great deal of misunderstanding around plymouth, a fact I was reminded of again while working to get a modern version of upstart into wheezy. When Ubuntu first started requiring plymouth as an essential component of the boot infrastructure, there was a lot of outrage from users, particularly from Ubuntu Server users, who believed this was an attempt to force pretty splash screen graphics down their throats. Nothing could be further from the truth.

Plymouth provides a splash screen, but that's not what plymouth is. What plymouth is, is a boot-time I/O multiplexer. And why, you ask, would upstart - or mountall, whose job is just to get the filesystem mounted at boot - need a boot-time I/O multiplexer?

Why use plymouth?

The simple answer is that, like everything else in a truly event-driven boot system, filesystem mounting is handled in parallel - with no defined order. If a filesystem is missing or fails an fsck, mountall may need to interact with the user to decide how to handle it. And if there's more than one missing or broken filesystem, and these are all being found in parallel, there needs to be a way to associate each answer from the user to the corresponding question from mountall, to avoid crossed signals... and lost data.

One possible way to handle this would be for mountall to serialize the fsck's / mounts. But this is a pretty unsatisfactory answer; all other things (that is, boot reliability) being equal, admins would prefer their systems to boot as fast as possible, so that they can get back to being useful to users. So we reject the idea of solving the problem of serializing prompts by making mountall serialize all its filesystem checks.

Another option would be to have mountall prompt directly on the console, doing its own serialization of the prompts (even though successful mounts / fscks continue to run in parallel). This, too, is not desirable in the general case, both because some users actually would like pretty splash screens at boot time, which is incompatible with direct console prompting; and because mountall is not the only piece of software that needs to prompt at boot time (see also: cryptsetup).

Plymouth: not just a pretty face

Enter plymouth, which provides the framework for serializing requests to the user while booting. It can provide a graphical boot splash, yes; ironically, even its own homepage suggests that this is its purpose. But it can also provide a text-only console interface, which is what you get automatically when booting without a splash boot argument, or even handle I/O over a serial console.

Which is why, contrary to the initial intuitions of the s390 porters upon seeing this package, plymouth is available for all of Debian's Linux architectures in wheezy, s390 and s390x included, providing a consistent architecture for boot-time I/O for systems that need it - which is any machine using a modern boot system, such as upstart or systemd.

Room for improvement

Now, having a coherent architecture for your boot I/O is one thing; having a bug-free splash screen is another. The experience of plymouth in Ubuntu has certainly not been bug-free, with plymouth making significant demands of the kernel video layer. Recently, the binary video driver packages in Ubuntu have started to blacklist the framebuffer kernel driver entirely due to stability concerns, making plymouth splash screens a non-starter for users of these drivers and regressing the boot experience.

One solution for this would be to have plymouth offload the video handling complexity to something more reliable and better tested. Plymouth does already have an X backend, but we don't use that in Ubuntu because even if we do have an X server, it normally starts much later than when we would want to display the splash screen. With Mir on the horizon for Ubuntu, however, and its clean separation between system and session compositors, it's possible that using a Mir backend - that can continue running even after the greeter has started, unlike the current situation where plymouth has to cede the console to the display manager when it starts - will become an appealing option.

This, too, is not without its downsides. Needing to load plymouth when using crypted root filesystems already makes for a bloated initramfs; adding a system compositor to the initramfs won't make it any better, and introduces further questions about how to hand off between initramfs and root fs. Keeping your system compositor running from the initramfs post-boot isn't really ideal, particularly for low-memory systems; whereas killing the system compositor and restarting it will make it harder to provide a flicker-free experience. But for all that, it does have its architectural appeal, as it lets us use plymouth as long as we need to after boot. As the concept of static runlevels becomes increasingly obsolete in the face of dynamic systems, we need to design for the world where the distinction between "booting" and "booted" doesn't mean what it once did.

Read more
Timo Jyrinki

Packages

I quite like the current status of Qt 5 in Debian and Ubuntu (the links are to the qtbase packages; there are ca. 15 other modules as well). Despite Qt 5 being bleeding edge and Ubuntu having had the need to use it before even the first stable release came out in December, the co-operation with Debian has gone well. Debian is now doing the first Qt 5 uploads to experimental and later on to unstable. The work I contributed to pkg-kde git on the modules has been welcomed, and even though more work has been done there by others, there haven't been drastic changes that would cause too big transition problems on the Ubuntu side. It has of course helped to ask others what they want, like with the whole usage of qtchooser. Now with Qt 5.0.2 I've been able to mostly re-sync all newer changes / fixes to my packaging from Debian to Ubuntu and vice versa.

There will remain some delta, as pkg-kde plans to ask for a complete transition to qtchooser, so that all Qt-using packages would declare the Qt version either via the QT_SELECT environment variable (preferable) or a package dependency (qt5-default or qt4-default). As a temporary measure related to that, Debian will have a debhelper modification that defaults QT_SELECT to qt4 for the duration of the transition. Meanwhile, Ubuntu already shipped the 13.04 release with Qt 5, and a shortcut was taken there instead to prevent any Qt 4 package breakage. However, after the transition period in Debian is over, that small delta can again be removed.
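For example, with qtchooser installed, the selection works like this:

  QT_SELECT=qt5 qmake --version   # use the Qt 5 tools for this command
  export QT_SELECT=qt4            # or select Qt 4 for the whole session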

I will also need to continue pushing any useful packaging I do to Debian. I pushed qtimageformats and qtdoc last week, but I know I'm still behind with some "possibly interesting" git snapshot modules like qtsensors and qtpim.

Patches

More delta exists in the form of multiple patches related to the recent Ubuntu Touch efforts. I do not think they are of immediate interest to Debian – let's start packaging Qt 5 apps for Debian first. However, almost all of those patches have already been upstreamed to be part of Qt 5.1 or Qt 5.2, or will be later on. Some already were, for 5.0.2.

A couple of months ago Ubuntu had some patches hanging around with no clear author information. This was a result of the heated preparation for the Ubuntu Touch launches, and the fact that patches flew (too) quickly into place in various PPAs. I started hunting down the authors, and the situation turned out to be better than I thought: about half of the patches were already upstreamed, and work on properly upstreaming the other ones was swiftly started after my initial contact. Proper DEP-3 fields do help in understanding the overall situation. There are now 10 Canonical individuals in the upstream group of contributors, and at last week's sprint it turned out more people will be joining them to upstream their future patches.

Nowadays almost all the requests I get for including patches from developers are for things that were already upstreamed, like the XEmbed support in qtbase. This is how it should be.

One big patch that is still Ubuntu-only is the Unity appmenu support. There was a temporary solution for 13.04 that forward-ported the Qt 4 way of doing it. This will however be removed from the first 13.10 ('saucy') upload, as it's not upstreamable (the old way of supporting Unity appmenus was deliberately dropped from Qt 5). A re-implementation via the QPA plugin support is on its way, but it may be that development version users will be without appmenu support for a while. Another big patch is related to qtwebkit's device pixel ratio, which will need to be fixed. Apart from these two areas of work that need to be followed through, the patch situation is quite nice, as mentioned.

Conclusion

Free software is going to achieve world domination, and I'm happy to be part of it.

Read more
Timo Jyrinki

Whee!!zy

Congrats and thanks to everyone,

Debian 7.0 Wheezy released

Updating my trusty orion5x box as we speak. No better way to spend a (jetlagged) Sunday.

Read more
pitti

Paul Wise poked me this morning about uploading fatrace (“file access trace”, see the original announcement for details) to Debian, thanks for the reminder!

So I filed an Intent To Package, and will upload it in a few days, unless some discussion evolves.

I also took the opportunity to do some modernization: the power-usage-report script now uses the current PowerTop 2.x instead of the old 1.13, uses Python 3 now, and includes the “process device activity” in the report. I released this as 0.5. The actual fatrace binary didn’t change its behaviour; it just got some code optimizations, thanks to Yann Droneaud.

Read more
pitti

PostgreSQL just released security updates. 9.1 (as found in Debian testing and unstable and Ubuntu 11.10 and later) is affected by a critical remote vulnerability which potentially allows anyone who can access the TCP port (without credentials) to corrupt local files. If your PostgreSQL database exposes the TCP port to any potentially untrusted location, please shut down your servers and update now!
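On Debian/Ubuntu, applying the update boils down to something like this (adjust the package name to the major version you run):

  $ sudo apt-get update
  $ sudo apt-get install postgresql-9.1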

PostgreSQL 8.4 for Debian stable (squeeze) and Ubuntu 8.04 LTS and 10.04 LTS also got an update, but these are much less urgent.

Debian and Ubuntu advisories for all stable releases, as well as Debian testing are going out as we speak. The updates are already on security.debian.org and security.ubuntu.com.

I also uploaded updates for Debian unstable (8.4, 9.1, and 9.2 in experimental) and the Ubuntu backports PPA, but it will take a bit for these to build as we don’t have embargoed staging builds for those. Christoph updated the apt.postgresql.org repository as well.

Warning: If you use the current Ubuntu raring Beta-2 candidate images, you will still have the old version. So if you do anything serious with those installations, please make sure to upgrade immediately.

Update: Debian and Ubuntu security announcements have been sent out, and all packages in the backports PPA are built.

Please see the official FAQ if you want to know some more details about the nature of the vulnerabilities.

Read more