Canonical Voices

Posts tagged with 'debian'

Timo Jyrinki

I have a few thoughts based on yet another round of the debate where marketing clashes with free software advocacy and technical details. Nothing new in the debate itself, but I'm adding a couple of insights.

Marketing is not highly respected by many technical people, nor by those who want more advocacy than the blurring of facts and feelings that marketing does. I'm all for advocating free software, but it's currently not something you can use in marketing to win big markets. If we advance to a world where free software is as desired as green values are today, it can be used in marketing as well, much like all the ecological (according to the marketing department, at least) products today; alas, the benefits of free software are not yet as universally known. Since it doesn't say much that touches the masses, advocacy has a negative marketing effect: it takes space away from the potentially "hitting" marketing moves, in those cases where you target the big masses in the first place. "Open" this and that carries some marketing power nowadays, but it's a mess of different meanings that probably doesn't advance libre software freedoms as such. Wikipedia has probably been the biggest contributor to advancing general knowledge of software and culture libre. Disclaimer: I'm not a proper marketing person; someone more professional might have better insights in this area. If I were a proper marketing person, I'd decorate this blog post with fancy pictures so that more people would actually read it.

Some of the marketing can be done without sacrificing any of the advocacy. The fabulous Fedora campaigns, graphics, slogans and materials are a great example and do an important job, even though Fedora isn't reaching the big masses with them (and Fedora isn't targeted at OEMs to ship to millions either). But they do hopefully reach a lot of people at the grassroots level. As a Debian Developer, I hope that more people valuing the freedoms of free software would also help Debian as a project to reach more of its advocacy potential and attract developers from the more proprietary world. But I'm happy that at least some free software projects nowadays have true graphical and marketing talent. Even though not as widely known, the freedoms of software users, including for example privacy aspects, are a potentially good marketing tool toward a portion of the developer pool.

Let's not forget that Android conquered the mobile market without using the brand power of Linux. The 30+ million people who know Linux a bit more deeply than just "I've heard of it" already run a Linux distribution like Ubuntu, but we need hotter brands than the project name of a kernel to reach the bigger masses. Call it Ubuntu, Fedora, or something else, but whatever the name, do what it takes to ship it to millions. AFAIK mostly Ubuntu and SUSE are currently shipping via desktop/notebook OEMs, and more Ubuntu than SUSE. Others aren't concentrating on that market, which is a very difficult one. Ubuntu is doing a lot more mass-market advocacy for free software than Android has ever done. Note for example the first time any user tries to play a video in a non-free format: Ubuntu tells the user about the problems related to those formats and asks permission to install a free software player for them (or to buy licensed codecs), and the Ubuntu Help texts describe a lot of details about restricted formats, DRM et cetera. Not to mention what happens if the person actually wanders into the community, discovering Debian, other free software projects, free software licenses and so on. Meanwhile, Android users never notice anything being wrong while watching H.264 videos or DRM Flash videos. Granted, as the Boot2Gecko debate shows, a partially similar situation may apply to shipping desktop Linux variants as well, if they are actually to ship via partners.

Anyway, the best example of brands is MeeGo. Even the LWN editor does not get into dissecting the meaning of MeeGo on the Nokia N9, because "there has been no real agreement on that in the past", but just uses the brand name as is. That is the power of brands. Technical people argue that it's not really MeeGo, it's maemo GNU/Linux with a special permission from the Linux Foundation to use the MeeGo brand name; the counter-argument is that MeeGo is an API, and maemo matches the MeeGo 1.2 API, which is actually just the Qt 4.7 API. And actually, as the LWN example proves, it's not even "technical people". For most technical people the Nokia N9 is MeeGo as well. Only the people who have actually worked on it, plus the few others who migrated from the maemo and MeeGo.com communities to the Mer project, understand the legacy, the history and the complete difference between the Maemo Harmattan platform and what MeeGo.com was. Yet at the same time, like all GNU/FreeDesktop.org/Linux distributions, they are 95% the same code; just the infrastructure, packaging, polishing and history are different.

For 99.9% of the people who know what MeeGo is, MeeGo is the cool Nokia smartphone, one of a kind and not sold in some of the major western markets, and/or the colorful sweet characters of meego.com shown at one time on a couple of netbooks. That is the brand, and the technical details do not matter even to technical people unless they actually get into working on the projects directly.

To most people, the Linux brand is a mess of a lot of things, while other brands have the possibility at least to have a more differentiating and unique appearance.

Read more
Matt Fischer

A couple of weeks ago I participated in the Ubuntu Global Jam/Bug Fix session. I started by looking at the “fails to build from source” (ftbfs) list. I read through the list and found a few interesting packages, but ended up working on live-manual, because I use live-build and it also seemed odd that we couldn't build a user manual.

At first I was confused, because I pulled the source, ran dpkg-buildpackage on it, and it worked fine on my amd64 box. So I tried an i386 VM that I had running; it worked fine there too! Next, the litmus test: a build in a pbuilder chroot. Guess what, that one failed. I traced the errors back to this one error when building the German user manual:

ERROR occurred, message:”invalid byte sequence in US-ASCII”

It was odd: why were we processing a German user manual file using a US-ASCII locale?

I compared the environment between my system, where it built, and the pbuilder chroot, where it failed, and found that inside a pbuilder chroot the LC_ALL variable is set to “C”. C is the equivalent of US-ASCII. After trying in vain to change the locale to something that understands UTF-8, I realized that I needed to install a locale that could handle UTF-8 inside my pbuilder chroot.
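To see why the C locale bites here, the mismatch can be reproduced in any shell (a sketch; on glibc systems the C locale's character map is plain US-ASCII, and the sample bytes are just UTF-8 "Grüße"):

```shell
# The pbuilder chroot exports LC_ALL=C; ask what character map that implies:
LC_ALL=C locale charmap   # on glibc this prints ANSI_X3.4-1968, i.e. US-ASCII

# Non-ASCII bytes are then illegal input, which is exactly the class of
# error the build hit:
if ! printf 'Gr\303\274\303\237e' | iconv -f US-ASCII -t UTF-8 >/dev/null 2>&1; then
    echo "invalid byte sequence in US-ASCII"
fi
```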

debian/control:
-Build-Depends: debhelper (>= 8), ruby, libnokogiri-ruby
+Build-Depends: debhelper (>= 8), ruby, libnokogiri-ruby, language-pack-en

debian/rules:

%:
- dh ${@}
+ LC_ALL=en_US.UTF-8 dh ${@}

Now the build was working, so I submitted my merge proposal.

My fix was accepted, but discussion revealed that it wouldn't work in Debian because their locale packages are different. With that in mind, I worked on a more generic fix, based on what python-djvulibre does. After some tweaking (not shown in the merge proposal), I settled on this fix:

debian/rules:

+override_dh_auto_build:
+ mkdir -p debian/tmp/locale/
+ localedef -f UTF-8 -i en_US ./debian/tmp/locale/en_US.UTF-8/
+ export LOCPATH=$(CURDIR)/debian/tmp/locale/ && \
+ export LC_ALL=en_US.UTF-8 && \
+ dh_auto_build

Note: During the development of that fix, I had trouble finding all the different steps in the debhelper build and what I could override. A Debian developer pointed me to this great presentation, which answered a lot of questions: Joey Hess’ Not Your Grandpa’s Debhelper.

So there are two ways to solve a locale issue during a build. Do you know of any others? If so I’d love to hear them in the comments.

Read more
Timo Jyrinki

Just a quick note that the merry Finnish localization folks are organizing an (extended) localization weekend, starting today. As a nice step towards ease of use, they're utilizing the long-developed, maybe even underused Translatewiki.net platform (or, to be precise, a separate instance of it). Translatewiki.net is used by MediaWiki (Wikimedia Foundation), StatusNet and other high-profile projects. Coincidentally, the main developer of Translatewiki.net is Finnish as well.

Anyway, enough about the platform: join the translation frenzy at http://l10n.laxstrom.name/wiki/Gnome_3.4, but do make sure to read the notes at http://muistio.tieke.fi/IYZxesy9uc.

I've promised to help in upstreaming those to git.gnome.org on Sunday. There is additionally a new report about Ubuntu 12.04 LTS translations schedule (to which these GNOME contributions will find their way as well) at the ubuntu-l10n-fin mailing list by Jiri.

And the same in Finnish.

Read more
pitti

Part of our efforts to reduce power consumption is to identify processes which keep waking up the disk even when the computer is idle. This already resulted in a few bug reports (and some fixes, too), but we only really just began with this.

Unfortunately there is no really good tool to trace file access events system-wide. powertop claims to, but its output is both very incomplete and also wrong (e. g. it claims that read accesses are writes). strace gives you everything you do and don’t want to know about what’s going on, but it is per-process, and attaching strace to all running and new processes is cumbersome. blktrace is system-wide, but operates at way too low a level for this task: its output no longer has anything to do with files or even inodes, just raw block numbers which are impossible to convert back to an inode and file path.

So I created a little tool called fatrace (“file access trace”, not “fat race” :-) ) which uses fanotify, a couple of /proc lookups and some glue to provide this. By default it monitors the whole system, i. e. all mounts (except the virtual ones like /proc, tmpfs, etc.), but you can also tell it to just consider the mount of the current directory. You can write the log into a file (stdout by default), and run it for a specified number of seconds. Optional time stamps and PID filters are also provided.

$ sudo fatrace
rsyslogd(967): W /var/log/auth.log
notify-osd(2264): O /usr/share/pixmaps/weechat.xpm
compiz(2001): R device 8:2 inode 658203
[...]

It shows the process name and pid, the event type (Read, Write, Open, or Close), and the path. Sometimes it’s not possible to determine a path (usually because it’s a temporary file which already got deleted, and I suspect mmaps as well); in that case it shows the device and inode number, and such programs then need closer inspection with strace.

If you run this in gnome-terminal, there is an annoying feedback loop, as gnome-terminal causes a disk access with each output line, which then causes another output line, ad infinitum. To fix this, you can either redirect output to a file (-o /tmp/trace) or ignore the PID of gnome-terminal (-p `pidof gnome-terminal`).

So to investigate which programs are keeping your disk spinning, run something like

  $ sudo fatrace -o /tmp/trace -s 60

and then do nothing until it finishes.

My next task will be to write an integration program which calls fatrace and powertop, and creates a nice little report out of that raw data, sorted by number of accesses and process name, and all that. But it might already help some folks as it is right now.
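Until then, a rough per-process tally can already be pulled out of the log with standard tools (a sketch, assuming the `name(pid): EVENT /path` output format shown above; the sample log content is made up):

```shell
# A tiny sample log in fatrace's output format (hypothetical content):
cat > /tmp/trace <<'EOF'
rsyslogd(967): W /var/log/auth.log
rsyslogd(967): W /var/log/syslog
compiz(2001): R /usr/share/pixmaps/weechat.xpm
EOF

# Count events per process, busiest first:
cut -d'(' -f1 /tmp/trace | sort | uniq -c | sort -rn
#   2 rsyslogd
#   1 compiz
```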

The code lives in bzr branch lp:fatrace (web view), you can just run make and sudo ./fatrace. I also uploaded a package to Ubuntu Precise, but it still needs to go through the NEW queue. I also made a 0.1 release, so you can just grab the release tarball if you prefer. Have a look at the manpage and --help, it should be pretty self-explanatory.

Read more
Martin Pool

Jelmer writes:

bzr-builddeb 2.8.1 has just landed on Debian Sid and Ubuntu Precise. This version contains some of my improvements from late last year for the handling of quilt patches in packaging branches. Most of these improvements depend on bzr 2.5 beta 5, which is also in Sid/Precise.

The most relevant changes (enabled by default) are:

  • ‘bzr merge-package’ is now integrated into ‘bzr merge’ (it’s just a hook that fires on merges involving packages)
  • patches are automatically unapplied in relevant trees before merges
  • before a commit, bzr will warn if you have some applied and some unapplied quilt patches

Furthermore, you can now specify whether you would like bzr to automatically apply all patches for stored data and whether you would like to automatically have them applied in your working tree by setting ‘quilt-tree-policy‘ and ‘quilt-commit-policy‘ to either ‘applied‘ or ‘unapplied‘. This means that you can have the patches unapplied in the repository, but automatically have them applied upon checkout or update. Setting these configuration options to an empty string causes bzr to not touch your patches during commits, checkout or update.
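For reference, here is a sketch of how those options could look in a builddeb configuration file; the section name and file location (e.g. debian/local.conf) are my assumption from bzr-builddeb's conventions, so double-check against your setup:

```ini
[BUILDDEB]
# Store revisions with patches unapplied...
quilt-commit-policy = unapplied
# ...but apply them automatically in the working tree on checkout/update:
quilt-tree-policy = applied
```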

We’ve done some testing of it, as well as running through a package merge involving patches with Barry, but none of us do package merges regularly. If you do run into issues or if you think there are ways we can improve the quilt handling further, please comment here or file a bug report against the UDD project.

Caveats:

  • If there are patches to unapply for the OTHER tree, bzr will currently create a separate checkout and unapply the patches there. This may have performance consequences for big packages. The best way to prevent this is to set ‘quilt-commit-policy = unapplied‘.
  • ‘bzr merge’ will now fail if you are merging in a packaging tree that is lacking pristine tar metadata; I’m submitting a fix for this, but it’s not in 2.8.1.
  • if you set ‘quilt-commit-policy‘ and ‘quilt-tree-policy‘ but have them set to a different value, bzr will consider the tree to have changes.

To disable the automatic unapplying of patches and fall back to the previous behaviour, set the following in your builddeb configuration:

quilt-smart-merge = False

Read more
Timo Jyrinki

I have a new GPG key


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA512

Hello,

I'm transitioning from my 2003 GPG key to a new one.

The old key will continue to be valid for some time, but I eventually
plan to revoke it, so please use the new one from now on. I would also
like this new key to be re-integrated into the web of trust. This message
is signed by both keys to certify the transition.

The old key was:

pub 1024D/FC7F6D0F 2003-07-10
Key fingerprint = E6A8 8BA0 D28A 3629 30A9 899F 82D7 DF6D FC7F 6D0F

The new key is:

pub 4096R/90BDD207 2012-01-06
Key fingerprint = 6B85 4D46 E843 3CD7 CDC0 3630 E0F7 59F7 90BD D207

To fetch my new key from a public key server, you can simply do:

gpg --keyserver pgp.mit.edu --recv-key 90BDD207

If you already know my old key, you can now verify that the new key is
signed by the old one:

gpg --check-sigs 90BDD207

If you don't already know my old key, or you just want to be double
extra paranoid, you can check the fingerprint against the one above:

gpg --fingerprint 90BDD207

If you are satisfied that you've got the right key, and the UIDs match
what you expect, I'd appreciate it if you would sign my key:

gpg --sign-key 90BDD207

Lastly, if you could send me these signatures, i would appreciate it.
You can either send me an e-mail with the new signatures by attaching
the following file:

gpg --armor --export 90BDD207 > timojyrinki.asc

Or you can just upload the signatures to a public keyserver directly:

gpg --keyserver pgp.mit.edu --send-key 90BDD207

Please let me know if there is any trouble, and sorry for the inconvenience.

(this post has been modified from the example at
http://www.debian-administration.org/users/dkg/weblog/48)

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)

iEYEARECAAYFAk8GuZoACgkQgtffbfx/bQ9nqACglWyHnDTFQfdKmz8OCd3oL6iR
hcEAmgKJ7RZsgwxwkRGPhygy5y1Ztb+3iQIcBAEBCgAGBQJPBrmaAAoJEOD3WfeQ
vdIHdVQQAMT1yvIogzbtK6sUnWqwbrXI9pDEFk7AzJTb80R+wzxsw7gu9gcBDk8G
BL2O26GKUqKWA3ytuApSl42FJam/Lusi9npT3XNkmHs6FaBMNuLYrqEXmCwXwWr/
OrLyeeLiF4yxgbNWbv+600BqAWqFlo6NeTgQKsJWtCjR3RVMxX3R8nzjDnKJuF+z
c6+2JKBWyx/HVUKcJpJrFDDR36HRFvVJomTuma2JCQ/RAl9vzAguqNYOi1QkuuQv
EF1gXH7gLifukGuwquP1DHP6SWWkj77jtRWr5ewC0xymbrArzAwKbvMQl3VpKBHh
MmpJjYP3ECyL14AKi/TY2Lidi0Sf6yqFMcPcreoih01N0OU0NXmD4IrHMT24/ssb
okDUe1o3YImjGq1jTACvlzC8s54EfLsqDgSP98SGVpuoDqPJUwVk4nuHj8q0vDSs
qZox26gVwB2FAOUi1BFiZbIzM5rsyYfCGyWUGiAwBFf54lYRAeCDCt8iAOOL1Ov/
TumIGYdLoXnDuOJq1VjXLGx2OFDrpyU8SPGoa3zNEVz39tgxQ48ASJEqcqt7HvBy
IW+TTsMLdJ1Ait9aCM3mzzr1iwP8TrL0qUsdRLOE6AKdAqocIfqXY8OeDKhbUiOJ
CXWk5q3xheK3sDWUXX7J63bAAUH4jFnpQEOVMJKBUNMKsWa0iXDS
=mklN
-----END PGP SIGNATURE-----

Read more

Zeeshan sends along that he’s looking for help bringing GNOME Boxes to Ubuntu and Debian.

If you have read any of my previous blog entries, you must by now be familiar with this ‘express installation’ concept we have in Boxes. It’s pretty neat actually: you just set a few options at the beginning, then you can leave Boxes (or your machine), and when you are back, everything is set up for you automatically in a new box.

I have invested a lot of time and effort in this already and will be spending a lot more in the future as well, but I am just one man, so I cannot possibly cover all the operating systems out there. That is why I am asking for help from anyone who is interested in adding express installation support for Ubuntu and Debian while I focus on Fedora and Windows variants. Oh, and if you are interested in adding support for some other distribution/OS, that contribution will also be more than welcome.

In any case, happy hacking!

If you’re interested in doing this (it would be great to get Boxes in 12.04) let me know. You’ll likely need to link up with the Desktop Team, I can help get you talking to the right people if you want to rock this.

Read more

This weekend, we held a combined Debian Bug Squashing Party and Ubuntu Local Jam in Portland, OR. A big thank you to PuppetLabs for hosting!

Thanks to a brilliant insight from Kees Cook, we were able to give everyone access to their own pre-configured build environment as soon as they walked in the door, by deploying schroot/sbuild instances in "the cloud" (in this case, Amazon EC2). Small blips with the mirrors notwithstanding, this worked out pretty well, and let people get their hands dirty right away instead of spending a lot of time up front on the boring work of setting up a build environment. This was a big win for people who had never done a package build before, and I highly recommend it for future BSPs. You can read about the build environment setup in the Debian wiki, and details on setting up your own BSP cloud in Kees's blog.

(And the cloud instances were running Ubuntu 11.10 guests, with Debian unstable chroots - a perfect pairing for our joint Debian/Ubuntu event!)

So how did this curious foray into a combined Ubuntu/Debian event go? Not too shabby:

  • Roughly 25 people participated in the event - a pretty good turnout considering the short notice we gave. Thanks to everyone who turned up!
  • Multiarch patches were submitted for 14 library packages by 9 distinct contributors
  • Four of these people submitted their first patch to Debian!
  • Three more contributors worked on patches that were not submitted to Debian by the end of the event, but we will stalk them and see to it that their patches make it in ;)
  • 8 Ubuntu Stable Release Updates were looked at for verification of fixes
  • 7 of these fixes were successfully verified (one bug was not reproducible)
  • 6 of those packages have already been moved to the -updates pocket, where all of Ubuntu's users can now benefit from them

When all was said and done, we didn't get a chance to tackle any wheezy release critical bugs like we'd hoped. That's ok, that leaves us something to do for our next event, which will be bigger and even better than this one. Maybe even big enough to rival one of those crazy, all-weekend BSPs that they have in Germany...

Read more
Timo Jyrinki

Since I think I just summarized a few thoughts of mine well at LWN, I'll copy-paste it here:

I can grumble about Android from time to time, but I do not say that it sucks. Extreme views are what's annoying. Android is what it is and it's great as it is, even though it could be different as well.

When it comes to discussing free software and mobile phones, I'm especially annoyed by two types of comments:

1. People essentially saying that there is no value in an open project, ie. that free software code dumps should be enough for everybody. I'm interested in the long-term viability of free software projects, and it is hard to have successful projects without all the factors that make up a good project - like transparency, inclusion, meritocracy. Even though the mobile projects have had few resources and a hard road, it's not useful to forget about these goals in the longer term. For example Debian, Mer, SHR and KDE Plasma Active have some of these in the mobile sector. I hope the best for them (and participate).

2. People complaining about something not being 100% free software, while not themselves actually being interested in it for any sake other than complaining. When I've been talking about free software mobile phones, from time to time someone complains about eg. the GSM stack not being open, wlan firmware, etc. - and, to put it sharply, probably writing the message from an iPhone while I'm reading it on a Neo FreeRunner. If the complainer were Harald Welte, I'd probably listen and agree with him.

So there. For more civilized discussion.

Read more
Timo Jyrinki

Almost forgot to post this. My mobile phones running free software in photos. From left to right:



All of the software running on these devices is more or less free software, with Harmattan obviously being by far the least free, especially its applications, but still better than any other off-the-shelf phone software *), and the others being 99% or "Ubuntu-like" free, ie. possibly with firmware and a few driver exceptions. The N9 still needs some bootloader work before Nemo, Debian, Ubuntu etc. can be run on it. I've collected a few things about the N9 from this point of view on a wiki page.
*) Not sure about every Android phone, but Android is not openly developed anyway, so it's hardly a free software project in the way the freedesktop.org projects or Qt are

I gave my N900 away now, since obviously I cannot make full use of every one of these. I'm multi-SIMming my N9 and the GTA02a7 Neo FreeRunner for daily use, while the other FreeRunner and the N950 are purely for tinkering purposes. The development FreeRunner will get an upgrade to GTA04 once it's available, and then hopefully that can be made into a daily usable phone as well.

By the way, see you at FSCONS in Gothenburg next weekend. Even rms will be there, which is always interesting of course :)

Read more

Raphaël Hertzog recently announced a new dpkg-buildflags interface in dpkg that at long last gives the distribution, the package maintainers, and users the control they want over the build flags used when building packages.

The announcement mail gives all the gory details about how to invoke dpkg-buildflags in your build to be compliant; but the nice thing is, if you're using dh(1) with debian/compat=9, debhelper does it for you automatically so long as you're using a build system that it knows how to pass compiler flags to.

So for the first time, /usr/share/doc/debhelper/examples/rules.tiny can now be used as-is to provide a policy-compliant package by default (setting -g -O2 or -g -O0 for your build regardless of how debian/rules is invoked).

Of course, none of my packages actually work that way; among other things, I have a habit of liberally sprinkling DEB_CFLAGS_MAINT_APPEND := -Wall in my rules, and sometimes DEB_LDFLAGS_MAINT_APPEND := -Wl,-z,defs and DEB_CFLAGS_MAINT_APPEND := $(shell getconf LFS_CFLAGS) as well. And my upstreams' build systems rarely work 100% out of the box with dh_auto_* without one override or another somewhere. So in practice, the shortest debian/rules file in any of my packages seems to be 13 lines currently.

But that's 13 lines of almost 100% signal, unlike the bad old days of cut'n'pasted dh_* command lists.

The biggest benefit, though, isn't in making it shorter to write a rules file with the old, standard build options. The biggest benefit is that dpkg-buildflags now also outputs build-hardening compiler and linker flags by default on Debian. Specifically, using the new interface lets you pick up all of these hardening flags for free:

-fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro

It also lets you get -fPIE and -Wl,-z,now by adding this one line to your debian/rules (assuming you're using dh(1) and compat 9):

export DEB_BUILD_MAINT_OPTIONS := hardening=+pie,+bindnow
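Pulled together, a minimal debian/rules along these lines might look like this (a sketch, assuming dh(1) and debhelper compat 9; the recipe line must be indented with a tab, and the -Wall append is just an illustrative maintainer addition):

```make
#!/usr/bin/make -f

# Opt into PIE and bindnow on top of the default hardening flags:
export DEB_BUILD_MAINT_OPTIONS := hardening=+pie,+bindnow
# Maintainer additions on top of the dpkg-buildflags defaults:
export DEB_CFLAGS_MAINT_APPEND := -Wall

%:
	dh $@
```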

Converting all my packages to use dh(1) has always been a long-term goal, but some packages are easier to convert than others. This was the tipping point for me, though. Even though debhelper compat level 9 isn't yet frozen, meaning there might still be other behavior changes to it that will make more work for me between now and release, over the past couple of weekends I've been systematically converting all my packages to use it with dh. In particular, pam and samba have been rebuilt to use the default hardening flags, and openldap uses these flags plus PIE support. (Samba already builds with PIE by default courtesy of upstream.)

You can't really make samba and openldap out on the graph, but they're there (with their rules files reduced by 50% or more).

I cannot overstate the significance of proactive hardening. There have been a number of vulnerabilities over the past few years that have been thwarted on Ubuntu because Ubuntu is using -fstack-protector by default. Debian has a great security team that responds quickly to these issues as soon as they're revealed, but we don't always get to find out about them before they're already being exploited in the wild. In this respect, Debian has lagged behind other distros.

With dpkg-buildflags, we now have the tools to correct this. It's just a matter of getting packages to use the new interfaces. If you're a maintainer of a security sensitive package (such as a network-facing daemon or a setuid application), please enable dpkg-buildflags in your package for wheezy! (Preferably with PIE as well.) And if you don't maintain security sensitive packages, you can still help out with the hardening release goal.

Read more
Timo Jyrinki

The MeeGo community is frustrated by the news of the MeeGo brand being abandoned. Some are understandably angry or otherwise unhappy about how the Linux Foundation and Intel handled the Tizen announcement and the community in general - or rather how they didn't handle it at all. Last week, Openmind 2011 happened to be arranged in Tampere on the very same day the Tizen announcement went live. That was good in the sense that it let Nomovok's CEO Pasi Nieminen open the "Reigniting MeeGo" session not just by talking vaguely about the future, but about the actual process which led to Tizen and the unfortunately brief initial PR about it. Pasi is intent on emphasizing the quality and role of Qt in Tizen as well, even though officially Tizen is all about HTML5, and apparently on Samsung's part at least EFL is provided as a native toolkit. However, the promise of Tizen compared to MeeGo is reportedly that the toolkit is not specified in the compliance documents: HTML5 with WAC is the main/only "3rd party apps" layer, whereas others can be offered case by case. This means that unlike before, the underlying system can (theoretically) be built on top of practically any distribution, using whatever toolkits and other techniques are wanted. Obviously the "Nordic System Integrators" are probably all very keen to use Qt to produce more Nokia N9-quality user experiences in various products.

Taking my corporate hat off, I as a community member am also puzzled. The only reason I was not completely blown away by the news is that I hadn't yet managed to get involved in the MeeGo community on a daily basis, since I'm involved with a dozen communities already. Instead I've been more like scratching the surface with MeeGo Network Finland meetings, IRC activity, and OBS usage for building a few apps for MeeGo Harmattan and MeeGo proper. But I can somewhat understand how people like Jarkko Moilanen from meego-fi feel. They have given a _lot_ to the MeeGo community and brand, all taken away without a hearing or prior notice.

So where to now for the MeeGo community? Tizen is one obvious choice. However, for all the talk that even I started this post with, Tizen is still vaporware today, and the dislike of how the community is being treated might make it easy to consider other options. Also, if Tizen's reference implementation has lesser meaning, it might mean less to actually be "in" the Tizen community than in MeeGo. I met Jos Poortvliet at Openmind, and he invited people to openSUSE. There is a lot of common ground between MeeGo and openSUSE - strong OBS usage, RPM packaging, and a community side focused on KDE and therefore Qt.

I would now like to point similarly to Debian! If one is tired of corporate interests and of the community not being listened to, there is no match for Debian's 15+ years of history, its purely volunteer-based, trust-based organization, and first of all its scope. While openSUSE has traditionally focused on the desktop (even though, as Jos pointed out, they are open to all new contributions and projects), Debian has always had the "universal" scope, ie. no boundaries besides producing a free software operating system for various purposes. There are over 10 architectures maintained at the moment, including ARM (different ports for ARMv4 and hard-float ARMv7) and x86 from the MeeGo world. There are even alternative kernels to Linux, mainly the GNU/kFreeBSD port. There are multiple relevant plans and projects like the Smartphones wiki area, most noticeably Debian on the Neo FreeRunner. I have run Debian on my primary mobile phone for over 2.5 years, although in the recent months I've had dual SIMs with my Nokia N950 as well (Debian is not yet running on the Nokia N950 or Nokia N9 - but it can and will be done!).

What Debian may lack, in both good and bad, is corporate funding, if you don't count the still quite respectable contributions from Ubuntu to Debian (it's in Ubuntu's interest to contribute as much as possible back to Debian, so that the delta remains small). Each and every aspect needs a volunteer: there are a thousand volunteer Debian Developers, and at least double that number of people without the official DD status who still maintain a package or two among the 25000+ packages in Debian. That also means one may find it more lucrative to join a project that has paid people to do some of the "boring parts" and more of the fancy web tools, including for bug handling and build systems like the OBS (which I do love, by the way). On the other hand, there is no other project, in my opinion, where what you do matters as much.

To find out more about Debian from a MeeGo perspective, please see the recent mailing list post Mobile UXes - From the DebConf11 BoF to the stars, where I wrote most of the MeeGo (CE) part, having been asked to because of my known MeeGo involvement.

Last but certainly not least, there is the Mer project - originally "maemo reconstructed", ie. making Nokia's "not really a distro" into a real distro by filling in the void places. Now it's obviously MeeGo reconstructed, and they aim to be the MeeGo they always wanted MeeGo to be! Read the post for details from Carsten Munk and other key Mer people. They share the love for Qt, and want the core to be as lean as possible. They also aim to incorporate the most community-like aspect of MeeGo - MeeGo CE - as the reference vendor in Mer. They also aim to be Tizen compliant - and when Tizen comes alive, I wouldn't see why the Tizen reference implementation couldn't be used to save resources. Maybe Nomovok and/or others could offer the Qt maintaining part.

So, it might be that Tizen itself is enough for most people's needs. The key point of this post, however, is not to fall into agony if one corporate-based project takes big turns - it has happened before, and it will happen in the future. There are always enough political and business reasons, from some point of view, to make Big Changes. But the wider community is out there, always, and it's bigger than you think. You should consider where you want to contribute by asking yourself why you are/were part of, for example, the MeeGo community. Aaron Seigo from KDE asked us all this question in the Openmind MeeGo Reignited session, and I think it's worth repeating.

Read more
pitti

Hot on the heels of the PostgreSQL 9.1.0 release I am happy to announce that the final version is now packaged for Debian unstable, the current Ubuntu development version “Oneiric”, and also in my Ubuntu backports PPA for Ubuntu 10.04 LTS, 10.10, and 11.04.

Enjoy trying out all the cool new features like built-in synchronous replication, per-column collation settings for correctly handling international strings, or even finer-grained access control for large environments. Please see the detailed explanation of the new features.

As already announced a few days ago, 9.0 is gone from Ubuntu 11.10, as 11.10 is still only a development version and not an LTS. 9.1 will be the version supported by the next LTS, 12.04, so this slightly reduces the number of major upgrades Ubuntu users will need to do. However, 9.0 will still be available in Debian unstable and backports, and in the Ubuntu backports PPA for a couple of months, to give DB administrators some time to migrate.

Read more
pitti

PostgreSQL 9.1 has had its first release candidate out for some two weeks without major problem reports, so it’s time to promote it more heavily. If you use PostgreSQL, now is the time to try it out and report problems.

We always strive to minimize the number of major versions which we have to support. They not only mean more maintenance for developers, but also more upgrade cycles for the users.

9.0 has not been in any stable Debian or Ubuntu release, and 9.1 final will be released soon. So we recently updated the current Ubuntu development release for 11.10 (“oneiric”) to 9.1. In Debian, the migration from 8.4/9.0 to 9.1 is making good progress, and there is not much which is left until postgresql-9.0 can be removed.

Consequently, I also removed 9.0 from my PostgreSQL backports PPA, as there is nothing left to backport it from. However, that mostly means that people will now set up new installations with 9.1 instead of 9.0; it won’t magically make your already installed 9.0 packages go away. They will just be marked as obsolete in the postgresql-common debconf note.

If you want to build future 9.0 packages yourself, you can do this based on the current branch: bzr branch lp:~pitti/postgresql/debian-9.0, get the new upstream tarball, name it accordingly, add a new changelog entry with a new upstream version number, and run bzr bd to build the package (you need to install the bzr-builddeb package for this).
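Spelled out as commands, that workflow might look roughly like this (the X.Y.Z version and tarball URL are placeholders, not a real future release):

```shell
bzr branch lp:~pitti/postgresql/debian-9.0
cd debian-9.0
# fetch the new upstream tarball and name it as the packaging expects
wget https://ftp.postgresql.org/pub/source/vX.Y.Z/postgresql-X.Y.Z.tar.bz2
mv postgresql-X.Y.Z.tar.bz2 ../postgresql-9.0_X.Y.Z.orig.tar.bz2
dch -v X.Y.Z-1 "New upstream release"    # add a new changelog entry
bzr bd                                   # build (needs the bzr-builddeb package)
```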

Update 2011-09-09: As I got a ton of pleas to continue the 9.0 backports for a couple of months, and to keep it in Debian unstable for a while longer, I have put them back now. I also updated the removal request in Debian to point out that I’m mainly interested in getting 9.0 out of testing. I don’t mind much maintaining it for a couple more months in unstable. My dear, I had no idea that my backports PPA was that popular!

Read more
Barry Warsaw

sbuild is an excellent tool for locally building Ubuntu and Debian packages.  It fits into roughly the same problem space as the more popular pbuilder, but for many reasons, I prefer sbuild.  It's based on schroot to create chroot environments for any distribution and version you might want.  For example, I have chroots for Ubuntu Oneiric, Natty, Maverick, and Lucid, and for Debian Sid, Wheezy, and Squeeze, for both i386 and amd64.  It uses an overlay filesystem so you can easily set up the primary snapshot with whatever packages or prerequisites you want, and the individual builds will create a new session with an overlaid temporary filesystem on top of that, so the build results will not affect your primary snapshot.  sbuild can also be configured to save the session depending on the success or failure of your build, which is fantastic for debugging build failures.  I've been told that Launchpad's build farm uses a customized version of sbuild, and in my experience, if you can get a package to build locally with sbuild, it will build fine in the main archive or a PPA.

Right out of the box, sbuild will work great for individual package builds, with very little configuration or setup.  The Ubuntu Security Team's wiki page has some excellent instructions for getting started (you can stop reading when you get to UMT :).
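Once the chroots are set up, an individual build is a single command; a rough sketch (the chroot name and package file here are examples, not from the wiki page):

```shell
# list the available chroots, then build a source package against one of them
schroot -l
sbuild -d oneiric mypackage_1.0-1.dsc
```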

One thing that sbuild doesn't do very well though, is help you build a stack of packages.  By that I mean, when you have a new package that itself has new dependencies, you need to build those dependencies first, and then build your new package based on those dependencies.  Here's an example.

I'm working on bug 832864 and I wanted to see if I could build the newer Debian Sid version of the PySide package.  However, this requires newer apiextractor, generatorrunner, and shiboken packages (and technically speaking, debhelper too, but I'm working around that), so you have to arrange for the chroot to have those newer packages when it builds PySide, rather than the ones in the Oneiric archive.  This is something that PPAs do very nicely, because when you build a package in your PPA, it will use the other packages in that PPA as dependencies before it uses the standard archive.  The problem with PPAs though is that when the Launchpad build farm is overloaded, you might have to wait several hours for your build.  Those long turnarounds don't help productivity much. ;)

What I wanted was something like the PPA dependencies, but with the speed and responsiveness of a local build.  After reading the sbuild manpage, and "suffering" through a scan of its source code (sbuild is written in Perl :), I found that this wasn't really supported by sbuild.  However, sbuild does have hooks that can run at various times during the build, which seemed promising.  My colleague Kees Cook was a contributor to sbuild, so a quick IRC chat indicated that most people create a local repository, populating it with the dependencies as you build them.  Of course, I want to automate that as much as possible.  The requisite googling found a few hints here and there, but nothing to pull it all together.  With some willful hackery, I managed to get it working.

Rather than post some code that will almost immediately go out of date, let me point you to the bzr repository where you can find the code.  There are two scripts: prep.sh and scan.sh, along with a snippet for your ~/.sbuildrc file to make it even easier.  sbuild will call scan.sh first, but here's the important part: it calls that outside the chroot, as you (not root). You'll probably want to change $where though; this is where you drop the .deb and .dsc files for the dependencies.  Note too, that you'll need to add an entry to your /etc/schroot/default/fstab file so that your outside-the-chroot repo directory gets mapped to /repo inside the chroot.  For example:

# Expose local apt repository to the chroot
/home/barry/ubuntu/repo    /repo    none   rw,bind  0 0
An apt repository needs a Packages and Packages.gz file for binary packages, and a Sources and Sources.gz file for the source packages.  Secure APT also requires a Release and Release.gpg file signed with a known key.  The scan.sh file sets all this up, using the apt-ftparchive command.  The first apt-ftparchive call creates the Sources and Sources.gz file.  It scans all your .dsc files and generates the proper entries, then creates a compressed copy, which is what apt actually "downloads".  The tricky thing here is that without changing directories before calling apt-ftparchive, your outside-the-chroot paths will leak into this file, in the form of Directory: headers in Sources.gz.  Because that path won't generally be available inside the chroot, we have to get rid of those headers.  I'm sure there's an apt-ftparchive option to do this, but I couldn't find it.  I accidentally discovered that cd'ing to the directory with the .dsc files was enough to trick the command into omitting the Directory: headers.
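Based on that description, the Sources half of scan.sh might look roughly like this (the repo path is an example; the actual script in the bzr repository is authoritative):

```shell
where=/home/barry/ubuntu/repo   # example; the outside-the-chroot repo directory
cd "$where"   # cd'ing here keeps outside paths out of the Directory: headers
apt-ftparchive sources . > Sources
gzip -9c Sources > Sources.gz
```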

The second call to apt-ftparchive creates the Packages and Packages.gz files.  As with the source files, we get some outside-the-chroot paths leaking in, this time as path prefixes to the Filename: header value.  Again, we have to get rid of these prefixes, but cd'ing to the directory with the .deb files doesn't do the trick.  No doubt there's some apt-ftparchive magical option for this too, but sed'ing out the paths works well enough.
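As a minimal illustration of sed'ing the leaked path prefix out of a Filename: header (the path and package name are made up):

```shell
# strip everything up to the last slash, leaving just the .deb file name
printf 'Filename: /home/barry/ubuntu/repo/foo_1.0_amd64.deb\n' \
  | sed 's|^Filename: .*/|Filename: |'
# prints: Filename: foo_1.0_amd64.deb
```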

The third apt-ftparchive call creates the Release file.  I shamelessly stole this from the security team's update_repo script.  The tricky part here is getting Release signed with a gpg key that will be available to apt inside the chroot.  sbuild comes with its own signing key, so all you have to do is specify its public and private keys when signing the file.  However, because the public key file /var/lib/sbuild/apt-keys/sbuild-key.pub won't be available inside the chroot, the script copies it to what will be /repo inside the chroot.  You'll see later how this comes into play.
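The signing step could be sketched like this, assuming sbuild's default key file locations (verify the secret key path against your sbuild installation and the actual scan.sh):

```shell
# sign Release with sbuild's own key pair
gpg --no-default-keyring \
    --keyring /var/lib/sbuild/apt-keys/sbuild-key.pub \
    --secret-keyring /var/lib/sbuild/apt-keys/sbuild-key.sec \
    --armor --detach-sign --output Release.gpg Release
# copy the public key into the repo so it's reachable as /repo inside the chroot
cp /var/lib/sbuild/apt-keys/sbuild-key.pub /home/barry/ubuntu/repo/
```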

Okay, so now we have the repository set up well enough for sbuild to carry on.  Later, before the build commences, sbuild will call prep.sh, but this script gets called inside the chroot, as the root user.  Of course, at this point /repo is mounted in the chroot too.  All prep.sh needs to do is add a sources.list.d entry so apt can find your local repository, and it needs to add the public key of the sbuild signing key pair to apt's keyring.  After it does this, it needs to do one more apt-get update.  It's useful to know that at the point when sbuild calls prep.sh, it's already done one apt-get update, so this does add a duplicate step, but at least we're fortunate enough that prep.sh gets called before sbuild installs all the build dependencies.  Once prep.sh is run, the chroot will have your overriding dependent packages, and will proceed with a normal build.
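A prep.sh along those lines might look like this (the sources.list.d file name is my own choice; a flat repository layout is assumed):

```shell
#!/bin/sh
# runs inside the chroot, as root; /repo is the bind-mounted local repository
cat > /etc/apt/sources.list.d/local-repo.list <<EOF
deb file:/repo ./
deb-src file:/repo ./
EOF
apt-key add /repo/sbuild-key.pub    # trust the sbuild signing key
apt-get update                      # re-run so the local repo is picked up
```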

Simple, huh?

Besides getting rid of the hackery mentioned above, there are a few things that could be done better:
  • Different /repo mounts for each different chroot
  • A command line switch to disable the /repo
  • Automatically placing .debs into the outside-the-chroot repo directory

Anyway, it all seems to hang together.  Please let me know what you think, and if you find better workarounds for the icky hacks.
 

Read more
Timo Jyrinki

Debian MeeGo mixup. MeeGo screenshot CC-BY wiki.meego.com, Debian screenshot by me
FreeSmartphone.Org (FSO), Openmoko, Debian's FSO group, SHR, QtMoko et cetera are a few of the community-based, intertwined projects to bring free software to smartphones. They have a relatively long and colorful history of doing this, and have nowadays been approaching multiple target devices despite limited resources and, for example, the loss of Openmoko Inc. in 2009. I've been using Debian on my Neo FreeRunner phone for over two years now, with over three years of FreeRunner use altogether. FSO2, the next generation freesmartphone.org stack, is finally coming into Debian now, extending basic phone support beyond Openmoko phones to e.g. the Palm Pre, Nexus One, Nokia N900 and a few HTC phones. It needs a lot of tweaking and e.g. a proper kernel, but still.

Four years after the beginning of sales of the first Openmoko device (Neo1973), we're still in the pioneering phase of free distributions for mobile phones. There is no "Ubuntu for phones" so to speak, not even for a few selected models. Meanwhile, as so often in the wide world, competing free software approaches have arrived. Android is the obvious one, and has seen a port to the Neo FreeRunner among others. Android is not as open a project as one could like, and replaces everything we've known with its own code, but it nevertheless needs to be noted and is completely usable in its free software form. But since there are limitations to its approach, and since it's more of a separate world from the rest of the Linux distributions, it is not as interesting to me as the others in the long run, at least in its current shape.

A competitor more similar to FSO and the distributions using it is MeeGo and its middleware - more precisely, so far, specifically Nokia's efforts on it. Obviously there is the strong contender for the best general-population smartphone of the year, the Qt-based Nokia N9, but its default software is more of a proprietary thing, even though it has a neatly rock-solid free software foundation with separate free/non-free repositories and all that. Nice and great for (GNU/)Linux in general, but the Harmattan software is not exactly on topic for this blog post. It is, however, in my opinion the best marketing to vendors around the world that GNU/Linux + Qt can get, as Android Linux has been taking most of the limelight. Meanwhile, with significantly smaller focus and resources, Nokia has also been sponsoring an attempt to create a truly community-based mobile phone software stack upstream at MeeGo, nowadays called "MeeGo Community Edition", or MeeGo CE for short. It co-operates with the MeeGo Handset target of MeeGo (and Intel), which has no actual consumer target hardware at the moment - although that might change soon (?) - but has been producing some nice applications recently. CE has the target hardware, with additional device-specific and non-specific software cooking. It used to target the Nokia N900 only, but nowadays the project has added the N950 (the for-developers-only phone) and the N9 to its targets, and it is starting to seem there should be no showstoppers to bringing MeeGo CE (or other distributions later on) to them, despite some earlier doubts. A few bits needed for voiding the warranty, as we all want to do, are still missing, though, but coming. After that it's just developing free software. The usual caveats about the specifics of a few kernel drivers apply, as the devices were not designed with the sole purpose of free software drivers in mind. Hopefully mobile 3D gets into better shape in the coming years, but that's another story.

The MeeGo (CE) smartphone middleware, of course, shares nothing with FSO *). While FSO is a "handle everything" stack in itself with its wide variety of daemons, MeeGo consists of more separate software, like Intel's and Nokia's oFono for modem support. FSO demo UIs and SHR UIs have traditionally focused on using Enlightenment 17 for its suitability to low-power machines, while in MeeGo everything is written in Qt. As Qt/QML is becoming faster and faster, and is very powerful to write in, there might be quite a bit of useful software emerging from there for other distributions besides MeeGo itself. Actually, there is already a Debian MeeGo stack maintainers group, although it hasn't yet focused on the UIs as far as I can see (if I have free time I'll join the effort and see for myself in more detail). There is also the QtMoko distribution, based on the original, canceled Trolltech/Nokia Qt Extended (Qtopia) project, but ported to newer Qt versions and put on top of Debian.

*) Although, correct me if I'm wrong - and I might be - the Nokia N900 ISI modem driver in FSO was ported/learned from oFono.

MeeGo CE is not only a project to bring a proper MeeGo distribution to a few smartphones, but also one to shake out bugs in the MeeGo project's contribution and community processes. It is acting as a completely open MeeGo product contributor, and investigating how things should optimally be done so that everything gets integrated into MeeGo properly, and so that proper MeeGo software releases can be made for the target devices. It is therefore also an important project for the vitality of MeeGo as a whole.

MeeGo CE has so far had a whole team at Nokia working on it, but for obvious strategy-related reasons the community cannot rely on that lasting forever. The real hurdle is for the wider free smartphone community to be ready to really embrace a project that is already a community project but to outsiders might seem like a company project. I believe it's never easy to grow a volunteer community if the starting setup is a paid company team; it mainly requires a) interested people and b) a few smart ones taking control and communication responsibility so that it's no longer seen as a company project. MeeGo CE never was a traditional company project, everything being done in the open and together with volunteers, but I know how people perceive these things by default. People tend to assume things are someone else's responsibility.

Whatever happens, I hope there are enough people interested in free software for mobile phones to carry on all these different approaches to the software needed. I hope MeeGo / MeeGo CE will have a great future, and I hope that both the middleware, like FSO and MeeGo's components, and the UIs, like the FSO, SHR, QtMoko and MeeGo Handset UIs, continue to develop. I also hope other distributions like Debian will gather a strong suite of packaged software for smartphones. I know I have had a hard time finding suitable apps for my phone at times.

For those interested in how I use Debian as my mobile phone OS, see http://wiki.openmoko.org/wiki/User:TimoJyrinki

Read more
Barry Warsaw

So, yesterday (June 21, 2011), six talented and motivated Python hackers from the Washington DC area met at Panera Bread in downtown Silver Spring, Maryland to sprint on PEP 382. This is a Python Enhancement Proposal to introduce a better way for handling namespace packages, and our intent is to get this feature landed in Python 3.3. Here then is a summary, from my own spotty notes and memory, of how the sprint went.

First, just a brief outline of what the PEP does. For more details please read the PEP itself, or join the newly resurrected import-sig for more discussions. The PEP has two main purposes. First, it fixes the problem of which package owns a namespace's __init__.py file, e.g. zope/__init__.py for all the Zope packages. In essence, it eliminates the need for these by introducing a new variant of .pth files to define a namespace package. Thus, the zope.interfaces package would own zope/zope-interfaces.pth and the zope.components package would own zope/zope-components.pth.  The presence of either .pth file is enough to define the namespace package.  There's no ambiguity or collision with these files the way there is for zope/__init__.py.  This aspect will be very beneficial for Debian and Ubuntu.

Second, the PEP defines the one official way of defining namespace packages, rather than the multitude of ad-hoc ways currently in use.  With the pre-PEP 382 way, it was easy to get the details subtly wrong, and unless all subpackages cooperated correctly, the packages would be broken.  Now, all you do is put a * in the .pth file and you're done.
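If I read the PEP correctly, a namespace portion would then be laid out roughly like this (reusing the zope example from above; treat this as a sketch of the proposal, not its final form):

```
zope/                      # namespace package: no __init__.py needed
    zope-interfaces.pth    # contains just: *
    interfaces/
        __init__.py        # regular subpackage owned by zope.interfaces
```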

Sounds easy, right?  Well, Python's import machinery is pretty complex, and there are actually two parallel implementations of it in Python 3.3, so gaining traction on this PEP has been a hard slog.  Not only that, but the PEP has implications for all the packaging tools out there, and changes the API requirements for PEP 302 loaders.  It doesn't help that import.c (the primary implementation of the import machinery) has loads of crud that predates PEP 302.

On the plus side, Martin von Loewis (the PEP author) is one of the smartest Python developers around, and he's done a very good first cut of an implementation in his feature branch, so there's a great place to start.

Eric Smith (who is the 382 BDFOP, or benevolent dictator for one PEP), Jason Coombs, and I met once before to sprint on PEP 382, and we came away with more questions than answers.  Eric, Jason, and I live near each other so it's really great to meet up with people for some face-to-face hacking.  This time, we made a wider announcement, on social media and the BACON-PIG mailing list, and were joined by three other local Python developers.  The PSF graciously agreed to sponsor us, and while we couldn't get our first, second, third, or fourth choices of venues, we did manage to score some prime real-estate and free wifi at Panera.

So, what did we accomplish?  Both a lot, and a little.  Despite working from about 4pm until closing, we didn't commit much more than a few bug fixes (e.g. an uninitialized variable that was crashing the tests on Fedora), a build fix for Windows, and a few other minor things.  However, we did come away with a much better understanding of the existing code, and a plan of action to continue the work online.  All the gory details are in the wiki page that I created.


One very important thing we did was to review the existing test suite for coverage of the PEP specifications.  We identified a number of holes in the existing test suite, and we'll work on adding tests for these.  We also recognized that importlib (the pure-Python re-implementation of the import machinery) wasn't covered at all in the existing PEP 382 tests, so Michael worked on that.  Not surprisingly, once that was enabled, the tests failed, since importlib has not yet been modified to support PEP 382.


We also came up with a number of questions where we think the PEP needs clarification.  We'll start discussion about these on the relevant mailing lists.


Finally, Eric brought up a very interesting proposal.  We all observed how difficult it is to make progress on this PEP, and Eric commented on how there's a lot of historical cruft in import.c, much of which predates PEP 302.  That PEP defines an API for extending the import machinery with new loaders and finders.  Eric proposed that we could simplify import.c by removing all the bits that could be re-implemented as PEP 302 loaders, specifically the import-from-filesystem stuff.  The other neat thing is that the loaders could probably be implemented in pure-Python without much of a performance hit, since we surmise that the stat calls dominate. If that's true, then we'd be able to refactor importlib to share a lot of code with the built-in C import machinery.  This could have the potential to greatly simplify import.c so that it contains just the PEP 302 machinery, with some bootstrapping code.  It may even be possible to move most of the PEP 382 implementation into the loaders.  At the sprint we did a quick experiment with zipping up the standard library and it looked promising, so Eric's going to take a crack at this.


This is briefly what we accomplished at the sprint.  I hope we'll continue the enthusiasm online, and if you want to join us, please do subscribe to the import-sig!

Read more
Timo Jyrinki

Just reminding myself (and you) of the following pretty important tip that I got 2.5 years ago and had already forgotten: use alt=rss in Blogger feeds (i.e. append ?alt=rss to the feed URL) when giving the feed to planetplanet. Planetplanet treats the "updated" field of Atom feeds the same as "published".

So hopefully planet Debian is fixed now as well.

Read more
pitti

Hot on the heels of the announcement of the second 9.1 Beta release there are now packages for it in Debian experimental and backports for Ubuntu 10.04 LTS, 10.10, and 11.04 in my PostgreSQL backports for stable Ubuntu releases PPA.

Warning for upgrades from Beta 1: The on-disk database format has changed since Beta 1. So if you already have the Beta 1 packages installed, you need to pg_dumpall your 9.1 clusters (if you still need them) and pg_dropcluster all 9.1 clusters before the upgrade. I added a check to the pre-install script to make the postgresql-9.1 package fail early on upgrade if you still have existing 9.1 clusters, to avoid data loss.
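With postgresql-common's wrappers, the dump-and-drop dance might look like this (the cluster name "main" is an example):

```shell
# save the data from the beta-1 cluster, if you still need it
pg_dumpall --cluster 9.1/main > 9.1-main-backup.sql
# stop and remove the old-format cluster before upgrading the packages
pg_dropcluster --stop 9.1 main
```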

Read more
pitti

Two weeks ago, PostgreSQL announced the first beta version of the new major 9.1 version, with a lot of anticipated new features like synchronous replication or better support for multilingual databases. Please see the release announcement for details.

Due to my recent move and the Ubuntu Developer Summit it took me a bit to package them for Debian and Ubuntu, but here they are at last. I uploaded postgresql-9.1 to Debian experimental; currently it is sitting in the NEW queue, but I’m sure our restless Debian archive admins will get to it in a few days. I also provided builds for Ubuntu 10.04 LTS, 10.10, and 11.04 in my PostgreSQL backports for stable Ubuntu releases PPA.

I provided full postgresql-common integration, i.e. you can use all the usual tools like pg_createcluster, pg_upgradecluster etc. to install 9.1 side by side with your 8.4/9.0 instances, attempt an upgrade of your existing instances to 9.1 without endangering the running clusters, etc. Fortunately, this time there were no deprecated configuration options, so pg_upgradecluster does not actually have to touch your postgresql.conf for the 9.0 → 9.1 upgrade.
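A minimal side-by-side session with those tools might look like this (cluster names are examples):

```shell
pg_createcluster 9.1 main    # create a fresh 9.1 cluster next to 8.4/9.0 ones
pg_upgradecluster 8.4 main   # upgrade an existing 8.4 cluster; the old one is kept
pg_lsclusters                # list all clusters, their versions and ports
```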

They pass upstream’s and postgresql-common’s integration test suites, so they should be reasonably working. But please let me know about anything that doesn’t work, so that we can get them into perfect shape in time for the final release.

I anticipate that 9.1 will be the default (and only supported) version in the next Debian release (wheezy), and it will most likely be the one shipped in the next Ubuntu LTS (12.04). It might be that the next Ubuntu release, 11.10, will still ship with 9.0, but that pretty much depends on how many extensions get ported to 9.1 by feature freeze.

Read more