Canonical Voices

What Alex Chiang talks about


buenos aires, redux

San Francisco has been lovely. And I’m coming back, pretty soon, actually.

But first, I’m taking an Argentinean interlude. Tickets are already booked, leaving 11 March, and returning 5 May.

Even better, I’ve already got a place lined up to live in Buenos Aires, in the Recoleta neighborhood. Looking forward to hanging out with my buddy Salgado.

My plan is to eat empanadas and steak and drink red wine until I gain 20 kilos.

If anyone wants to visit, I’ll have a guest bed.


Read more

san francisco santacon, 2011

I’m happy to announce that a few packages I’ve been working on over the past year have finally landed in Ubuntu Precise[1].

If you have a 3G USB modem that currently doesn’t work well (or at all) in Debian or Ubuntu, you should check this list of modems[2]. If it is listed, then you may be a candidate to try an alternative 3G networking stack.

$ sudo apt-get install wader-core

This command will remove ModemManager and install wader-core. It should be an entirely transparent operation to you, except that after you reboot, your modem should appear as a connection option in the network manager applet.


1: naturally, I was a good boy and uploaded the packages to Debian unstable first
2: this list is predominantly composed of Vodafone-branded modems, but there are others in there as well.

Thanks to the Debian python team for mentoring me and to Al Stone and dann frazier for even more mentoring in addition to sponsoring me.

Read more

Last January through April, I pretty much fell off the face of the earth, in real life as well as online. For those that asked, I alluded to some long hours at work, but of course couldn’t say much publicly.

Well, we finally launched, and I’m quite proud of all our team accomplished.

Without question, this was the hardest project I’ve taken on in my career to date. But I was part of a great team, and we pulled together to ship.

We’re bringing free, open software to the world. This is the mission I signed up for.

Some links for your reading pleasure, take with a grain of salt:

Read more

life changes

SF prep #1
forsooth, a brake!

I’m moving. Travelling, really.

Around the world.

In 3 to 4 month chunks.

A city at a time.

Really, it’s about time. I’ve been thinking about it for several years now, planning piecemeal, laying down disjointed bits of foundation. But it’s happening. For real.

One of the best perquisites of Canonical is the inherent assumption of remote working. As long as you have a laptop and wifi, you could really work from anywhere in the world (modulo a tiny bit of reality, but for the most part true), assuming you remain productive and available for your colleagues.

It’s time to get while the getting’s good, and take advantage of the freedom. Have laptop, sense of adventure, and strong GI tract; hitting the road, in search of wifi and the perfect bánh mi (or empanada, I’m not terribly picky).

I love Fort Collins. It’s the perfect Pleasantville, and I’ve been happy living here for 8 years. But Penelope claims that you cannot have both a happy life and an interesting life; you have to choose one.

So, I choose interesting.

When are you leaving?
I leave Ft. Collins on 30 September 2011.

Where are you going?
First stop is San Francisco.

San Francisco is hilly, isn’t it?
Right-o. Hence the recent addition of a rear brake on my fixie. I’m not too scared of pedaling a 54×19 up hills, but I am scared of riding down them without additional stopping power.

For how long?
My sublease runs until 31 December 2011. I’ll probably extend it by an extra month and stay til 31 January 2012 because moving on New Year’s Eve sucks. Unless the world ends, of course, in which case the move will be permanent.

Then what?
I’ll come back to Ft. Collins to make sure my house hasn’t burnt down. Maybe gather a few things, maybe sell some other things, maybe do a bit of skiing (February is the best ski month in Colorado anyway), and figure out where I’m going next.

Oh, you’re not selling your house?
No, I’m too lazy to pack yet, or to fix the small nagging things that need to be fixed in order to sell a house.

Are you renting it out then?
Yes, I’ve some friends renting it out for the first stretch, but nothing lined up after that. Would you like to rent a nice house in early 2012?

How about your car?
My lovely renters will run it once in a while to keep the battery from dying. But I plan on leaving it garaged in Ft. Collins mostly.

Ok, so what’s next?
I’m not sure. I really want to go to Taipei, but it kinda depends on how my current work project is going. We currently have staff in two major timezones, the Americas and Europe. Stretching staff across 3 timezones into Asia is horrible. I did that for my last project, and it meant that someone always had a 2am meeting, which sucked. So, if current project is winding up as expected, Taipei is next. If not, then the next strongest candidate will be Buenos Aires.

What factors into your choices?
I’d really like to improve my Mandarin. I plan on taking lessons in San Francisco, and continuing them in Taipei if I end up there. Otherwise, my Spanish could use some tuning up as well. And I fucking love empanadas. Seriously. A lot.

One factor to consider is the length of the tourist visa. Most countries will give US citizens a 90-day stamp without too much hassle, so those countries are more appealing. But to be honest, this whole trip is an experiment in playing it by ear.

Why keep coming back to Ft. Collins? Why not just a ’round-the-world ticket?
I wouldn’t exactly call myself commitment-averse, but I’ve noticed a common pattern in my life heretofore has involved a lot of hedging. Also see above note re: ear-playing (which sounds a whole lot worse than the longer phrase).

Will you blog? Tweet? Facebook?
Yes. Yes. No.

Email works too.

Will we still get platypus Friday?
I shall endeavor to please.

Don’t you think fake-asking yourself questions on your own blog is a little pretentious?
At times, I hate me too.

And clichéd?
Ok, ok, I get the point.

In any case, if you have travel suggestions, tips, whathaveyou, I’m happy to hear them all.

Stay tuned to this space for the latest and greatest.



Read more

16 months later

new digs

April 2010, just hired.

clean office, clean mind

August 2011.

Yes, it took me 16 months to get a bookshelf and hang that photo. There’s been a lot of life in-between.

(click for large versions)

Read more

As of this writing, it is a little painful to use pbuilder to create a Debian chroot on an Ubuntu host due to LP: #599695.

The easiest workaround I could figure out was the following:

$ cat ~/.pbuilderrc-debian
COMPONENTS="main contrib non-free"
DEBOOTSTRAPOPTS=("${DEBOOTSTRAPOPTS[@]}" "--keyring=/usr/share/keyrings/debian-archive-keyring.gpg")

And then you can issue:

sudo pbuilder create --basetgz /var/cache/pbuilder/sid.tgz --distribution sid --mirror --configfile ~/.pbuilderrc-debian

The better way to fix this, of course, would be to fix the above bug. But this works for now.

Read more

It is a fact of life that everyone receives more email than they can handle.

It is also a fact that email is a skill, and there are varying levels of proficiency out there.

So, it is only a matter of time before you find yourself on the annoying end of an email thread gone awry. Perhaps it is a discussion on the wrong mailing list, or perhaps it is the infamous 1 grillion people in the To: or Cc: fields problem.

Before long, a “take me off this list” / “stop replying to all” storm ensues, and then something horrible like Facebook gets invented to “solve” this “problem”.

Of course mail filters can be deployed, shielding you from the idiocy. But what if you want to be more proactive? Is there a way to stop the insanity without having to hax0r into the mail server and just start BOFH‘ing luser accounts?

Yes, there is an easy solution that works most (but not all) of the time.

Put all the unintended recipients in the Bcc: field. Put the correct recipients in the To: field.

In the case of discussion on the wrong mailing list, this is easy; just put the correct list in the To: field. Include a note in the mail body, such as “Redirecting to foo list, which is more appropriate.” Respondents will then typically automatically respond to the correct list.

In the case of “too many Cc:s”, there’s no easy answer. You could move all the Cc: to Bcc:, and then put something like none@none.invalid in the To: address. You will get a single bounce, but then so will everyone else who attempts to respond to you. This trick only works because the people who tend to cause the problem also tend to be lazy and just respond to the last mail received. They can’t spam everyone else because their addresses are obfuscated via the Bcc:. If you feel brave, you could socially engineer the recipients by writing something inflammatory, in order to entice them to respond to you, rather than other mails in the thread, which will then result in a bounce.
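The mailing-list redirect above can be sketched with Python’s standard email module (all addresses below are made up for illustration): the correct list goes in To:, the stragglers go in Bcc:, and smtplib’s send_message() omits the Bcc: header from what recipients actually see.

```python
from email.message import EmailMessage

# Sketch of the redirect trick: the right destination in To:, the
# unintended recipients in Bcc:.  Addresses are made up for illustration.
msg = EmailMessage()
msg["From"] = "me@example.org"
msg["To"] = "foo-list@example.org"
msg["Bcc"] = "alice@example.org, bob@example.org"
msg["Subject"] = "Redirecting to foo list, which is more appropriate"
msg.set_content("Moving this thread to the foo list; please follow up there.")

# smtplib.SMTP.send_message() delivers to everyone listed but does not
# transmit the Bcc: header, so a reply-all only sees the To: field.
print(msg["To"])
```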

Hope this helps.

[nb, .invalid is actually a reserved domain, read rfc2606 for more details.]

Read more


Like many open source folks, I consider irc a crucial piece of every day infrastructure. I use bip as a proxy to help me keep up with conversations that occurred while I was away. The next time I connect with my client (xchat, in this case), I get a playback of the old conversations, and my client does the right thing, highlighting tabs if my name was mentioned, etc.

Sometimes though, I want to read old logs, either to remind myself of a conversation I had with someone, or to dig out a URL, or whatever. bip keeps these logs around, but they can be annoying to read.

Here’s an example of how bip stores the logs:

achiang@complete:~/.bip$ ls -R
bip.conf  logs

bip.log  canonical  freenode  oftc  sekrit

2011-03  2011-04  2011-05  2011-06  2011-07

achiang.30.log     #coherellc.31.log      #ubuntu-devel.31.log
chanserv.30.log    #ubuntu-motu.30.log
chanserv.31.log    #ubuntu-motu.31.log
#coherellc.30.log  #ubuntu-devel.30.log

They’re just free form text, which is good, because you can then use normal tools like grep on them. Unfortunately, they’re also full of long, noisy lines that look like:

achiang@complete:~/.bip$ head ./logs/freenode/2011-03/#ubuntu-devel.31.log 
31-03-2011 00:01:48 -!- zeeshan313!~zeeshan@ has joined #ubuntu-devel
31-03-2011 00:03:05 -!- T0rCh__!~T0rCh_rao@ has quit [Remote host closed the connection]
31-03-2011 00:06:23 -!- holstein!~holstein@unaffiliated/mikeh789 has quit [Ping timeout: 240 seconds]
31-03-2011 00:08:23 -!- m_3! has quit [Ping timeout: 276 seconds]
31-03-2011 00:08:48 -!- abhinav-!~abhinav@ has joined #ubuntu-devel
31-03-2011 00:12:02 -!- holstein! has joined #ubuntu-devel
31-03-2011 00:12:03 -!- holstein! has quit [Changing host]
31-03-2011 00:12:03 -!- holstein!~holstein@unaffiliated/mikeh789 has joined #ubuntu-devel
31-03-2011 00:15:35 -!- andreasn!~andreas@ has joined #ubuntu-devel
31-03-2011 00:20:32 -!- TeTeT! has joined #ubuntu-devel

So just viewing them in an editor can be annoying.

And that was a rather long intro for one of the world’s most trivial scripts (which has at least one known bug :-/ ). But anyway, I call the snippet below “bipcat”:


#!/bin/sh
# bipcat: filter joins/parts/quits/renames out of a bip log ($1) and
# trim the "nick!user@host" noise down to just the nick
cat "$1" 	| grep -v "has quit" 		\
	| grep -v "is now known as" 	\
	| grep -v "has joined" 		\
	| grep -v "has left" 		\
	| sed 's/!.*:/:/' 		\
	| cut -f 2- -d' '

And now, you can get much more sensible output:

achiang@complete:~/.bip$ bipcat ./logs/freenode/2011-03/#ubuntu-devel.31.log | head
00:54:45 < dholbach: good morning
01:05:16 < pitti: Good morning
01:58:29 < pitti: should bug 685682 be closed with the new fglrx that we landed yesterday?
01:58:32 < ubottu://
01:59:08 < didrocks: it seems that cnd still have that issue with the driver and workarounded compiz
01:59:41 < didrocks: anyway, there is still a need for a compiz upload which will come with other fixes (probably Monday)
01:59:48 < pitti: ah, thanks
02:00:09 < tseliot: the fix should be available in the next upload of compiz (it's already available in a daily PPA)
02:00:28 < pitti: so I guess for now the fglrx tasks should be closed then?
02:00:30 < didrocks: yeah, but as told, it's not working on cnd's machine, I asked him to check with you and jay

Much nicer!

[the bug is that the sed line does a greedy match, so it replaces everything up to the last ':', which is clearly not the right thing to do if someone actually typed in a ':'. suggestions for improvement welcome]
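For what it’s worth, one possible improvement (treat it as a sketch; it would still misbehave if the user@host part itself contained a colon, e.g. some IPv6 hosts) is to match from the ‘!’ only up to the first following colon:

```shell
# Replace 'nick!user@host:' with 'nick:' by matching up to the FIRST
# colon after the '!', rather than sed's default greedy match.
line='31-03-2011 00:54:45 < dholbach!~daniel@1.2.3.4: good morning: all'
printf '%s\n' "$line" | sed 's/![^:]*:/:/'
# -> 31-03-2011 00:54:45 < dholbach: good morning: all
```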

Read more

There are certain situations where one might want to generate a quick .deb package that just installs things onto the target system without doing anything fancy, like compiling source files.

The classic example is if you are in charge of delivering software to a group of machines, but do not have source code to the software. Maybe you just have a pre-compiled library you want installed somewhere.

You could ask your users to copy the files into place by hand (with libexample.so standing in for whatever you’re shipping):

$ sudo cp libexample.so /usr/lib

But then what if you need to update somehow? I can see the nightmare a-comin’ from all the way over here.

So then you think to yourself, gee, I have a very nice package management system; why don’t I use it? Which means, you’re going to try and teach yourself the bare minimum debian packaging skills needed to do such a thing, to which I say, good luck.

Perhaps there are easy examples out there [and if so, let me know and I'll update this post]; in the meantime, this is the bare minimum that I could come up with.

Hope it helps.

The directory layout follows:

achiang@aspen:~/Projects/example$ ls -RF
.:
debian/  usr/

./debian:
changelog  compat  control*  copyright  install  rules*  source/

./usr:
lib/

Of course, we have the debian/ directory, which is where the magic happens. But the other top-level directory, usr/ in our case, is what we want to install on the target system.

The easiest thing to do is to re-create your target system’s filesystem layout as a top-level directory, and then put your files in the appropriate spot. Here, we want to install into /usr/lib on the target system, so you can see that I’ve recreated it above.
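As a concrete sketch (libexample.so is a made-up payload name), recreating that layout is just:

```shell
# Mirror the target filesystem under the package directory.
# libexample.so stands in for whatever pre-built file you actually ship.
touch libexample.so            # pretend this is your pre-compiled library
mkdir -p example/usr/lib
cp libexample.so example/usr/lib/
ls -RF example
```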

If you also wanted to install, say, an upstart script, you’d also create:

$ ls -RF
debian/  etc/  usr/



Ok, next let’s look at the stuff in debian/:

achiang@aspen:~/Projects/example/debian$ cat rules 
#!/usr/bin/make -f
%:
	dh $@

That’s pretty easy. How about the control file?

achiang@aspen:~/Projects/example/debian$ cat control 
Source: example
Section: libs
Priority: extra
Maintainer: Alex Chiang 
Build-Depends: debhelper (>= 7)
Standards-Version: 3.9.1

Package: example
Architecture: any
Depends: ${misc:Depends}
Description: A skeleton installation deb
 An example, minimal package that installs files into the filesystem, without
 any processing from the packaging system.

Ok, one more interesting file, the ‘install’ file:

achiang@aspen:~/Projects/example/debian$ cat install 
usr/

The usr/ entry in ‘install’ maps to the usr/ directory you created above. Again, if you also wanted to install something into etc/, you’d add the corresponding line into ‘install’. Extend this concept to any other directories/files you’ve created.

The rest of the files are more or less boilerplate. I’ll display them for completeness’ sake:

achiang@aspen:~/Projects/example/debian$ cat compat 
7
achiang@aspen:~/Projects/example/debian$ cat copyright 
This package was debianized by Alex Chiang  on
Mon Jul 11 14:30:17 MDT 2011


    Copyright (C) 2011 Alex Chiang


    This program is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License version 3, as
    published by the Free Software Foundation.
    This program is distributed in the hope that it will be useful, but
    WITHOUT ANY WARRANTY; without even the implied warranties of
    MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
    PURPOSE.  See the applicable version of the GNU Lesser General Public
    License for more details.
On Debian systems, the complete text of the GNU General Public License
can be found in `/usr/share/common-licenses/GPL-3'
achiang@aspen:~/Projects/example/debian$ cat source/format 
3.0 (native)

So, there you have it: a pretty trivial example of how to package a binary inside a Debian source package. Of course, you could do this with text files, PDFs, whatever.

Feedback appreciated.

Read more

Warning: I just learned how to spell python-twisted today, so it’s entirely plausible I’m advocating bonghits below. If you have real answers, especially for developers, please let me know.

Let’s say you’re a sysadmin deploying or a developer writing a twisted app. It’s likely the app is writing a bunch of logs somewhere. The problem is that the default logging settings for twistd are set to write an unbounded number of log files, so over time, it’s possible to fill up your filesystem with old logs.

As a lazy developer (or more likely lazy bug fixer who is just trying to fix something and move on with life without having to learn yet another huge framework because let’s face it, you have gtk, glib, firefox, and kernel bugs that need fixing), you google and find the quick basic help on logging in twisted. You read:

If you are using twistd to run your daemon, it will take care of calling startLogging for you, and will also rotate log files.

So you think “top banana”, and merrily invoke your program with:

from twisted.python.release import sh
sh("/usr/bin/twistd --nodaemon --logfile=%s" % "myapp.log")

And you’re done and it’s time to go to the pub. Not so fast, Speedracer. Let’s take a look at the defaults (I checked twisted 10.0 and 11.0):

class LogFile(BaseLogFile):
    """
    A log file that can be rotated.

    A rotateLength of None disables automatic log rotation.
    """
    def __init__(self, name, directory, rotateLength=1000000, defaultMode=None,
                 maxRotatedFiles=None):

class DailyLogFile(BaseLogFile):
    """A log file that is rotated daily (at or after midnight localtime)
    """

See that maxRotatedFiles=None? It means you will eventually hit -ENOSPC, and pandas will be sad.

A little more digging, and reading through the twisted application framework howto, you get the hint on how to modify the default logging behavior. The example says:

The logging behavior can be customized through an API accessible from .tac files. The ILogObserver component can be set on an Application in order to customize the default log observer that twistd will use.

Ok, so you look at the example, and then you say to yourself, that’s pretty good, but Sidnei da Silva could improve it, and then you bug Sidnei for a few hours, asking him rudimentary questions about whether the whitespace in Python really matters or not, and then he politely gives you the following rune (all the ugliness below comes from me, anything elegant comes from Sidnei):

from twisted.application.service import Application
from twisted.python import log
from twisted.python.logfile import DailyLogFile

application = Application("foo")
logfile = DailyLogFile("myapp.log", "/var/log")
application.setComponent(log.ILogObserver, log.FileLogObserver(logfile).emit)

And that kinda works without any fancy observers or anything, but then you realize that your codebase has the following pattern repeated across many hundreds of source files:

from twisted.python import log
log.msg("unique message 1 out of 47890478932423987234")

[This is the point where I just gave up trying to learn twisted in 45 minutes or less, and also wanted to stress eat a cheeseburger that is 180% beef, 120% bacon while forgetting everything I thought I knew about how percentages worked.]

You think to yourself, “this is Linux, and I wouldn’t be using this OS unless my stubbornness rating was +8” so you go bash around on google for a while before remembering that Linux has a tool to take care of all this for you already called logrotate. So you stop trying to wrestle with twisted and beg your downstream packagers to fix it for you. Example below.

Luckily, twisted’s built-in log rotation mechanism matches up with what logrotate expects, namely, that when it rotates log files, it will do things like:

foo.log -> foo.log.1
foo.log.1 -> foo.log.2

So, all you have to do is teach logrotate about your application’s logfiles, and you are done. The beauty is, all the standard logrotate commands Just Work™.

On debian and Ubuntu, you can drop a new file into /etc/logrotate.d/. Not sure how the other distros package it.

Here’s an example file:

# cat /etc/logrotate.d/myapp
/var/log/myapp.log {
        rotate 1
        nocreate
        copytruncate
        missingok
}

This says:

  1. Pay attention to this file, /var/log/myapp.log. twisted will try and rotate it to /var/log/myapp.log.1, myapp.log.2, etc.
  2. We don’t want many log files around. In fact, we will only keep around myapp.log and myapp.log.1. Anything older gets auto-deleted
  3. Don’t try to create myapp.log. Let the myapp create it on its own
  4. Important! Don’t rename myapp.log to myapp.log.0. Rather, copy it to myapp.log.1, then truncate myapp.log to zero length. This matters because myapp keeps its original file descriptor open and will happily continue writing to it; if the file were renamed out from under it, that output would be lost. The trade-off is that any data logged between the copy and the truncate is dropped. Caveat administrator.
  5. Dear logrotate, please do not puke if you do not find myapp.log, it’s ok, I’ll give you a hug.

And now, twisted and logrotate are playing nicely with each other. On Ubuntu, logrotate runs as a daily cronjob, so twisted shouldn’t get too far ahead by creating too many extra log files. Of course, if it does, you can just create an hourly cronjob or something even more special, but probably the real answer is to discover why myapp is creating more than 1MB of logs so quickly.

One last tip, you can experiment with the behaviors above by playing with twisted and logrotate at the command line, without needing a reboot.

To force twisted to rotate its log files:

# kill -SIGUSR1 <pid of twistd>

To check what logrotate will then do:

# logrotate -f -d /etc/logrotate.d/myapp

Beware, the above doesn’t actually do anything, it just pretends. So to make it actually do stuff, but with verbose output:

# logrotate -f -v /etc/logrotate.d/myapp

And that, friends, is a thousand words on how to solve what should be a pretty simple problem but turns out to be way harder than necessary because you don’t know yet another framework. Good luck. As always, I recommend moAR bacon.

Read more


Those with a keen eye, or those who snoop around in the EXIF data, will note that I made all of these photos with my Canon 10-22 wide angle lens. It’s becoming my favorite general purpose “travel with just one lens” lens in spite of several clear weaknesses. For most tourists who simply want to show they were there, this lens will capture more of “there” than any other, especially the grand buildings that are so prevalent in Europe. And, after a bit of practice, you can start taking advantage of the lens’s distortion to make interesting images of day-to-day life (the small moments are what actually make travel interesting, but usually end up rather boring in photos).

On the down side, the lens is slow and you’ll occasionally get frustrated with the “all wide, all the time” perspective, but on the whole, it works well for me as my walking around tourist lens, especially when you want to travel light.

Check out the full set here:

Budapest 2011

Oh, and for several reasons, I didn’t take many^Wany photos of UDS itself:

  • “still life of people in meeting rooms” isn’t exactly the most exciting subject
  • I left my Speedlite at home
  • my lens is too slow (f/3.5-4.5) for most indoor shooting
  • and anyway, you can see all of Sciri’s fantastic people photos on his site

alberto and mlegris disagree

Read more

coding freudians

/*  This file is part of the KDE project.

    Copyright (C) 2009 Nokia Corporation and/or its subsidiary(-ies).



Innocent English-as-a-second-language typo? Or wry self-referential pun? You decide.

Read more

copyright matters

[direct link for folks who can't see the embedded video above]

The entire world of libre software is built upon the idea that copyright laws must be respected.

And just as vigorously as our community defends the GPL, so too must we defend the rights of other makers whose works are copyrighted.

We are all the intellectual children of Stallman.

[nb, there is of course, a lot of nuance not captured in the black and white declaratives I wrote above; in the US, our IP laws aren't perfect, but our first instinct should be to first patch the bugs, and only much later, tend towards civil disobedience.]

Read more

Public service announcement:

# deluser --force <username>

is quite different from

# userdel -f <username>

You may now go back to your regularly scheduled man page puzzling.

Read more

it’s a tooth!

This is a BoingBoing re-run, but so apropos I had to share it.

Read more

Setting up a password protected channel on freenode involves wrestling with ChanServ, and if you don’t have sacrificial goats handy or an advanced degree in 8th level rune reading, you may be scratching all the hair off your head as you ponder the help that appears during any reasonable google search.

You read the page and think, “ah, I must say /msg ChanServ flags #channel +k mypassword”, but you are wrong. Because then you are missing a username.

Next, you think, “well, I better set this for all users, so I’ll use a wildcard: /msg ChanServ flags #channel *!*@* +k mypassword” but that just doesn’t work.

More googling doesn’t help.

Frustrated, you start feeling empathy with Descartes, wondering if you are the subject of some cruel demon torturing you and tricking you into perceiving a horrible reality where nothing makes sense.

It’s just irc, you tell yourself. ChanServ can’t be that complicated, you tell yourself.

Maybe you’re a stress eater, so you inhale a double-down.


You enter an electric grease-laden food coma, and your spirit animal, a grue, leads you down a twisty maze of passages, all alike, whereupon you enter the Chamber of Knowledge and unfurl the Scroll of Explanation from the Rod of Understanding, and you read it and it says:

/msg ChanServ set #channel mlock +k mypassword


Read more

Today’s simple Debian packaging task was adding a little C program and Makefile into an existing package that didn’t already compile anything. Through the magic of debian/rules, all I had to do was to write a top-level Makefile in the package directory, which would simply recurse into the subdirectory and invoke make there.

Assume a directory structure like so:

$ pwd
$ ls -F
bar/ baz/


The goal is to write a top-level Makefile in foo/ that will recurse into bar/ and baz/ and call the makefiles there.

There are lots of examples on how to write such a Makefile, but I thought they were kinda kludgey. For example, this page has this solution:


DIRS    = bar baz
OBJLIBS = libbar.a libbaz.a

all: $(OBJLIBS)
libbar.a: force_look
        cd bar; $(MAKE)
libbaz.a : force_look
        cd baz; $(MAKE)
install :
        -for d in $(DIRS); do (cd $$d; $(MAKE) install ); done
clean :
        -for d in $(DIRS); do (cd $$d; $(MAKE) clean ); done
force_look :


That works, but is a little ugly. They define $DIRS but only use it for the install and clean targets; a manual ‘cd bar ; cd baz’ is still required to build the libs. That’s unnecessary repetition. And the fake force_look target is definitely hacky.

It could be cleaned up a little. Here is an attempt at refinement. It assumes that the makefiles in bar/ and baz/ will properly build libbar.a and libbaz.a.


DIRS    = bar baz

all: $(DIRS)
$(DIRS): force_look
        for d in $@ ; do (cd $$d ; make ) ; done
install :
        -for d in $(DIRS) ; do (cd $$d; $(MAKE) install ); done
clean :
        -for d in $(DIRS); do (cd $$d; $(MAKE) clean ); done
force_look :


The second attempt is much shorter and avoids repeating ourselves. We just need to specify the subdirectories, and issue make appropriately, depending on the action we want.

However, this still has a problem; the phony target section of the manual says it’s fragile, since you can’t detect errors in your submakes. Plus, typing out that loop several times just feels wrong. And we still have that ugly force_look target.

Here’s my final attempt:


DIRS = bar baz

$(DIRS):
	$(MAKE) -C $@
all: $(DIRS)
install: MAKE = make install
install: $(DIRS)
clean: MAKE = make clean
clean: $(DIRS)
.PHONY: clean install $(DIRS)


It’s 8 lines of real work vs 9 lines in the 2nd example and 11 lines in the initial attempt. It feels a lot cleaner than typing out the for loop several times. And using the -C option to properly recurse and invoke make, we can detect errors in our sub-makes.

I’m not a huge fan of redefining $(MAKE) so if anyone out there has better suggestions, please do let me know.
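For what it’s worth, here is one sketch that avoids redefining $(MAKE); it assumes each subdirectory’s Makefile defines all, install, and clean targets, and it passes the current goal down via $@ while still stopping on the first failed sub-make:

```make
DIRS    = bar baz
TARGETS = all install clean

# One rule serves every goal: $@ is whichever goal was requested,
# and '|| exit 1' aborts the loop on the first failing sub-make.
$(TARGETS):
	for d in $(DIRS); do $(MAKE) -C $$d $@ || exit 1; done

.PHONY: $(TARGETS)
```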

Read more

of course
mom and dad love rick steves

Recently, I performed some over-the-phone tech support to get my Dad going on Ubuntu, and thought sharing that experience might be interesting.

A little bit of background: Dad is 62 years old, technical but not necessarily computer savvy (chemical engineer), has always used Windows, and English is his second language. He was able to purchase a laptop hard drive from Newegg and install it himself, and was able to download the Lucid ISO and burn it in Windows on his own with a minimum of instruction from me.

Kudos to the folks who wrote the Ubuntu download page especially the section that has screenshots of how to burn an ISO in Windows. Dad’s first attempt went poorly, but then he went back and actually followed the instructions and his second attempt was successful.

Up until this point, I was simply sending him emails with pointers on how to get going, but then he ran into a little problem with his wireless, so he called me on the phone.

I googled his [older] computer model and discovered to my dismay that it had a Broadcom wifi chip in it, which meant messing around with proprietary firmware.

I had him go to System -> Administration -> Hardware Drivers but for some odd reason, jockey (?) simply said “no proprietary drivers are in use on this system”. There was no way to ask it to scan the system and guess whether a proprietary driver might be needed or not.

I don’t know how normal people would have resolved this issue, but in my case, I told him how to open the terminal prompt, and then I sent him an email of some apt-get commands to copy and paste. After he’d installed bcmwl-kernel-source, jockey detected a proprietary driver and said it was enabled.

At this point, the story should have ended, but he still wasn’t seeing any wireless APs in the gnome-nm applet. Half an hour later, I guessed that the physical rfkill switch for his wireless radio was in the “off” position; once he moved it to “on”, life was good again.

Some other observations:

  • he couldn’t figure out how to reboot the machine. I guess the icon wasn’t obvious to him.
  • after we installed the proprietary broadcom firmware package, upon reboot, there was a little text message in a bubble that said “In order to use your computer properly, you need to…” and then the message disappeared before he had a chance to read it fully. I think it was a dialogue from jockey, but the point is that he didn’t have time to read whatever it was before it disappeared.
  • upon re-opening jockey, the big message about “restricted” drivers, vendor support only, etc. caused him great concern before I told him just to ignore it.
  • once wireless was working, he opened Firefox. Then, he wanted to maximize the window, but couldn’t figure out how to do that. We figured out he wasn’t looking in the correct location for the button, but once he found the controls, it was obvious to click the “up” arrow instead of the “down” arrow.
  • at one point, he wanted me to just remote control his computer. He’s familiar with IT folks at his company using NetMeeting to remote-control his machine. We ship a “remote desktop” option, but it only works if your machines are on the same subnet. I ended up having him install TeamViewer, which worked quite well. Since many of us support our less savvy friends and family, I think having a remote control option that works across the internet at large would be a killer feature.
  • he had already played around with OpenOffice before this support call, and had a question lined up for me — he saw the nice PDF export icon in the Writer toolbar, and wanted to know how he could pay money to enable that feature. When I told him that OO.o could indeed export as PDF for free, he was shocked and ecstatic. I think folks on the other side of the chasm are quite concerned about creating PDFs, and the fact that we can do it for free is another killer feature. Perhaps it could be highlighted as one of the callouts during the installation process.
  • he had also already discovered our version of Minesweeper but scoffed at how easy it was. He wanted to know how to get the advanced version. In gnomine’s preferences, changing the size of the grid is labeled “Field Size” with “Small”, “Medium”, “Large”, and “Custom” as the options. I told him to select “Large”, and he didn’t trust me until he actually saw the size of the grid increase. My takeaway is that this wording is poor, and could be made far more friendly, especially for folks looking for familiarity coming from a Windows environment.

Those are all the notes I managed to scribble down over the course of our phone call.

I guess there are bugs, papercuts, and wishlist features in there, but I’ll hold off on filing Launchpad bugs unless others think it would actually be useful.

Read more

2^H3000 pushups

We here at Canonical have been put on notice by our CEO Jane Silber to do 2000 pushups (per person, cumulative) for the month of October.

It’s been nearly 3 years [since the last challenge], and I think everyone should have recovered from that by now. Which means it’s time to do it again. Therefore I’m designating October 2010 as the month for the second triennial [2000] Push Up Challenge. I am already envisioning all of the UDS participants doing them between sessions.

One of the lovely quirks of working for a small company.

Of course, I’m in. And although you only need to average about 65 pushups a day to hit the challenge, 100 a day isn’t really that hard. So I’m going for 3000.

Feel free to play along at home.

Read more

wagging the warthog

  Serengeti pumbaa (yes, I’m recycling photos…)

I joined Canonical in April, as my loyal blog readers (best looking people on the planet!) know, and immediately, I was drowning in the deep end. I left the relatively safe and sane sandbox of the kernel and tried to choke down two huge pills at once, one called “userspace” and the other called “packaging”.

The medicine was worth it. In fact, it was a large reason why I wanted to join Canonical in the first place: to gain breadth across the entire software stack and learn — really learn — how an entire computer works.

The other large reason I joined Canonical was because I wanted to contribute more to free software, not less.

And that’s why it hurts to see my company constantly getting sniped at for not contributing back to the greater good of the free software ecosystem. The most commonly voiced complaint is that the amount of code Canonical contributes back to the ecosystem is, percentage-wise, very small[1].

Today, I’d like to build a little bit on what my colleague Colin King does and talk about what I do[2].

One of Colin’s salient points:

Because Canonical has a good relationship with many desktop hardware vendors, I have access to the BIOS engineers who are actually writing this code. I can directly report my findings and suggested fixes to them, and then they re-work the BIOS. The end result is a PC that is sold with a BIOS that works correctly with Linux, and not just Ubuntu – all distributions benefit from this work.

You may not see my fixes appear as commits in the Linux kernel because much of my work is silent and behind the scenes – fixing firmware that makes Linux work better.

It’s hard to explain how important the above paragraphs are. In the bad old days, the companies who made your laptop and your desktop designed them for Windows only, and didn’t care at all about Linux. This meant that Linux was always going to be a second-class citizen, having to guess at how Windows did something[3], and then trying its best to re-implement it, getting it mostly right, but often subtly wrong in ways that lead to frustration.

In our brave new world, there’s a growing consumer demand for an alternative to Windows. And most of the time, that alternative is the Ubuntu flavor of Linux. The consumer hardware vendors want Ubuntu because their customers — real human beings — want computers with Ubuntu. And those human beings want Ubuntu because Ubuntu focuses on something that the other large distributions don’t care about as much as we do[4]: namely, the fit, finish, polish, and usability of Linux on the desktop[5].

Of course, if a hardware vendor is actually going to sell a machine with Ubuntu, they’re going to want to make sure it is a high quality product, which means lots of testing. And so, major consumer hardware vendors are starting[6] to test with Linux before they ship a product, rather than leaving Linux users out in the cold after the fact, guessing at how to get their machines working with their operating system of choice.

The fact that hardware vendors are choosing to partner with Canonical a) is a direct payoff of focusing on the Linux desktop and b) puts Canonical in a unique position to make huge contributions on behalf of the free software ecosystem that never appear in a git repository[7].

Enough about Colin’s work. In my own short time with Canonical, I’ve become the lead engineer on an embedded Ubuntu project, with responsibilities ranging from customizing the entire software stack from debian-installer up through Chromium as well as providing technical consulting for our customer on a wide variety of topics.

One little adventure we shared recently was the selection of a USB wifi part for their product. They asked us which chip they should use, and we were delighted that they asked our opinion. Naturally, we recommended the solution that was easiest for us to support, which meant a chip with good, open drivers and active upstream kernel development.

Luckily, our customer also understood the value of embracing upstream and was willing to pay for a slightly more expensive but much better supported chip, rather than the cheapest, poorly supported one from a company that has an adversarial relationship with the upstream kernel community.

I’m not going to lie: helping to influence this purchasing decision, and thereby directly financially supporting a chip company that has embraced the “upstream first” model of kernel development, is probably one of the best things I’ve done for Linux, more so than the 192 kernel patches I’ve written.

Of course, I want to make upstream code contributions too. I’m a software developer, after all. And that’s why being a part of Canonical’s OEM team is so exciting. We get lots of exposure to what consumer hardware vendors care about, which is a direct measure of what actual consumers care about, which means we get the chance to scratch big, community itches. By and large, the vendors’ requirements share common themes: boot faster, run cooler, save battery life, use less disk space. These are generic problems with generalizable solutions, and it’s my intention to push as many as possible upstream in order to benefit the entire free software ecosystem.

So, that’s a little bit on my tiny role in the software ecosystem. And don’t just take my word for it. Check out some of the things that other Canonical employees are working on.

1: The complaints tend to focus on two major chunks of the ecosystem: the kernel and the GNOME core. If I may be permitted to torture a car analogy, the kernel might be the engine, and GNOME would be the body. (return)

2: There’s a lovely bit of self-referentiality here that mirrors our actual work relationship: Colin provides a foundation for my software project; why not also leverage my blog entries off his? ;-) (return)

3: Where “something” might be suspend. Or resume. Or wifi. Or 3D graphics. You know, minor stuff like that. (return)

4: And for good reason too. Most of the money in Linux these days is in servers. So it makes perfect sense to put your effort where the money is and avoid unnecessary distractions. (return)

5: Going back to the car analogy, Ubuntu focuses on the trim level. It’s like a Camry vs. a Corolla. Who knows, maybe we’ll even be Lexus one day, but taking the crown from Apple will be hard. (return)

6: We’re certainly not the only Linux distro that does this. Novell has an OEM team as well, getting openSUSE preloaded on machines. But I’m pretty sure Canonical has more partners, and thus influence. Just a guess, no insider knowledge here… (return)

7: Forsooth! The title of my entry explained! (return)

Read more