Canonical Voices

Posts tagged with 'geek'

alex

One of the key design goals we had in mind when we set out to create Ubuntu for phones and tablets (henceforth abbreviated here as Ubuntu Devices, vs. Ubuntu Desktop) was balancing our rich heritage of downstreams, be they community remixes or commercial products, against preventing platform API and UI fragmentation, which is the Android antipattern.

A while back, my boss Victor wrote a blog entry titled Differentiation without fragmentation, which offers a key insight into why fragmentation occurs:

Fragmentation occurs when the software platform fails to provide a supported mechanism to differentiate.

Victor then goes on to describe our Scopes framework, a core design concept that spans the implementation, functional, and visual layers to enable our downstreams to differentiate.

Part of my job is making what Victor says actually come true. As we started thinking through the actual mechanics of how our downstreams would start with a default Ubuntu Device image and end up with a customized version of it, we realized that the nuts and bolts an OEM engineer or enthusiastic community member would have to learn about our platform to make a working device image were too complex.

So we roughed out some ideas, and after several months of iterating in private, I’m pleased to announce the preview of the Ubuntu Savvy phone customization suite, which consists of several parts.

The prototype of Tailor, our tool to manipulate the Savvy source tree and deploy to your phone, is definitely in its early stages. But click on the screenshots below to get a sense of where we are going. We want it to be painless and fun for anyone to make their own version of Ubuntu for devices in an officially supported manner.

[Screenshots: tailor-1, tailor-2]

If you are interested in learning more about our plans, have ideas for ways you’d like to customize your version of Ubuntu, or want to help improve code, tests, or docs, please come to our vUDS session:

Carrier/OEM Customizations on 2014-03-13 from 15:00 to 15:55 UTC.

A final note, Ubuntu Savvy builds upon a lot of work, from the fine folks in UE who helped design a flexible, decoupled image architecture, to the SDK team for providing some nice QML code for us to re-purpose, and to my entire team, both present and emeritus (such as mfisch and sfeole). Thanks to all.

We invite the broader Ubuntu community to help tinker with and tailor Ubuntu.

Upward and onward!

alex

In both git and bzr, each branch you clone is a full copy of the project’s history. Once you have downloaded the source control objects from the remote location (e.g. GitHub or Launchpad), you can use your local copy of the repo to quickly create more local branches.

What if another user has code in their branch that you want to inspect or use?

In git, since it’s common to have many logical branches in the same physical filesystem directory, the operation is a simple extension of the default workflow, where you use “git checkout” to switch between logical branches.

The extension is to add the location of the remote repo to your local repo and download any new objects you don’t already have.

Now you have access to the new branches, and can switch between them with “git checkout”.

In command sequences:

git remote add alice https://github.com/alice/project.git
git remote update
git checkout alice/new_branch

This workflow is great if project.git is very large and you have a slow network. The remote update will only download Alice’s objects that you don’t already have, which should be comparatively minimal.

In bzr, the default workflow is to have a separate physical filesystem directory for each logical branch, as sketched below. It is possible to make different branches share the same physical directory with the ‘colo’ plugin, but my impression is that most people don’t use it and opt for the default.
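For illustration, the default workflow looks something like this (branch names hypothetical); each branch lands in its own directory with a full copy of the history:

bzr branch lp:project trunk                  # full download into ./trunk
bzr branch lp:~alice/project/new_branch      # full download again, into ./new_branch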

Since different bzr branches live in different directories by default, getting them to share source control objects can be trickier, especially when a remote repo is involved.

Again, the use case here is to avoid re-downloading a gigantic remote branch, especially when perhaps 98% of the objects are the same.

I read and re-read the `bzr branch` man page multiple times, wondering if some combination of --stacked, --use-existing-dir, --hardlink, or --bind could do this, but I ended up baffled. After some good clues from the friendly folks in the #bzr irc channel, I found this answer:

Can I take a bazaar branch and make it my main shared repository?

I used a variation of the second (non-accepted) answer:

bzr init-repo ../
bzr reconfigure --use-shared

I was then able to:

cd ..
bzr branch lp:~alice/project/new_branch
cd new_branch

The operation was very fast, as bzr downloaded only the new objects from Alice that I was missing, and that was exactly what I wanted. \o/

###

Additional notes:

  1. When you issue “bzr init-repo ../”, be sure that your parent directory does not already contain a .bzr/ directory, or you might be unhappy
  2. Another method to accomplish something similar during “git clone” is to use the --reference switch (see the example after this list)
  3. I don’t know what would have happened if you just issued “bzr pull lp:~alice/project/new_branch” inside your existing branch, but my intuition tells me “probably not what you want”, as “bzr pull” tends to want to change the working state of your tree with merge commits.
  4. Again, contrast with git, which has a “git fetch” concept that only downloads the remote objects without applying them, leaving it up to the user to decide what to do with them.
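For the curious, here is a sketch of the --reference method from note 2. It assumes you already have a local clone at ~/src/project whose objects can be borrowed (paths hypothetical):

git clone --reference ~/src/project https://github.com/alice/project.git alice-project
# only objects missing from ~/src/project are fetched over the network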

alex

[Map: Mozilla Location Service coverage of San Francisco, before]

When you use GPS on your mobile device, it is almost certainly using some form of assistance to find your location faster. Getting a fix using only pure GPS satellites can take as long as 15 or 20 minutes.

Therefore, modern mobile devices use other ambient wireless signals such as cell towers and wifi access points to speed up your location lookup. There’s lots of technology behind this, but we simplify by calling it all AGPS (assisted GPS).

The thing is, the large databases that contain this ambient wireless information are almost all proprietary. Some data collectors will sell you commercial access to their database. Others, such as Google, provide throttled, restricted, TOS-protected access. No one I am aware of provides access to the raw data at all.

Why are these proprietary databases an issue? Consider — wireless signals such as cell towers and wifi are ambient. They are just part of the environment. Since this information exists in the public domain, it should remain in the public domain, and free for all to access and build upon.

To be clear, collecting this public knowledge, aggregating it, and cleaning it up requires material effort. From a moral standpoint, I do think that if a company or organization goes through the immense effort to collect the data, it is reasonable and legitimate to monetarily profit from it. I have no moral issue there1.

At the same time, this is the type of infrastructural problem that an open source, crowd sourced approach is perfectly designed to fix. Once. And for all of humanity.

Which is why the Mozilla Location Service is such an interesting and important project. Giving API access to their database is fantastic2.

If you look at the map though, you’ll see lots of dark areas. And this is where you can help.

If you’re comfortable with early stage software with rough edges, you should install their Android app and help the project by going around and collecting this ambient wireless data.

Note: the only way to install the app right now is to put your Android phone in developer mode, physically connect a USB cable, and use the ‘adb’ tool to manually install it. Easy if you already know how; not so easy if you don’t. Hopefully they add the app to the Play store soon…
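For reference, once the phone is in developer mode and plugged in, the manual install boils down to something like this (the .apk filename is hypothetical; use whatever build you downloaded):

adb devices                  # confirm the phone shows up
adb install MozStumbler.apk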

The app will upload the collected data to their database, and you can watch the map fill in (updated once a day). If you need more instant gratification, the leaderboard is updated in near realtime.

You might not want to spend time proofreading articles on Wikipedia, but running an app on your Android device and then moving around is pretty darn easy in comparison.

So that’s what I did today — rode my bike around for open source ideals. Here’s the map of my ride in Strava:

[Image: map of my Strava ride]

I think I collected 4000+ data points on that ride. And now the map in San Francisco looks like this:

[Map: Mozilla Location Service coverage of San Francisco, after]

Pretty neat! You can obviously collect data however you like: walking around, driving your car, or taking public transportation.

For more reading:

Happy mapping!

1: well, I might quibble with the vast amount of resources spent to collect this data, repeated across each vendor. None of them are collaborating with each other, so they all have to individually re-visit each GPS coordinate on the planet, which is incredibly wasteful.

2: you can’t download the raw database yet as they’re still working out the legal issues, but the Mozilla organization has a good track record of upholding open access ideals. This is addressed in their FAQ.

alex

terminal

A little something I worked on before the holiday break: figuring out how to make it easy to target Ubuntu Touch if you run OS X.

Michael Hall wrote a blurb about it and the wiki instructions are here.

There are quite a number of dependencies that must be resolved before you can actually write and deploy an Ubuntu Touch app from OS X, but for now, simply installing Ubuntu Touch onto a device is a good start.

Combined with our recently announced dual boot instructions, we’re trying to remove as many barriers to entry as possible.

Happy new year!

alex

I wanted somewhere easy to dump technical notes that weren’t really suitable for this blog. I wanted a static HTML generator type of blog because the place to dump my notes (people.canonical.com) isn’t really set up to run anything complex for a multitude of reasons, such as security.

I also didn’t want to just do it 1990s style and throw up plain ASCII README files (the way I used to), because I envision embedding images and possibly movies in my notes here. At the same time, the closer I can get to a README the better, and that seems to imply markdown.

After a brief fling with blacksmith where absolutely nothing worked because of a magical web 2.0 fix-everything-but-the-zillions-of-pages-of-existing-docs rewrite, I wiped the blood and puke from my mouth and settled on octopress.

Octopress was much better, but it was still a struggle. It’s a strange state of affairs that deploying wordpress on a hosted site is actually *less* difficult than configuring what *should* be a simple static HTML generator. Oh well.

Here are some notes to make life easier for the next person to come along.

Deploying to a subdir, fully explained
One wrinkle of hosting on a shared server using Apache conventions is that your filesystem path for hosting files will probably get rewritten by the web server and displayed differently.

That is:

    unix filesystem path                 =>  address displayed in url bar
    /home/achiang/public_html/technotes  =>  http://people.canonical.com/~achiang/technotes

The subdir deployment docs talk about how to do this, but the only way I could get it to work was by issuing rake set_root_dir[~achiang/technotes] first. So the proper sequence is:

rake set_root_dir[~achiang/technotes]

vi Rakefile	# and change:
	-ssh_user       = "user@domain.com"
	+ssh_user       = "achiang@people.canonical.com"
	-document_root  = "~/website.com/"
	+document_root  = "~/public_html/technotes"

vi _config.yml	# and change:
	-url: http://yoursite.com
	+url: http://people.canonical.com/~achiang/technotes

rake install
rake generate
rake deploy	# assuming you've setup rsync deploy properly

Once you’ve tested this is working, then optionally set rsync_delete = true. But don’t make the same mistake I made and set that option too soon, or else you will delete files you didn’t want to delete.

Finally, once you have this working, the test address for your local machine using the `rake preview` command is http://localhost:4000/~achiang/technotes.

Video tag gotchas
One nice feature of Octopress is the video plugin, which allows embeddable H.264 movies. I discovered that unlike the image tag, which apparently allows local paths to images, the video tag seems to require an actual URL starting with http://.

Therefore:

    {% video /images/movie.mp4 %}	# BROKEN!

However, this works:

    {% video http://people.canonical.com/~achiang/images/movie.mp4 %}

I’ll work up a patch for this at some point.

Misc gotchas
The final thing I tripped over was https://github.com/imathis/octopress/pull/1438.

I’ll update here if upstream takes the patch, but if not, then you’ll want the one-liner in the pull request above.

Summary
After the initial fiddly bits, Octopress is good enough. I can efficiently write technical content using $EDITOR, the output looks modern and stylish, and it all works on a fairly constrained, bog-standard Apache install without opening any security holes in my company’s infrastructure.

alex

The linux.com interview with gregkh includes the following Q&A:

What’s the most amused you’ve ever been by the collaborative development process (flame war, silly code submission, amazing accomplishment)?

I think the most amazing thing is that you never know when you will run into someone you have interacted with through email, in person. A great example of this was one year in the Czech Republic, at a Linux conference. A number of the developers all went to a climbing gym one evening, and I found myself climbing with another kernel developer who worked for a different company, someone whose code I had rejected in the past for various reasons, and then eventually accepted after a number of different iterations. So I’ve always thought after that incident, “always try to be nice in email, you never know when the person on the other side of the email might be holding onto a rope ensuring your safety.”

The other wonderful thing about this process is that it is centered around the individual, not the company they work for. People change jobs all the time, yet, we all still work together, on the same things, and see each other all around the world in different locations, no matter what company we work for.

I was the “other kernel developer” and we were probably talking about Physical PCI slot objects, which took 16 rounds of revision before it was accepted.

The great myth of open source is that it’s a complete meritocracy. While there’s more truth there than not, the fact is that as with any shared human endeavor, the personalities in your community are just as important as the raw intellectual output of that community.

This is not to say Rusty is wrong, but rather a reminder that if you’re both smart and easy to get along with, life is a lot easier.

Or perhaps if you’re a jerk, you should stick to safer sports like golf.

alex

After wandering around for a bit, I’ve settled back in San Francisco on a more or less permanent basis. Part of the moving process was finding an ISP and it seems like Comcast is the best option (for my situation). I signed up for their standard residential service, and remote teleworking continued on quite merrily… except for one tiny wart.

We use Google Plus hangouts quite extensively on my team, including a daily standup whose attendance hovers between 5 and 10 people. The first time I tried a hangout with my new Comcast service, it was unusable: extreme lag everywhere, connection timeouts, and general unhappiness.

I had a strong hunch that I was suffering from bufferbloat, and a quick ping test confirmed it (more on that later). Obviously I wanted to fix the problem, but there is a lot of text to digest for someone who just wants to make the problem go away.

After a bit of irc whingeing and generous help from people smarter than me, here are my bufferbloat notes for the impatient.

background
Bufferbloat is a complex topic, go read the wiki page for excruciating detail.

But the basic conceptual outline is:

  • a too large buffer on your upstream may cause latency for sensitive applications like video chat
  • you must manage your upstream bandwidth to reduce latency (which typically means you intentionally reduce upstream bandwidth)
  • use QoS in your router to globally reduce upstream bandwidth (not for traffic shaping!)

diagnosis
Ensure your internet connection is idle. Then start pinging google.com. Observe the “time” field, which gives you a value in ms. Watch this long enough to get an intuitive feel for what a normal amount of latency on your link is. For me, it hovered consistently around 20ms, with some intermittent spikes. You don’t need to be exact. If the values swing wildly, then you’ve got other problems that need to be fixed first; stop reading this blog and call your ISP.

While the ping is running, visit http://testmy.net/upload and kick off a large upload, say 15MB or more.

If your ping times increase by an order of magnitude and stay there (like mine did to around 300ms), then you have bufferbloat.
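Concretely, the whole test looks something like this (output is illustrative; hostnames and exact numbers will vary):

$ ping google.com
64 bytes from sfo03s01.1e100.net: icmp_seq=1 ttl=53 time=20.4 ms    # idle: ~20ms
64 bytes from sfo03s01.1e100.net: icmp_seq=2 ttl=53 time=19.8 ms
(kick off the large upload at testmy.net in another window)
64 bytes from sfo03s01.1e100.net: icmp_seq=9 ttl=53 time=297.0 ms   # loaded: ~300ms, bufferbloat
64 bytes from sfo03s01.1e100.net: icmp_seq=10 ttl=53 time=305.2 ms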

This isn’t as rigorous as setting up smokeping and making pretty graphs, but trust me, it’s a lot faster and way easier. Thanks to Alex Williamson for this tip.

mitigation
You will need a router that can do QoS.

The easiest solution is to spend $100 and buy a Netgear WNDR3700 which is capable of running CeroWRT. Get that going and presumably you’re done, although I can’t verify it since I am el cheapo.

I didn’t want to spend $100, and I had an old Linksys WRT54GL lying around, so I installed Tomato onto it. (Big thanks to Paul Bame for helping me (remotely!!) recover a semi-bricked router.) Now it’s time to tune QoS.

In the Tomato admin interface, navigate to QoS => Basic Settings. Check the “Enable QoS” box and for the “Default class” dropdown list, change it to “highest”.

Figure out your maximum upload speed. You should be able to obtain this number after a few upload tests at testmy.net that you did in the previous step. Enter your max upload speed into the “Outbound Rate / Limit” => “Max Bandwidth” field. Make sure you use the right units, kbits/s please!

Finally, in the “Highest” QoS setting under Outbound, set your lower and upper bounds. I started with 50% as a lower bound and 60% as an upper bound (for example, with a measured max upload of 1000 kbit/s, that would be 500 and 600 kbit/s).

Put a large fake number in for “Inbound Limit” and change all the settings there to “None”. These settings don’t seem to affect latency.

Click “save” at the bottom of the page — you do not need to reboot your router.

Re-run the google.com ping test plus the large upload at testmy.net. Your ping times under load should remain relatively unchanged vs. an idle line. Congrats, you’ve solved 80% of your bufferbloat problem.

Update (7/29/2012): Thanks to John Taggart for pointing out a more rigorous page on QoS tuning for tomato.

Now you can experiment with increasing the lower and upper bounds of your QoS settings to get more upstream bandwidth. As always, make a change, save, re-run the ping + upload test, and check the results. Remember, the goal is to keep latency under load about equal to what it is on an idle line.

Now your colleagues will thank you for the increased smoothness of your video chats, although remembering to brush your teeth and put pants on is the “last mile” problem I can’t solve for you.

alex


[Photo: San Francisco SantaCon, 2011]

I’m happy to announce that a few packages I’ve been working on over the past year have finally landed in Ubuntu Precise[1].

If you have a 3G USB modem and it currently doesn’t work well (or at all) in Debian or Ubuntu, you should check this list of modems[2]. If it is listed, then you may be a candidate to try an alternative 3G networking stack.

$ sudo apt-get install wader-core

This command will remove ModemManager and install wader-core. The operation should be entirely transparent to you, except that after you reboot, your modem should appear as a connection option in the network manager applet.

Yay!

###
1: naturally, I was a good boy and uploaded the packages to Debian unstable first
2: this list is predominantly composed of Vodafone-branded modems, but there are others in there as well.

Thanks to the Debian python team for mentoring me and to Al Stone and dann frazier for even more mentoring in addition to sponsoring me.

alex

Last January through April, I pretty much fell off the face of the earth, in real life as well as online. For those who asked, I alluded to some long hours at work, but of course couldn’t say much publicly.

Well, we finally launched, and I’m quite proud of all our team accomplished.

Without question, this was the hardest project I’ve taken on in my career to date. But I was part of a great team, and we pulled together to ship.

We’re bringing free, open software to the world. This is the mission I signed up for.

Some links for your reading pleasure, take with a grain of salt:

alex

As of this writing, it is a little painful to use pbuilder to create a Debian chroot on an Ubuntu host due to LP: #599695.

The easiest workaround I could figure out was the following:

$ cat ~/.pbuilderrc-debian
COMPONENTS="main contrib non-free"
DEBOOTSTRAPOPTS=("${DEBOOTSTRAPOPTS[@]}" "--keyring=/usr/share/keyrings/debian-archive-keyring.gpg")

And then you can issue:

sudo pbuilder create --basetgz /var/cache/pbuilder/sid.tgz --distribution sid --mirror http://mirrors.kernel.org/debian --configfile ~/.pbuilderrc-debian
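Once that completes, a quick sanity check is to log into the freshly built chroot, using the same basetgz path as above:

sudo pbuilder --login --basetgz /var/cache/pbuilder/sid.tgz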

The better way to fix this, of course, would be to fix the above bug. But this works for now.

alex

It is a fact of life that everyone receives more email than they can handle.

It is also a fact that email is a skill, and there are varying levels of proficiency out there.

So, it is only a matter of time before you find yourself on the annoying end of an email thread gone awry. Perhaps it is a discussion on the wrong mailing list, or perhaps it is the infamous 1 grillion people in the To: or Cc: fields problem.

Before long, a “take me off this list” / “stop replying to all” storm ensues, and then something horrible like Facebook gets invented to “solve” this “problem”.

Of course mail filters can be deployed, shielding you from the idiocy. But what if you want to be more proactive? Is there a way to stop the insanity without having to hax0r into the mail server and just start BOFH‘ing luser accounts?

Yes, there is an easy solution that works most (but not all) of the time.

Put all the unintended recipients in the Bcc: field. Put the correct recipients in the To: field.

In the case of discussion on the wrong mailing list, this is easy; just put the correct list in the To: field. Include a note in the mail body, such as “Redirecting to foo list, which is more appropriate.” Respondents will then typically automatically respond to the correct list.

In the case of “too many Cc:s”, there’s no easy answer. You could move all the Cc: recipients to Bcc:, and then put something like none@none.invalid in the To: address. You will get a single bounce, but then so will everyone else who attempts to respond to you. This trick works most of the time because the people who tend to cause the problem also tend to be lazy and just respond to the last mail received; they can’t spam everyone else, because those addresses are obfuscated via the Bcc:. If you feel brave, you could socially engineer the recipients by writing something inflammatory, to entice them to respond to you rather than to other mails in the thread, which will then result in a bounce.
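As a sketch, the headers for the wrong-mailing-list case might look like this (addresses hypothetical):

To: foo-list@example.org
Bcc: wrong-list@example.org
Subject: Re: [wrong-list] original subject

Redirecting to the foo list, which is more appropriate.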

Hope this helps.

[nb, .invalid is actually a reserved domain, read rfc2606 for more details.]

alex

bipcat

Like many open source folks, I consider irc a crucial piece of every day infrastructure. I use bip as a proxy to help me keep up with conversations that occurred while I was away. The next time I connect with my client (xchat, in this case), I get a playback of the old conversations, and my client does the right thing, highlighting tabs if my name was mentioned, etc.

Sometimes though, I want to read old logs, either to remind myself of a conversation I had with someone, or to dig out a URL, or whatever. bip keeps these logs around, but they can be annoying to read.

Here’s an example of how bip stores the logs:

achiang@complete:~/.bip$ ls -R
.:
bip.conf  bip.pid  logs

./logs:
bip.log  canonical  freenode  oftc  sekrit

./logs/freenode:
2011-03  2011-04  2011-05  2011-06  2011-07

./logs/freenode/2011-03:
achiang.30.log     #coherellc.31.log      #ubuntu-devel.31.log
chanserv.30.log    #ubuntu-motu.30.log
chanserv.31.log    #ubuntu-motu.31.log
#coherellc.30.log  #ubuntu-devel.30.log

They’re just free form text, which is good, because you can then use normal tools like grep on them. Unfortunately, they’re also full of long, noisy lines that look like:

achiang@complete:~/.bip$ head ./logs/freenode/2011-03/#ubuntu-devel.31.log 
31-03-2011 00:01:48 -!- zeeshan313!~zeeshan@119.153.35.115 has joined #ubuntu-devel
31-03-2011 00:03:05 -!- T0rCh__!~T0rCh_rao@187.104.99.84 has quit [Remote host closed the connection]
31-03-2011 00:06:23 -!- holstein!~holstein@unaffiliated/mikeh789 has quit [Ping timeout: 240 seconds]
31-03-2011 00:08:23 -!- m_3!~m_3@cpe-72-179-48-240.austin.res.rr.com has quit [Ping timeout: 276 seconds]
31-03-2011 00:08:48 -!- abhinav-!~abhinav@122.161.12.85 has joined #ubuntu-devel
31-03-2011 00:12:02 -!- holstein!~holstein@71-90-232-189.dhcp.gnvl.sc.charter.com has joined #ubuntu-devel
31-03-2011 00:12:03 -!- holstein!~holstein@71-90-232-189.dhcp.gnvl.sc.charter.com has quit [Changing host]
31-03-2011 00:12:03 -!- holstein!~holstein@unaffiliated/mikeh789 has joined #ubuntu-devel
31-03-2011 00:15:35 -!- andreasn!~andreas@117.192.217.128 has joined #ubuntu-devel
31-03-2011 00:20:32 -!- TeTeT!~spindler@178-26-84-220-dynip.superkabel.de has joined #ubuntu-devel

So just viewing them in an editor can be annoying.

And that was a rather long intro to describe one of the world’s most trivial scripts (which has at least one known bug :-/ ). Anyway, I call the snippet below “bipcat”:

#!/bin/sh
# bipcat: strip join/part/quit noise and hostmasks out of bip logs

cat "$1"	| grep -v "has quit" 		\
	| grep -v "is now known as" 	\
	| grep -v "has joined" 		\
	| grep -v "has left" 		\
	| sed 's/!.*:/:/' 		\
	| cut -f 2- -d' '

And now, you can get much more sensible output:

achiang@complete:~/.bip$ bipcat ./logs/freenode/2011-03/#ubuntu-devel.31.log | head
00:54:45 < dholbach: good morning
01:05:16 < pitti: Good morning
01:58:29 < pitti: should bug 685682 be closed with the new fglrx that we landed yesterday?
01:58:32 < ubottu://launchpad.net/bugs/685682
01:59:08 < didrocks: it seems that cnd still have that issue with the driver and workarounded compiz
01:59:41 < didrocks: anyway, there is still a need for a compiz upload which will come with other fixes (probably Monday)
01:59:48 < pitti: ah, thanks
02:00:09 < tseliot: the fix should be available in the next upload of compiz (it's already available in a daily PPA)
02:00:28 < pitti: so I guess for now the fglrx tasks should be closed then?
02:00:30 < didrocks: yeah, but as told, it's not working on cnd's machine, I asked him to check with you and jay

Much nicer!

[The bug is that the sed line does a greedy match, so it replaces everything up to the last ':', which is clearly not the right thing to do if someone actually typed a ':' in their message. Suggestions for improvement welcome.]
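One possible fix, offered as a sketch: stop the match at the first colon after the ‘!’, which is the end of the hostmask. This assumes hostmasks never contain a ‘:’, so IPv6 hostmasks would still confuse it:

sed 's/![^:]*:/:/'	# replaces "!user@host:" up to the *first* colon only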

alex

There are certain situations where one might want to generate a quick .deb package that just installs things onto the target system without doing anything fancy, like compiling source files.

The classic example is if you are in charge of delivering software to a group of machines, but do not have source code to the software. Maybe you just have a pre-compiled library you want installed somewhere.

You could ask your users to:

$ sudo cp mylib.so /usr/lib

But then what if you need to update mylib.so somehow? I can see the nightmare a-comin’ from all the way over here.

So then you think to yourself, gee, I have a very nice package management system; why don’t I use it? Which means, you’re going to try and teach yourself the bare minimum debian packaging skills needed to do such a thing, to which I say, good luck.

Perhaps there are easy examples out there [and if so, let me know and I'll update this post]; in the meantime, this is the bare minimum that I could come up with.

Hope it helps.

The directory layout follows:

achiang@aspen:~/Projects/example$ ls -RF
.:
debian/  usr/

./debian:
changelog  compat  control*  copyright  install  rules*  source/

./debian/source:
format

./usr:
lib/

./usr/lib:
mylib.so

Of course, we have the debian/ directory, which is where the magic happens. But the other top-level directory, usr/ in our case, is what we want to install on the target system.

The easiest thing to do is to re-create your target system’s filesystem layout as a top-level directory, and then put your files in the appropriate spot. Here, we want to install mylib.so into /usr/lib on the target system, so you can see that I’ve recreated it above.

If you also wanted to install, say, an upstart script, you’d also create:

$ ls -RF
.:
debian/  etc/  usr/

./etc:
init/

./etc/init:
myjob.conf

Ok, next let’s look at the stuff in debian/:

achiang@aspen:~/Projects/example/debian$ cat rules 
#!/usr/bin/make -f
%:
	dh $@

That’s pretty easy. How about the control file?


achiang@aspen:~/Projects/example/debian$ cat control 
Source: example
Section: libs
Priority: extra
Maintainer: Alex Chiang 
Build-Depends: debhelper (>= 7)
Standards-Version: 3.9.1

Package: example
Architecture: any
Depends: ${misc:Depends}
Description: A skeleton installation deb
 An example, minimal package that installs files into the filesystem, without
 any processing from the packaging system.

Ok, one more interesting file, the ‘install’ file:

achiang@aspen:~/Projects/example/debian$ cat install 
usr/

The usr/ entry in ‘install’ maps to the usr/ directory you created above. Again, if you also wanted to install something into etc/, you’d add the corresponding line to ‘install’, as shown below. Extend this concept to any other directories/files you’ve created.
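For example, with the upstart layout from earlier, ‘install’ would become:

achiang@aspen:~/Projects/example/debian$ cat install 
usr/
etc/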

The rest of the files are more or less boilerplate. I’ll display them for completeness’ sake:

achiang@aspen:~/Projects/example/debian$ cat compat 
7
achiang@aspen:~/Projects/example/debian$ cat copyright 
This package was debianized by Alex Chiang  on
Mon Jul 11 14:30:17 MDT 2011

Copyright:

    Copyright (C) 2011 Alex Chiang

License:

    This program is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License version 3, as
    published by the Free Software Foundation.
 
    This program is distributed in the hope that it will be useful, but
    WITHOUT ANY WARRANTY; without even the implied warranties of
    MERCHANTABILITY, SATISFACTORY QUALITY or FITNESS FOR A PARTICULAR
    PURPOSE.  See the applicable version of the GNU Lesser General Public
    License for more details.
 
On Debian systems, the complete text of the GNU General Public License
can be found in `/usr/share/common-licenses/GPL-3'
achiang@aspen:~/Projects/example/debian$ cat source/format 
3.0 (native)

So, there you have it: a pretty trivial example of how to package a binary inside a Debian source package. Of course, you could do this with text files, PDFs, whatever.
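To build and install the result, something like this should work from the top-level directory (a sketch: -us -uc skip signing, -b builds binary-only):

achiang@aspen:~/Projects/example$ dpkg-buildpackage -us -uc -b
achiang@aspen:~/Projects/example$ sudo dpkg -i ../example_*.deb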

Feedback appreciated.

alex

Warning: I just learned how to spell python-twisted today, so it’s entirely plausible I’m advocating bonghits below. If you have real answers, especially for developers, please let me know.

Let’s say you’re a sysadmin deploying a twisted app, or a developer writing one. It’s likely the app is writing a bunch of logs somewhere. The problem is that the default logging settings for twistd write an unbounded number of log files, so over time, it’s possible to fill up your filesystem with old logs.

developers
As a lazy developer (or more likely lazy bug fixer who is just trying to fix something and move on with life without having to learn yet another huge framework because let’s face it, you have gtk, glib, firefox, and kernel bugs that need fixing), you google and find the quick basic help on logging in twisted. You read:

If you are using twistd to run your daemon, it will take care of calling startLogging for you, and will also rotate log files.

So you think “top banana”, and merrily invoke your program with:

from twisted.python.release import sh
sh("/usr/bin/twistd --nodaemon --logfile=%s" % "myapp.log")

And you’re done and it’s time to go to the pub. Not so fast, Speedracer. Let’s take a look at the defaults (I checked twisted 10.0 and 11.0):

class LogFile(BaseLogFile):
    """
    A log file that can be rotated.

    A rotateLength of None disables automatic log rotation.
    """
    def __init__(self, name, directory, rotateLength=1000000, defaultMode=None,
                 maxRotatedFiles=None):
...
class DailyLogFile(BaseLogFile):
    """A log file that is rotated daily (at or after midnight localtime)
    """

See that maxRotatedFiles=None? It means you will eventually hit -ENOSPC, and pandas will be sad.

A little more digging, and reading through the twisted application framework howto, you get the hint on how to modify the default logging behavior. The example says:

The logging behavior can be customized through an API accessible from .tac files. The ILogObserver component can be set on an Application in order to customize the default log observer that twistd will use.

Ok, so you look at the example, and then you say to yourself, that’s pretty good, but Sidnei da Silva could improve it, and then you bug Sidnei for a few hours, asking him rudimentary questions about whether the whitespace in Python really matters or not, and then he politely gives you the following rune (all the ugliness below comes from me, anything elegant comes from Sidnei):

from twisted.application.service import Application
from twisted.python import log
from twisted.python.logfile import DailyLogFile

application = Application("foo")
log.startLogging(DailyLogFile(maxRotatedFiles=1).fromFullPath("/var/log/myapp.log"))

And that kinda works without any fancy observers or anything, but then you realize that your codebase has the following pattern repeated across many hundreds of source files:

from twisted.python import log
...
log.msg("unique message 1 out of 47890478932423987234")

[This is the point where I just gave up trying to learn twisted in 45 minutes or less, and also wanted to stress eat a cheeseburger that is 180% beef, 120% bacon while forgetting everything I thought I knew about how percentages worked.]

You think to yourself, “this is Linux, and I wouldn’t be using this OS unless my stubbornness rating was +8”, so you go bash around on google for a while before remembering that Linux already has a tool to take care of all this for you, called logrotate. So you stop trying to wrestle with twisted and beg your downstream packagers to fix it for you. Example below.

sysadmins
Luckily, twisted’s built-in log rotation mechanism matches up with what logrotate expects, namely, that when it rotates log files, it will do things like:

foo.log -> foo.log.1
foo.log.1 -> foo.log.2

So, all you have to do is teach logrotate about your application’s logfiles, and you are done. The beauty is, all the standard logrotate commands Just Work™.

On debian and Ubuntu, you can drop a new file into /etc/logrotate.d/. Not sure how the other distros package it.

Here’s an example file:

# cat /etc/logrotate.d/myapp
/var/log/myapp.log {
        rotate 0
        nocreate
        copytruncate
        missingok
}

This says:

  1. Pay attention to this file, /var/log/myapp.log. twisted will try to rotate it to /var/log/myapp.log.1, myapp.log.2, etc.
  2. We don’t want many log files around. In fact, we will only keep myapp.log and myapp.log.1. Anything older gets auto-deleted.
  3. Don’t try to create myapp.log. Let myapp create it on its own.
  4. Important! Don’t rename myapp.log to myapp.log.0. Rather, just copy it to myapp.log.1, and then truncate myapp.log so it’s empty. This is good because myapp may assume its file descriptor is still valid and keep writing to it; without this option, myapp could lose data by writing to a log file that has been renamed out from under it. It is a little dangerous in that you might lose some logged data in the window between logrotate copying the file and truncating the original. Caveat administrator.
  5. Dear logrotate, please do not puke if you do not find myapp.log; it’s ok, I’ll give you a hug.

And now, twisted and logrotate are playing nicely with each other. On Ubuntu, logrotate runs as a daily cronjob, so twisted shouldn’t get too far ahead by creating too many extra log files. If it does, you can create an hourly cronjob or something even more special, but the real answer is probably to discover why myapp is creating more than 1MB of logs so quickly.
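If you do end up needing that hourly cronjob, a minimal sketch on Debian/Ubuntu is to drop an executable script into /etc/cron.hourly/ (filename hypothetical):

#!/bin/sh
# /etc/cron.hourly/logrotate-myapp: force rotation per the myapp config
exec /usr/sbin/logrotate -f /etc/logrotate.d/myapp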

One last tip: you can experiment with the behaviors above by playing with twisted and logrotate at the command line, without needing a reboot.

To force twisted to rotate its log files, send SIGUSR1 to the twistd process:

# kill -SIGUSR1 <pid of twistd>

To check what logrotate will then do:

# logrotate -f -d /etc/logrotate.d/myapp

Beware: the above doesn’t actually do anything, it just pretends. To make it actually do stuff, but with verbose output:

# logrotate -f -v /etc/logrotate.d/myapp

And that, friends, is a thousand words on how to solve what should be a pretty simple problem but turns out to be way harder than necessary because you don’t know yet another framework. Good luck. As always, I recommend moAR bacon.

alex

coding freudians

/*  This file is part of the KDE project.

    Copyright (C) 2009 Nokia Corporation and/or its subsidiary(-ies).
    [...]
*/

[...]

#define ABOUT_TO_FINNISH_TIME 2000

Innocent English-as-a-second-language typo? Or wry self-referential pun? You decide.

alex

copyright matters


[embedded video; a direct link was provided for folks who couldn’t see it]

The entire world of libre software is built upon the idea that copyright laws must be respected.

And just as vigorously as our community defends the GPL, so too must we defend the rights of other makers whose works are copyrighted.

We are all the intellectual children of Stallman.

[nb, there is of course, a lot of nuance not captured in the black and white declaratives I wrote above; in the US, our IP laws aren't perfect, but our first instinct should be to first patch the bugs, and only much later, tend towards civil disobedience.]

alex

Setting up a password protected channel on freenode involves wrestling with ChanServ, and if you don’t have sacrificial goats handy or an advanced degree in 8th level rune reading, you may be scratching all the hair off your head as you ponder the help that appears during any reasonable google search.

You read the page and think, “ah, I must say /msg ChanServ flags #channel +k mypassword”, but you are wrong, because you are missing a username.

Next, you think, “well, I better set this for all users, so I’ll use a wildcard: /msg ChanServ flags #channel *!*@* +k mypassword” but that just doesn’t work.

More googling doesn’t help.

Frustrated, you start feeling empathy with Descartes, wondering if you are the subject of some cruel demon torturing you and tricking you into perceiving a horrible reality where nothing makes sense.

It’s just irc, you tell yourself. ChanServ can’t be that complicated, you tell yourself.

Maybe you’re a stress eater, so you inhale a double-down.

Vidi

You enter an electric grease-laden food coma, and your spirit animal, a grue, leads you down a twisty maze of passages, all alike, whereupon you enter the Chamber of Knowledge and unfurl the Scroll of Explanation from the Rod of Understanding, and you read it and it says:

/msg ChanServ set #channel mlock +k mypassword

whew.
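Note that all of this assumes the channel is already registered to you; if it isn’t, that step comes first (from memory, so treat the exact syntax as an assumption):

/msg ChanServ register #channel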

alex

Today’s simple Debian packaging task was adding a little C program and Makefile to an existing package that didn’t already compile anything. Through the magic of debian/rules, all I had to do was write a top-level Makefile in the package directory, which would simply recurse into the subdirectories and invoke make there.

Assume a directory structure like so:
 

$ pwd
foo
$ ls -F
bar/ baz/

 

The goal is to write a top-level Makefile in foo/ that will recurse into bar/ and baz/ and call the makefiles there.

There are lots of examples on how to write such a Makefile, but I thought they were kinda kludgey. For example, this page has this solution:

 

DIRS    = bar baz
OBJLIBS = libbar.a libbaz.a

all: $(OBJLIBS)
libbar.a: force_look
        cd bar; $(MAKE)
libbaz.a : force_look
        cd baz; $(MAKE)
install :
        -for d in $(DIRS); do (cd $$d; $(MAKE) install ); done
clean :
        -for d in $(DIRS); do (cd $$d; $(MAKE) clean ); done
force_look :
        true

 

That works, but it is a little ugly. They define $DIRS but only use it for the install and clean targets; a manual ‘cd bar; cd baz’ is still required to build the libs. That’s unnecessary repetition. And the fake force_look target is definitely hacky.

It could be cleaned up a little. Here is an attempt at refinement. It assumes that the makefiles in bar/ and baz/ will properly build libbar.a and libbaz.a.

 

DIRS    = bar baz

all: $(DIRS)
$(DIRS): force_look
        for d in $@ ; do (cd $$d ; make ) ; done
install:
        -for d in $(DIRS) ; do (cd $$d; $(MAKE) install ); done
clean :
        -for d in $(DIRS); do (cd $$d; $(MAKE) clean ); done
force_look :
        true

 

The second attempt is much shorter and avoids repeating ourselves. We just need to specify the subdirectories and issue make appropriately, depending on the action we want.

However, this still has a problem: the phony target section of the manual says this approach is fragile, since you can’t detect errors in your sub-makes. Plus, typing out that loop several times just feels wrong. And we still have that ugly force_look target.

Here’s my final attempt:

 

DIRS = bar baz

$(DIRS):
	$(MAKE) -C $@
all: $(DIRS)
install: MAKE = make install
install: $(DIRS)
clean: MAKE = make clean
clean: $(DIRS)
.PHONY: clean install $(DIRS)

 

It’s 8 lines of real work vs. 9 lines in the second example and 11 lines in the initial attempt. It feels a lot cleaner than typing out the for loop several times. And by using the -C option to properly recurse and invoke make, we can detect errors in our sub-makes.

I’m not a huge fan of redefining $(MAKE) so if anyone out there has better suggestions, please do let me know.
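For what it’s worth, here is one alternative sketch that avoids redefining $(MAKE), by naming the actions as targets and passing the current goal down to each sub-make. It assumes each subdirectory Makefile provides all, install, and clean targets:

DIRS = bar baz

all install clean:
	for d in $(DIRS); do $(MAKE) -C $$d $@ || exit 1; done

.PHONY: all install clean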

alex

[Photo: of course, mom and dad love Rick Steves]

Recently, I performed some over-the-phone tech support to get my Dad going on Ubuntu, and thought sharing that experience might be interesting.

A little bit of background: Dad is 62 years old, technical but not necessarily computer savvy (chemical engineer), has always used Windows, and English is his second language. He was able to purchase a laptop hard drive from Newegg and install it himself, and was able to download the Lucid ISO and burn it in Windows on his own with a minimum of instruction from me.

Kudos to the folks who wrote the Ubuntu download page especially the section that has screenshots of how to burn an ISO in Windows. Dad’s first attempt went poorly, but then he went back and actually followed the instructions and his second attempt was successful.

Up until this point, I was simply sending him emails with pointers on how to get going, but then he ran into a little problem with his wireless, so he called me on the phone.

I googled his [older] computer model and discovered to my dismay that it had a Broadcom wifi chip in it, which meant messing around with proprietary firmware.

I had him go to System -> Administration -> Hardware Drivers, but for some odd reason, jockey (?) simply said “no proprietary drivers are in use on this system”. There was no way to ask it to scan the system and guess whether a proprietary driver might be needed.

I don’t know how normal people would have resolved this issue, but in my case, I told him how to open the terminal prompt, and then I sent him an email of some apt-get commands to copy and paste. After he’d installed bcmwl-kernel-source, jockey detected a proprietary driver and said it was enabled.

At this point, the story should have ended, but he still wasn’t seeing any wireless APs in the gnome-nm applet. Half an hour later, I guessed that the physical rfkill switch for his wireless radio was in the “off” position; once he moved it to “on”, life was good again.

Some other observations:

  • he couldn’t figure out how to reboot the machine. I guess the icon wasn’t obvious to him.
  • after we installed the proprietary Broadcom firmware package, upon reboot there was a little text message in a bubble that said “In order to use your computer properly, you need to…”, and then the message disappeared before he had a chance to read it fully. I think it was a dialog from jockey, but the point is that he didn’t have time to read it before it vanished.
  • upon re-opening jockey, the big message about “restricted” drivers, vendor support only, etc. caused him great concern before I told him just to ignore it.
  • once wireless was working, he opened Firefox. Then, he wanted to maximize the window, but couldn’t figure out how to do that. We figured out he wasn’t looking in the correct location for the button, but once he found the controls, it was obvious to click the “up” arrow instead of the “down” arrow.
  • at one point, he wanted me to just remote control his computer. He’s familiar with IT folks at his company using NetMeeting to remote his machine. We ship a “remote desktop” option, but it only works if your machines are on the same subnet. I ended up having him install TeamViewer, which worked quite well. Since many of us support our less savvy friends and family, I think having a remote control option that works across the internet at large would be a killer feature.
  • he had already played around with OpenOffice before this support call, and had a question lined up for me — he saw the nice PDF export icon in the Writer toolbar, and wanted to know how he could pay money to enable that feature. When I told him that OO.o could indeed export as PDF for free, he was shocked and ecstatic. I think folks on the other side of the chasm are quite concerned about creating PDFs, and the fact that we can do it for free is another killer feature. Perhaps it could be highlighted as one of the callouts during the installation process.
  • he had also already discovered our version of Minesweeper but scoffed at how easy it was. He wanted to know how to get the advanced version. In gnomine’s preferences, changing the size of the grid is labeled “Field Size” with “Small”, “Medium”, “Large”, and “Custom” as the options. I told him to select “Large” and he didn’t trust me that it would actually do it until he actually saw the size of the grid increase. My takeaway is that this wording is poor, and could be made way more friendly, especially for folks looking for familiarity coming from a Windows environment.

Those are all the notes I managed to scribble down over the course of our phone call.

I guess there are bugs, papercuts, and wishlist features in there, but I’ll hold off on filing Launchpad bugs unless others think it would actually be useful.

alex

wagging the warthog

[Photo: Serengeti pumbaa (yes, I’m recycling photos…)]

I joined Canonical in April, as my loyal blog readers (best looking people on the planet!) know, and immediately, I was drowning in the deep end. I left the relatively safe and sane sandbox of the kernel and tried to choke down two huge pills at once, one called “userspace” and the other called “packaging”.

The medicine was worth it. In fact, it was a large reason why I wanted to join Canonical in the first place: to gain breadth across the entire software stack and learn — really learn — how an entire computer works.

The other large reason I joined Canonical was because I wanted to contribute more to free software, not less.

And that’s why it hurts to see my company constantly getting sniped at for not contributing back to the greater good of the free software ecosystem. The most commonly voiced complaint is that the amount of Canonical’s code contributed back to the ecosystem is very small percentage-wise1.

Today, I’d like to build a little bit on what my colleague Colin King does and talk a little about what I do2.

One of Colin’s salient points:

Because Canonical has a good relationship with many desktop hardware vendors, I have access to the BIOS engineers who are actually writing this code. I can directly report my findings and suggested fixes to them, and then they re-work the BIOS. The end result is a PC that is sold with a BIOS that works correctly with Linux, and not just Ubuntu – all distributions benefit from this work.

You may not see my fixes appear as commits in the Linux kernel because much of my work is silent and behind the scenes – fixing firmware that make Linux work better.

It’s hard to explain how important the above paragraphs are. In the bad old days, the companies who made your laptop and your desktop designed them for Windows only, and didn’t care at all about Linux. This meant that Linux was always going to be a second-class citizen, having to guess at how Windows did something3, and then trying its best to re-implement it, getting it mostly right, but often, subtly wrong in ways that lead to frustration.

In our brave new world, there’s a growing consumer demand for an alternative to Windows. And most of the time, that alternative is the Ubuntu flavor of Linux. The consumer hardware vendors want Ubuntu because their customers — real human beings — want computers with Ubuntu. And those human beings want Ubuntu because Ubuntu focuses on something that the other large distributions don’t care about as much as we do4: namely, the fit, finish, polish, and usability of Linux on the desktop5.

Of course, if a hardware vendor is actually going to sell a machine with Ubuntu, they’re going to want to make sure it is a high quality product, which means lots of testing. And so, major consumer hardware vendors are starting6 to test with Linux before they ship a product, and not leaving Linux users unsupported in the cold, after the fact, having to guess on how to get their machines working with their operating system of choice.

The fact that hardware vendors are choosing to partner with Canonical a) is a direct payoff of focusing on the Linux desktop, and b) puts Canonical in a unique position to make huge contributions on behalf of the free software ecosystem that never appear in a git repository7.

Enough about Colin’s work. In my own short time with Canonical, I’ve become the lead engineer on an embedded Ubuntu project, with responsibilities ranging from customizing the entire software stack from debian-installer up through Chromium as well as providing technical consulting for our customer on a wide variety of topics.

One little adventure we shared recently was the selection of a USB wifi part for their product. They asked us which chip they should use and we were delighted that they should ask our opinion. Naturally, we recommended a solution that was easiest for us to support which meant a chip that had good, open drivers with an active upstream kernel development activity.

Luckily, our customer also understood the value of embracing upstream and was willing to pay for a slightly more expensive, but much better supported chip rather than the cheapest, but poorly supported chip from a company that has an adversarial relationship with the upstream kernel community.

I’m not going to lie; helping to influence this purchasing decision, directly financially supporting a chip company that has embraced the “upstream first” model of kernel development, is probably one of the best things I’ve done for Linux, more so than the 192 kernel patches I’ve written.

Of course, I want to make upstream code contributions too. I’m a software developer, after all. And that’s why being a part of Canonical’s OEM team is so exciting. We get lots of exposure to what consumer hardware vendors care about, which is a direct measure of what actual consumers care about, which means we get the chance to scratch big community itches. By and large, the vendors’ requirements share common themes: boot faster, run cooler, save battery life, use less disk space. These are generic problems with generalizable solutions, and it’s my intention to push as many as possible upstream in order to benefit the entire free software ecosystem.

So, that’s a little bit on my tiny role in the software ecosystem. And don’t just take my word for it. Check out some of the things that other Canonical employees are working on.


1: The complaints tend to focus on two major chunks of the ecosystem: the kernel and the GNOME core. If I may be permitted to torture a car analogy, the kernel might be the engine, and GNOME would be the body.

2: There’s a lovely bit of self-referentiality here that mirrors our actual work relationship: Colin provides a foundation for my software project; why not also leverage my blog entries off his? ;-)

3: Where “something” might be suspend. Or resume. Or wifi. Or 3D graphics. You know, minor stuff like that.

4: And for good reason, too. Most of the money in Linux these days is in servers. So it makes perfect sense to put your effort where the money is and avoid unnecessary distractions.

5: Going back to the car analogy, Ubuntu focuses on the trim level. It’s like a Camry vs. a Corolla. Who knows, maybe we’ll even be a Lexus one day, but taking the crown from Apple will be hard.

6: We’re certainly not the only Linux distro that does this. Novell has an OEM team as well, getting openSUSE preloaded on machines. But I’m pretty sure Canonical has more partners, and thus influence. Just a guess, no insider knowledge here…

7: Forsooth! The title of my entry explained!
